Dissertations / Theses on the topic 'Automated identification'


Consult the top 50 dissertations / theses for your research on the topic 'Automated identification.'


1

Chen, Chun-Cheng Richard 1977. "Automated cardiovascular system identification." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/81537.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. 64-65).
by Chun-Cheng Chen.
S.B. and M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
2

Wong, Poh Lee. "Automated fish detection and identification." Thesis, La Rochelle, 2015. http://www.theses.fr/2015LAROS009.

Full text
Abstract:
Recognition and identification of fish using computational methods has become an increasingly popular research area. Such methods matter because the information extracted about fish, such as trajectory patterns, position and colour, can indicate whether the fish are healthy or under stress. Existing methods are not sufficiently accurate, particularly when artefacts such as bubbles or brightly lit areas are mistakenly identified as fish; moreover, the recognition and identification rates of existing systems can still be improved. To achieve better recognition and identification rates, an improved scheme combining several methods is constructed. The first step proposes an object tracking method to locate the position of fish in real-time video, including the automated tracking of multiple fish in a single tank. Because the ongoing tracking process can slow detection and identification, especially in a real-time environment, a more accurate fish tracking method is proposed, together with a systematic method to identify and detect fish swimming patterns. In this research, the particle filter algorithm is enhanced and combined with a motion detection algorithm for fish tracking, and a dual-camera system is proposed to obtain a better detection rate. The second step is the design and development of an enhanced method for dynamically cropping and segmenting images in a real-time environment; this method extracts each image of a fish from successive video frames, reducing the tendency to detect the background as an object. The third step is an adapted object characterisation method that uses colour feature descriptors to represent the fish in computational form for further processing. In this study, the GCFD (Generalized Colour Fourier Descriptor) characterisation method is adapted to the environment for more accurate identification of the fish, and a feature matching method based on distance matching is used to match the feature vectors of the segmented images and classify the specific fish in the recorded video. In addition, a real-time prototype system that models fish swimming patterns and incorporates all the proposed methods was developed to evaluate them. The results show that the proposed methods yield a better real-time fish recognition and identification system: the proposed object tracking method improves on the original particle filter, the dynamic cropping and segmentation method achieves an acceptable average accuracy of 84.71% in real time, and the adapted object characterisation method improves on existing colour feature descriptors. The main output of this research could be used by aquaculturists to track and monitor fish in the water computationally in real time, instead of manually.
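The tracking stage of the abstract above rests on a particle filter combined with a motion detector. As a rough illustration of the underlying idea (this is not the thesis's enhanced algorithm; the motion model, noise levels and numbers below are invented for the sketch), a minimal bootstrap particle filter tracking one fish centroid might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_std=2.0, meas_std=5.0):
    """One predict/update/resample cycle of a bootstrap particle filter.

    particles: (N, 2) candidate fish positions in pixels.
    measurement: (2,) observed position, e.g. a centroid reported
    by a motion-detection stage.
    """
    n = len(particles)
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by the Gaussian likelihood of the measurement.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights += 1e-300                      # avoid an all-zero weight vector
    weights /= weights.sum()
    # Resample (multinomial) when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

# Track a simulated fish drifting to the right across a 100x100 px frame.
particles = rng.uniform(0, 100, (500, 2))
weights = np.full(500, 1.0 / 500)
true_pos = np.array([50.0, 50.0])
for t in range(30):
    true_pos = true_pos + np.array([1.0, 0.0])   # fish swims right
    z = true_pos + rng.normal(0, 2.0, 2)         # noisy detection
    particles, weights = particle_filter_step(particles, weights, z)
estimate = np.average(particles, axis=0, weights=weights)
print(estimate)  # should be close to (80, 50)
```

The enhancement in the thesis, combining this with motion detection and a dual-camera setup, addresses exactly the failure mode this sketch has: with a poor likelihood (bubbles, lit areas), the weights attach to the wrong blob.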
3

Waly, Hashem. "Automated Fault Identification - Kernel Trace Analysis." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/28246/28246.pdf.

Full text
4

Silva, Bruno Miguel Santos Antunes. "Automated acoustic identification of bat species." Master's thesis, Universidade de Évora, 2013. http://hdl.handle.net/10174/9101.

Full text
Abstract:
Recent improvements in bat survey methods in Portugal, especially automatic recording stations, have led to an analysis problem due to the amount of data obtained. This thesis develops an automated analysis and classification method for bat echolocation calls: a computer program based on statistical models that uses a reference database of bat calls recorded in Portugal to analyse and classify large numbers of recordings quickly. We recorded 2968 calls from 748 bats of 20 (of the 25) bat species known in mainland Portugal and wrote a program in R that automatically detects bat calls in a recording, isolates the calls from background noise and measures 19 parameters for each call. A two-stage hierarchical call classification scheme was implemented, based on logistic regression models and ensembles of artificial neural networks. In the first stage, calls were classified into six major groups with individual correct classification rates between 93% and 100%. In the second stage, calls were classified into species or species groups with classification rates between 50% and 100%.
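The pipeline above detects calls, isolates them from noise, and measures parameters per call (the thesis implements this in R and measures 19 parameters). As a hedged sketch of just the detection and one measurement, on an invented synthetic recording, with an energy threshold and sampling rate that are assumptions of this example only:

```python
import numpy as np

def detect_calls(signal, win=50, threshold=0.1):
    """Find contiguous high-energy segments (candidate bat calls)
    as (start, end) sample indices, using a smoothed amplitude envelope."""
    envelope = np.convolve(np.abs(signal), np.ones(win) / win, mode="same")
    above = envelope > threshold
    edges = np.flatnonzero(np.diff(above.astype(int))) + 1
    if above[0]:
        edges = np.r_[0, edges]
    if above[-1]:
        edges = np.r_[edges, len(signal)]
    return list(zip(edges[::2], edges[1::2]))

def peak_frequency(segment, fs):
    """Dominant frequency (Hz) of one isolated call, via a windowed FFT."""
    spectrum = np.abs(np.fft.rfft(segment * np.hanning(len(segment))))
    return np.fft.rfftfreq(len(segment), d=1.0 / fs)[np.argmax(spectrum)]

# Synthetic recording: a 5 ms burst at 45 kHz inside 30 ms of silence,
# sampled at 250 kHz (typical for ultrasonic bat detectors).
fs = 250_000
recording = np.zeros(int(0.030 * fs))
t = np.arange(int(0.005 * fs)) / fs
recording[2500:2500 + len(t)] = np.sin(2 * np.pi * 45_000 * t)

calls = detect_calls(recording)
start, end = calls[0]
print(len(calls), peak_frequency(recording[start:end], fs))
```

A real recording would of course need noise-robust thresholds; the two-stage statistical classification then operates on vectors of such per-call parameters.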
5

Moody, Sarah Jean. "Automated Data Type Identification And Localization Using Statistical Analysis Data Identification." DigitalCommons@USU, 2008. https://digitalcommons.usu.edu/etd/9.

Full text
Abstract:
This research presents a new technique called SÁDI (statistical analysis data identification) for identifying the type of data on a digital device and its storage format, based on the values of the bytes representing the data being examined. The research incorporates the automation required for specialized data identification tools to be useful and applicable in real-world applications. The SÁDI technique utilizes the byte values of the data stored on a digital storage device, so its accuracy does not rely on potentially misleading metadata but on the values of the data itself: SÁDI identifies what digitally stored data actually represents. The identification of the relevancy of data often depends on identifying the type of data being examined. Typical file type identification is based upon file extensions or magic keys; such techniques fail in many common forensic analysis scenarios, such as embedded data (as in Microsoft Word files) or file fragments, and they can easily be circumvented, as individuals with nefarious purposes often do.
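A minimal sketch of the kind of byte-value statistics such a technique builds on (the histogram profiles, tiny exemplars and L1 distance here are illustrative assumptions of this example, not the actual SÁDI classifier):

```python
import math
from collections import Counter

def byte_histogram(data: bytes):
    """Normalised 256-bin histogram of byte values."""
    counts = Counter(data)
    n = len(data)
    return [counts.get(b, 0) / n for b in range(256)]

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits/byte: near 0 for constant data, near 8 for
    compressed or encrypted data."""
    return -sum(p * math.log2(p) for p in byte_histogram(data) if p > 0)

def nearest_profile(data: bytes, profiles):
    """Pick the known type whose stored histogram is closest (L1 distance)."""
    h = byte_histogram(data)
    return min(profiles, key=lambda name: sum(
        abs(a - b) for a, b in zip(h, profiles[name])))

# Illustrative profiles built from tiny exemplars; a real tool would be
# trained on large corpora of each data type.
profiles = {
    "ascii_text": byte_histogram(b"the quick brown fox jumps over the lazy dog " * 40),
    "uniform_binary": byte_histogram(bytes(range(256)) * 8),
}
sample = b"data identification by byte values, not extensions " * 30
print(nearest_profile(sample, profiles), round(shannon_entropy(sample), 2))
```

Because the decision uses only the bytes themselves, renaming a file or stripping its extension does not change the verdict, which is the property the abstract emphasises.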
6

Siricharoen, Punnarai. "Plant disease identification using automated image analysis." Thesis, Ulster University, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.725343.

Full text
7

Hetherington, Jorden Hicklin. "Automated lumbar vertebral level identification using ultrasound." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/62945.

Full text
8

Estrada, Vargas Ana Paula. "Black-Box identification of automated discrete event systems." Thesis, Cachan, Ecole normale supérieure, 2013. http://www.theses.fr/2013DENS0006/document.

Full text
Abstract:
This thesis deals with the identification of automated discrete event systems (DES) operating in an industrial context. In particular, the work focuses on systems composed of a plant and a programmable logic controller (PLC) operating in a closed loop; identification consists in obtaining an approximate model, expressed as an interpreted Petri net (IPN), from the observed behaviour given as a single sequence of input-output vectors of the PLC. First, an overview of previous work on identification of DES is presented, together with a comparative study of the main recent approaches. The addressed problem is then stated: important technological characteristics of automated systems and PLCs are detailed. Such characteristics must be considered in solving the identification problem, but they cannot be handled by previous identification techniques. The main contribution of this thesis is the creation of two complementary identification methods. The first method systematically constructs an IPN model from a single input-output sequence representing the observable behaviour of the DES; the obtained IPN models describe in detail the evolution of inputs and outputs during system operation. The second method is conceived for large and complex industrial DES; it is based on a statistical approach yielding compact and expressive IPN models, and consists of two stages. The first stage obtains, from the input-output sequence, the reactive part of the model, composed of observable places and transitions; the second builds the non-observable part, adding places that ensure the reproduction of the observed input-output sequence. The proposed methods, based on polynomial-time algorithms, have been implemented in software tools, which have been tested with input-output sequences obtained from real systems in operation. The tools are described and their application is illustrated through two case studies.
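Both methods start from the observed input-output vector sequence, from which events (rising and falling PLC signals) are derived before any Petri-net structure is built. A toy sketch of that event-extraction step only, with invented signal names and an invented observation sequence (not data or code from the thesis):

```python
def io_events(seq, names=("start", "sensor_a", "valve", "done")):
    """Derive event labels from a PLC input-output vector sequence.

    Each event records which signals rose (0->1) and fell (1->0) between
    consecutive observations; these labelled changes are the observable
    transitions from which an interpreted-Petri-net model would be built.
    """
    events = []
    for prev, cur in zip(seq, seq[1:]):
        rises = [n for n, p, c in zip(names, prev, cur) if c > p]
        falls = [n for n, p, c in zip(names, prev, cur) if c < p]
        if rises or falls:
            events.append(("+" + "+".join(rises) if rises else "")
                          + ("-" + "-".join(falls) if falls else ""))
    return events

# One observed cycle of a hypothetical plant (vectors of PLC signal levels).
seq = [(0, 0, 0, 0), (1, 0, 0, 0), (1, 1, 0, 0),
       (1, 1, 1, 0), (0, 0, 1, 1), (0, 0, 0, 0)]
print(io_events(seq))
```

In the thesis's terms, each distinct event label becomes a candidate transition of the reactive part of the model; the harder second stage, inferring non-observable places so the model reproduces the whole sequence, is well beyond this sketch.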
9

Estrada Vargas, Ana Paula. "Black-Box identification of automated discrete event systems." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00846194.

Full text
10

Duncan-Drake, Natasha. "Exploiting human expert techniques in automated writer identification." Thesis, University of Kent, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365222.

Full text
11

Farr, Ian John. "Automated bioacoustic identification of statutory quarantined insect pests." Thesis, University of York, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.437593.

Full text
12

Waldrop, James Luke 1977. "Local control in a distributed automated identification environment." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86842.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 74-75).
by James Luke Waldrop, III.
M.Eng.
13

Neamatullah, Ishna. "Automated de-identification of free-text medical records." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/41622.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (p. 62-64).
This paper presents a de-identification study at the Harvard-MIT Division of Health Science and Technology (HST) to automatically de-identify confidential patient information from text medical records used in intensive care units (ICUs). Patient records are a vital resource in medical research. Before such records can be made available for research studies, protected health information (PHI) must be thoroughly scrubbed according to HIPAA specifications to preserve patient confidentiality. Manual de-identification of large databases tends to be prohibitively expensive, time-consuming and prone to error, making a computerized algorithm an urgent need for large-scale de-identification. We have developed an automated pattern-matching de-identification algorithm that uses medical and hospital-specific information. The current version of the algorithm has an overall sensitivity of around 0.87 and an approximate positive predictive value of 0.63. In terms of sensitivity, it performs significantly better than a single human de-identifier (0.81) but not quite as well as a consensus of two human de-identifiers (0.94). The algorithm will be published as open-source software, and the de-identified medical records will be incorporated into HST's Multi-Parameter Intelligent Monitoring for Intensive Care (MIMIC II) physiologic database.
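A drastically simplified sketch of pattern-matching de-identification as described above (the patterns, the sample note, and the metric helpers are illustrative assumptions; the actual algorithm also uses medical and hospital-specific dictionaries):

```python
import re

# Illustrative PHI patterns only; a production de-identifier uses many more,
# plus context rules and institution-specific name/ID lists.
PHI_PATTERNS = [
    (r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]"),
    (r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]"),
    (r"\b(?:Dr|Mr|Mrs|Ms)\.\s+[A-Z][a-z]+\b", "[NAME]"),
]

def scrub(text: str) -> str:
    """Replace every matched PHI span with a category placeholder."""
    for pattern, tag in PHI_PATTERNS:
        text = re.sub(pattern, tag, text)
    return text

# The reported figures are sensitivity = TP/(TP+FN) and PPV = TP/(TP+FP).
def sensitivity(tp, fn): return tp / (tp + fn)
def ppv(tp, fp): return tp / (tp + fp)

note = "Dr. Smith saw the patient on 3/14/2005; SSN 123-45-6789 on file."
print(scrub(note))
```

Evaluating such a scrubber against gold-standard human annotations is what yields the sensitivity (0.87) and PPV (0.63) quoted in the abstract.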
by Ishna Neamatullah.
M.Eng.
14

Loomis, Nicholas C. (Nicholas Charles). "Computational imaging and automated identification for aqueous environments." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/67589.

Full text
Abstract:
Thesis (Ph. D.)--Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2011.
"June 2011." Cataloged from PDF version of thesis.
Includes bibliographical references (p. 253-293).
Sampling the vast volumes of the ocean requires tools capable of observing from a distance while retaining the detail necessary for biology and ecology, a task ideal for optical methods. Algorithms that work with existing SeaBED AUV imagery are developed, including habitat classification with bag-of-words models and multi-stage boosting for rockfish detection. Methods for extracting images of fish from videos of long-line operations are demonstrated. A prototype digital holographic imaging device is designed and tested for quantitative in situ microscale imaging. Theory to support the device is developed, including particle noise and the effects of motion. A Wigner-domain model provides optimal settings and optical limits for spherical and planar holographic references. Algorithms to extract the information from real-world digital holograms are created. Focus metrics are discussed, including a novel focus detector using local Zernike moments. Two methods for estimating lateral positions of objects in holograms without reconstruction are presented, by extending a summation kernel to spherical references and by using a local frequency signature from a Riesz transform. A new metric for quickly estimating object depths without reconstruction is proposed and tested. An example application, quantifying oil droplet size distributions in an underwater plume, demonstrates the efficacy of the prototype and algorithms.
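Focus detection is one of the steps above. The thesis proposes a detector based on local Zernike moments; as a much simpler stand-in that illustrates what a focus metric does (array sizes, blur kernel and random test image are arbitrary choices for this sketch), the classical variance-of-Laplacian score:

```python
import numpy as np

def laplacian_variance(img):
    """Classical sharpness score: variance of a discrete 5-point Laplacian.
    Higher means more in-focus; hologram reconstructions at candidate depths
    can be ranked by such a score to find the focal plane."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))                 # stand-in for an in-focus slice
k = np.ones(5) / 5                           # separable 5-tap box blur
blurred = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 0, sharp)
blurred = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, blurred)
print(laplacian_variance(sharp) > laplacian_variance(blurred))  # True
```

Blurring suppresses high spatial frequencies, so the score drops; the appeal of the Zernike-moment detector in the thesis is better robustness to speckle and twin-image noise than such a plain derivative metric.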
by Nicholas C. Loomis.
Ph.D.
15

Ayob, Mohd Zaki. "Automated ladybird identification using neural and expert systems." Thesis, University of York, 2012. http://etheses.whiterose.ac.uk/4290/.

Full text
Abstract:
The concept of automated species identification is relatively recent, with advances driven by improving technology and the taxonomic impediment. This thesis describes investigations into the automated identification of ladybird species from colour images provided by the public, with the eventual aim of implementing an online identification system. Such images pose particularly difficult image processing problems because the insects have a highly domed shape, so not all relevant features (e.g. spots) are visible, or they are fore-shortened. A total of 7 species of ladybird were selected for this work: 6 species native to the UK, plus 3 colour forms of the Harlequin ladybird (Harmonia axyridis), the latter included because of its pest status. The image processing work utilised 6 geometrical features obtained using greyscale operations and 6 colour features obtained using a CIELAB colour space representation. Overall classifier results show that inter-species identification is a success; among other results, the system correctly distinguishes Calvia 14-guttata from Halyzia 16-guttata with 100% accuracy, and Exochomus 4-pustulatus from H. axyridis f. spectabilis with 96.3% accuracy, using Multilayer Perceptron and J48 decision trees. Intra-species identification of H. axyridis shows that H. axyridis f. spectabilis can be identified correctly in up to 72.5% of cases against H. axyridis f. conspicua, and 98.8% against H. axyridis f. succinea. System integration tests show that adding user interaction improves identification between Harlequins and non-Harlequins from 18.8% to 75% accuracy.
16

Buck, Arlene J. "Automated knowledge acquisition tool for identification of generic tasks." Online version of thesis, 1990. http://hdl.handle.net/1850/10577.

Full text
17

Cazares, Shelley Marie. "Automated identification of abnormal patterns in the intrapartum cardiotocogram." Thesis, University of Oxford, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.289363.

Full text
18

Dai, Jing. "Automated identification of insect taxa using structural image processing." Thesis, University of York, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.444260.

Full text
19

Clark, Jessica Celeste. "Automated Identification of Adverbial Clauses in Child Language Samples." Diss., 2009. http://contentdm.lib.byu.edu/ETD/image/etd2803.pdf.

Full text
20

Brown, Brittany Cheree. "Automated Identification of Adverbial Clauses in Child Language Samples." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3404.

Full text
Abstract:
Adverbial clauses are grammatical constructions that are of relevance in both typical language development and impaired language development. In recent years, computer software has been used to assist in the automated analysis of clinical language samples. This software has attempted to accurately identify adverbial clauses with limited success. The present study investigated the accuracy of software for the automated identification of adverbial clauses. Two separate collections of language samples were used. One collection included 10 children with language impairment, with ages ranging from 7;6 to 11;1 (years;months), 10 age-matched peers, and 10 language-matched peers. A second collection contained 30 children ranging from 2;6 to 7;11 in age, with none considered to have language or speech impairments. Language sample utterances were manually coded for the presence of adverbial clauses (both finite and non-finite). Samples were then automatically tagged using the computer software. Results were tabulated and compared for accuracy. ANOVA revealed differences in frequencies of so-adverbial clauses whereas ANACOVA revealed differences in frequencies of both types of finite adverbial clauses. None of the structures were significantly correlated with age; however, frequencies of both types of finite adverbial clauses were correlated with mean length of utterance. Kappa levels revealed that agreement between manual and automated coding was high on both types of finite adverbial clauses.
21

Michaelis, Hali Anne. "Automated Identification of Relative Clauses in Child Language Samples." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/1997.

Full text
Abstract:
Previously existing computer analysis programs have been unable to correctly identify many complex syntactic structures, thus requiring further manual analysis by the clinician. Complex structures, including the relative clause, are of interest in child language samples due to the difference in development between children with and without language impairment. The purpose of this study was to assess the comparability of results from a new automated program, Cx, to results from manual identification of relative clauses. On language samples from 10 children with language impairment (LI), 10 language-matched peers (LA), and 10 chronologically age-matched peers (CA), a computerized analysis based on probabilities of sequences of grammatical markers agreed with a manual analysis with a Kappa of 0.88.
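Agreement between automated and manual coding in these studies is reported as Cohen's kappa, which corrects raw agreement for chance. A small self-contained sketch of the statistic, on invented labels (not data from the study; 1 marks an utterance coded as containing a relative clause):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two label sequences:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both raters labelled independently at their
    # own marginal rates.
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n**2
    return (observed - expected) / (1 - expected)

manual    = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]   # human coder
automated = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]   # hypothetical program output
print(round(cohens_kappa(manual, automated), 2))  # -> 0.78
```

Here raw agreement is 0.90, but chance agreement on these skewed labels is 0.54, so kappa is only 0.78, which is why kappa rather than percent agreement is the right report for rare structures like relative clauses.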
22

Manning, Britney Richey. "Automated Identification of Noun Clauses in Clinical Language Samples." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/2197.

Full text
Abstract:
The identification of complex grammatical structures including noun clauses is of clinical importance because differences in the use of these structures have been found between individuals with and without language impairment. In recent years, computer software has been used to assist in analyzing clinical language samples. However, this software has been unable to accurately identify complex syntactic structures such as noun clauses. The present study investigated the accuracy of new software, called Cx, in identifying finite wh- and that-noun clauses. Two sets of language samples were used. One set included 10 children with language impairment, 10 age-matched peers, and 10 language-matched peers. The second set included 40 adults with mental retardation. Levels of agreement between computerized and manual analysis were similar for both sets of language samples; Kappa levels were high for wh-noun clauses and very low for that-noun clauses.
23

Ehlert, Erika E. "Automated Identification of Relative Clauses in Child Language Samples." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3615.

Full text
Abstract:
Relative clauses are grammatical constructions that are of relevance in both typical and impaired language development. Thus, the accurate identification of these structures in child language samples is clinically important. In recent years, computer software has been used to assist in the automated analysis of clinical language samples. However, this software has had only limited success when attempting to identify relative clauses. The present study explores the development and clinical importance of relative clauses and investigates the accuracy of the software used for automated identification of these structures. Two separate collections of language samples were used. The first collection included 10 children with language impairment, ranging in age from 7;6 to 11;1 (years;months), 10 age-matched peers, and 10 language-matched peers. A second collection contained 30 children considered to have typical speech and language skills and who ranged in age from 2;6 to 7;11. Language samples were manually coded for the presence of relative clauses (including those containing a relative pronoun, those without a relative pronoun and reduced relative clauses). These samples were then tagged using computer software and finally tabulated and compared for accuracy. ANCOVA revealed a significant difference in the frequency of relative clauses containing a relative pronoun but not for those without a relative pronoun nor for reduced relative clauses. None of the structures were significantly correlated with age; however, frequencies of both relative clauses with and without relative pronouns were correlated with mean length of utterance. Kappa levels revealed that agreement between manual and automated coding was relatively high for each relative clause type and highest for relative clauses containing relative pronouns.
APA, Harvard, Vancouver, ISO, and other styles
24

Mohammed, Hussam J. "Automated identification of digital evidence across heterogeneous data resources." Thesis, University of Plymouth, 2018. http://hdl.handle.net/10026.1/12839.

Full text
Abstract:
Digital forensics has become an increasingly important tool in the fight against cyber and computer-assisted crime. However, with an increasing range of technologies at people's disposal, investigators find themselves having to process and analyse many systems with large volumes of data (e.g., PCs, laptops, tablets, and smartphones) within a single case. Unfortunately, current digital forensic tools operate in an isolated manner, investigating systems and applications individually. The heterogeneity and volume of evidence place time constraints and a significant burden on investigators. Examples of heterogeneity include applications such as messaging (e.g., iMessenger, Viber, Snapchat, and WhatsApp), web browsers (e.g., Firefox and Google Chrome), and file systems (e.g., NTFS, FAT, and HFS). Being able to analyse and investigate evidence from across devices and applications in a universal and harmonized fashion would enable investigators to query all data at once. In addition, successfully prioritizing evidence and reducing the volume of data to be analysed reduces the time taken and cognitive load on the investigator. This thesis focuses on the examination and analysis phases of the digital investigation process. It explores the feasibility of dealing with big and heterogeneous data sources in order to correlate the evidence from across these evidential sources in an automated way. Therefore, a novel approach was developed to solve the heterogeneity issues of big data using three developed algorithms. The three algorithms include the harmonising, clustering, and automated identification of evidence (AIE) algorithms. The harmonisation algorithm seeks to provide an automated framework to merge similar datasets by characterising similar metadata categories and then harmonising them in a single dataset. 
This algorithm overcomes heterogeneity issues and makes the examination and analysis easier by analysing and investigating the evidential artefacts across devices and applications based on the categories to query data at once. Based on the merged datasets, the clustering algorithm is used to identify the evidential files and isolate the non-related files based on their metadata. Afterwards, the AIE algorithm tries to identify the cluster holding the largest number of evidential artefacts through searching based on two methods: criminal profiling activities and some information from the criminals themselves. Then, the related clusters are identified through timeline analysis and a search of associated artefacts of the files within the first cluster. A series of experiments using real-life forensic datasets were conducted to evaluate the algorithms across five different categories of datasets (i.e., messaging, graphical files, file system, internet history, and emails), each containing data from different applications across different devices. The results of the characterisation and harmonisation process show that the algorithm can merge all fields successfully, with the exception of some binary-based data found within the messaging datasets (contained within Viber and SMS). The error occurred because of a lack of information for the characterisation process to make a useful determination. However, on further analysis, it was found that the error had a minimal impact on subsequent merged data. The results of the clustering process and AIE algorithm showed the two algorithms can collaborate and identify more than 92% of evidential files.
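The harmonisation step can be pictured as mapping each application's metadata fields onto shared categories so that records from different sources become queryable as one dataset. A toy sketch; the field names and mapping below are hypothetical illustrations, not the thesis's actual schema:

```python
# Hypothetical source-field -> harmonised-category mapping for two
# messaging sources (e.g. a Viber export and an SMS dump).
FIELD_MAP = {
    "msg_time": "timestamp", "date_sent": "timestamp",
    "sender": "party",       "from_addr": "party",
    "body": "content",       "text": "content",
}

def harmonise(record):
    """Rename a raw record's fields into the shared categories,
    keeping any unmapped fields under their original names."""
    return {FIELD_MAP.get(k, k): v for k, v in record.items()}

viber = {"msg_time": "2018-03-01T10:00", "sender": "alice", "body": "hi"}
sms   = {"date_sent": "2018-03-01T10:05", "from_addr": "bob", "text": "hello"}
merged = [harmonise(viber), harmonise(sms)]  # one queryable dataset
```

Once every record shares the same category names, clustering on metadata and cross-source timeline queries become straightforward, which is the point of the harmonisation algorithm described above.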
APA, Harvard, Vancouver, ISO, and other styles
25

Chen, Haijian. "Automated peak identification for time -of -flight mass spectroscopy." W&M ScholarWorks, 2006. https://scholarworks.wm.edu/etd/1539623489.

Full text
Abstract:
The high throughput capabilities of protein mass fingerprint measurements have made mass spectrometry one of the standard tools for proteomic research, such as biomarker discovery. However, the analysis of large raw data sets produced by the time-of-flight (TOF) spectrometers creates a bottleneck in the discovery process. One specific challenge is the preprocessing and identification of mass peaks corresponding to important biological molecules. The accuracy of mass assignment is another limitation when comparing mass fingerprints with databases. We have developed an automated peak picking algorithm based on a maximum likelihood approach that effectively and efficiently detects peaks in a time-of-flight secondary ion mass spectrum. This approach produces maximum likelihood estimates of peak positions and amplitudes, and simultaneously develops estimates of the uncertainties in each of these quantities. We demonstrate that a Poisson process is involved for time-of-flight secondary ion mass spectrometry (TOF-SIMS) and the algorithm takes the character of the Poisson noise into account. Though this peak picking algorithm was initially developed for TOF-SIMS spectra, it can be extended to other types of TOF spectra as soon as the correct noise characteristics are considered. We have developed a peak alignment procedure that aligns peaks in different spectra. This is a crucial step for multivariate analysis. Multivariate analysis is often used to distill useful information from complex spectra. We have designed a TOF-SIMS experiment that consists of various mixtures of three bio-molecules as a model for more complicated biomarker discovery. The peak picking algorithm is applied to the collected spectra. The algorithm detects peaks in the spectra repeatably and accurately. We also show that there are patterns in the spectra of pure biomolecule samples. 
Furthermore, we show it is possible to infer the concentration ratios in the mixture samples by checking the strength of the patterns.
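As an illustration of maximum-likelihood peak picking under Poisson noise (a simplified single-Gaussian sketch, not the thesis's algorithm): for a fixed peak shape, the ML amplitude of the model mu = A * shape has the closed form A = sum(y) / sum(shape), and the peak centre can be chosen by scanning candidates for the highest Poisson log-likelihood.

```python
import numpy as np

def poisson_loglik(y, mu):
    # Poisson log-likelihood up to the constant log(y!) term.
    return float(np.sum(y * np.log(mu) - mu))

def fit_peak(y, width, centers):
    """Scan candidate centres; for each, take the closed-form ML
    amplitude and keep the centre with the highest log-likelihood."""
    t = np.arange(len(y), dtype=float)
    best_c, best_a, best_ll = None, None, -np.inf
    for c in centers:
        shape = np.exp(-0.5 * ((t - c) / width) ** 2)
        amp = y.sum() / shape.sum()
        # Tiny floor keeps log() finite in empty channels far from the peak.
        ll = poisson_loglik(y, amp * shape + 1e-9)
        if ll > best_ll:
            best_c, best_a, best_ll = c, amp, ll
    return best_c, best_a

# Synthetic single-peak "spectrum" with Poisson counting noise.
rng = np.random.default_rng(0)
t = np.arange(200, dtype=float)
truth = 50.0 * np.exp(-0.5 * ((t - 80.0) / 3.0) ** 2)
y = rng.poisson(truth).astype(float)
c_hat, a_hat = fit_peak(y, width=3.0, centers=range(60, 100))
```

Because the noise is Poisson rather than Gaussian, the likelihood weights low-count channels correctly, which is the character-of-the-noise point made in the abstract; a least-squares fit would over-weight them.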
APA, Harvard, Vancouver, ISO, and other styles
26

Goss, Ryan Gavin. "APIC: A method for automated pattern identification and classification." Doctoral thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/27025.

Full text
Abstract:
Machine Learning (ML) is a transformative technology at the forefront of many modern research endeavours. The technology is generating a tremendous amount of attention from researchers and practitioners, providing new approaches to solving complex classification and regression tasks. While concepts such as Deep Learning have existed for many years, the computational power for realising the utility of these algorithms in real-world applications has only recently become available. This dissertation investigated the efficacy of a novel, general method for deploying ML in a variety of complex tasks, where best feature selection, data-set labelling, model definition and training processes were determined automatically. Models were developed in an iterative fashion, evaluated using both training and validation data sets. The proposed method was evaluated using three distinct case studies, describing complex classification tasks often requiring significant input from human experts. The results achieved demonstrate that the proposed method compares with, and often outperforms, less general, comparable methods designed specifically for each task. Feature selection, data-set annotation, model design and training processes were optimised by the method, where less complex, comparatively accurate classifiers with lower dependency on computational power and human expert intervention were produced. In chapter 4, the proposed method demonstrated improved efficacy over comparable systems, automatically identifying and classifying complex application protocols traversing IP networks. In chapter 5, the proposed method was able to discriminate between normal and anomalous traffic, maintaining accuracy in excess of 99%, while reducing false alarms to a mere 0.08%. Finally, in chapter 6, the proposed method discovered more optimal classifiers than those implemented by comparable methods, with classification scores rivalling those achieved by state-of-the-art systems. 
The findings of this research concluded that developing a fully automated, general method, exhibiting efficacy in a wide variety of complex classification tasks with minimal expert intervention, was possible. The method and various artefacts produced in each case study of this dissertation are thus significant contributions to the field of ML.
APA, Harvard, Vancouver, ISO, and other styles
27

Rothman, Keith Eric. "Validation of Linearized Flight Models using Automated System-Identification." DigitalCommons@CalPoly, 2009. https://digitalcommons.calpoly.edu/theses/81.

Full text
Abstract:
Optimization-based flight control design tools depend on automatic linearization tools, such as Simulink®'s LINMOD, to extract linear models. In order to ensure the usefulness and correctness of the generated linear model, this linearization must be accurate, so a method of independently verifying the linearized model is needed. This thesis covers the automation of a system identification tool, CIFER®, for use as a verification tool integrated with CONDUIT®, an optimization-based design tool. Several test cases are built up to demonstrate the accuracy of the verification tool with respect to analytical results and matches with LINMOD. Several common nonlinearities are tested, comparing the results from CIFER and LINMOD, as well as analytical results where possible. The CIFER results show excellent agreement with analytical results. LINMOD treated most nonlinearities as a unit gain, but some nonlinearities linearized to zero, causing the linearized model to omit that path. Although these effects are documented within Simulink, their presence may be missed by a user. The verification tool is successful in identifying these problems when present. A section is dedicated to the diagnosis of linearization errors, suggesting solutions where possible.
APA, Harvard, Vancouver, ISO, and other styles
28

Cannon, Robert William. "Automated Spectral Identification of Materials using Spectral Identity Mapping." Cleveland State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=csu1377031729.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Croft, David. "Semi-automated co-reference identification in digital humanities collections." Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/10491.

Full text
Abstract:
Locating specific information within museum collections represents a significant challenge for collection users. Even when the collections and catalogues exist in a searchable digital format, formatting differences and the imprecise nature of the information to be searched mean that information can be recorded in a large number of different ways. This variation exists not just between different collections, but also within individual ones. This means that traditional information retrieval techniques are badly suited to the challenges of locating particular information in digital humanities collections and searching, therefore, takes an excessive amount of time and resources. This thesis focuses on a particular search problem, that of co-reference identification. This is the process of identifying when the same real world item is recorded in multiple digital locations. In this thesis, a real world example of a co-reference identification problem for digital humanities collections is identified and explored. In particular the time consuming nature of identifying co-referent records. In order to address the identified problem, this thesis presents a novel method for co-reference identification between digitised records in humanities collections. Whilst the specific focus of this thesis is co-reference identification, elements of the method described also have applications for general information retrieval. The new co-reference method uses elements from a broad range of areas including; query expansion, co-reference identification, short text semantic similarity and fuzzy logic. The new method was tested against real world collections information, the results of which suggest that, in terms of the quality of the co-referent matches found, the new co-reference identification method is at least as effective as a manual search. The number of co-referent matches found however, is higher using the new method. 
The approach presented here is capable of searching collections stored using differing metadata schemas. More significantly, the approach is capable of identifying potential co-reference matches despite the highly heterogeneous and syntax-independent nature of the Gallery, Library, Archive and Museum (GLAM) search space and the photo-history domain in particular. The most significant benefit of the new method is, however, that it requires comparatively little manual intervention. A co-reference search using it has, therefore, significantly lower person-hour requirements than a manually conducted search. In addition to the overall co-reference identification method, this thesis also presents:
• A novel and computationally lightweight short text semantic similarity metric. This new metric has a significantly higher throughput than the current prominent techniques but a negligible drop in accuracy.
• A novel method for comparing photographic processes in the presence of variable terminology and inaccurate field information. This is the first computational approach to do so.
APA, Harvard, Vancouver, ISO, and other styles
30

Cook, Thomas Charles. "The development of automated palmprint identification using major flexion creases." Thesis, University of Wolverhampton, 2012. http://hdl.handle.net/2436/241851.

Full text
Abstract:
Palmar flexion crease matching is a method for verifying or establishing identity. New methods of palmprint identification, that complement existing identification strategies, or reduce analysis and comparison times, will benefit palmprint identification communities worldwide. To this end, this thesis describes new methods of manual and automated palmar flexion crease identification, that can be used to identify palmar flexion creases in online palmprint images. In the first instance, a manual palmar flexion crease identification and matching method is described, which was used to compare palmar flexion creases from 100 palms, each modified 10 times to mimic some of the types of alterations that can be found in crime scene palmar marks. From these comparisons, using manual palmar flexion crease identification, results showed that when labelled within 10 pixels, or 3.5 mm, of the palmar flexion crease, a palmprint image can be identified with a 99.2% genuine acceptance rate and a 0% false acceptance rate. Furthermore, in the second instance, a new method of automated palmar flexion crease recognition, that can be used to identify palmar flexion creases in online palmprint images, is described. A modified internal image seams algorithm was used to extract the flexion creases, and a matching algorithm, based on kd-tree nearest neighbour searching, was used to calculate the similarity between them. Results showed that in 1000 palmprint images from 100 palms, when compared to manually identified palmar flexion creases, a 100% genuine acceptance rate was achieved with a 0.0045% false acceptance rate. Finally, to determine if automated palmar flexion crease recognition can be used as an effective method of palmprint identification, palmar flexion creases from two online palmprint image data sets, containing images from 100 palms and 386 palms respectively, were automatically extracted and compared. 
In the first data set, that is, for images from 100 palms, an equal error rate of 0.3% was achieved. In the second data set, that is, for images from 386 palms, an equal error rate of 0.415% was achieved.
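The kd-tree matching idea can be sketched as scoring what fraction of the points on one extracted crease have a near neighbour on the other; the tolerance, point sets and scoring rule below are hypothetical illustrations, not the thesis's matching algorithm:

```python
import numpy as np
from scipy.spatial import cKDTree

def crease_similarity(points_a, points_b, tol=10.0):
    """Fraction of crease points in A whose nearest neighbour in B
    lies within `tol` pixels (a kd-tree makes the queries fast)."""
    tree = cKDTree(points_b)
    d, _ = tree.query(points_a)
    return float(np.mean(d <= tol))

# Hypothetical crease polylines sampled as (x, y) pixel coordinates.
a = np.array([[10.0, 20.0], [12.0, 25.0], [15.0, 31.0], [20.0, 40.0]])
b = a + np.array([1.5, -2.0])   # same crease, slightly shifted
c = a + np.array([80.0, 60.0])  # a different crease entirely
print(crease_similarity(a, b), crease_similarity(a, c))  # prints 1.0 0.0
```

A symmetric score would average the A-to-B and B-to-A directions; the 10-pixel tolerance echoes the 10-pixel (3.5 mm) labelling tolerance reported in the abstract.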
APA, Harvard, Vancouver, ISO, and other styles
31

Zhang, Sijie. "Integrating safety and BIM: automated construction hazard identification and prevention." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52235.

Full text
Abstract:
Safety of workers in the construction environment remains one of the greatest challenges faced by the construction industry today. Activity-based hazard identification and prevention is limited because construction safety information and knowledge tends to be scattered and fragmented throughout safety regulations, accident records, and experience. With the advancement of information technology in the building and construction industry, a missing link between effective activity-level construction planning and Building Information Modeling (BIM) becomes more evident. The objectives of this study are 1) to formalize the safety management knowledge and to integrate safety aspects into BIM, and 2) to facilitate activity-based hazard identification and prevention in construction planning. To start with, a Construction Safety Ontology is created to organize, store, and re-use construction safety knowledge. Secondly, activity-based workspace visualization and congestion identification methods are investigated to study the hazards caused by the interaction between activities. Computational algorithms are created to process and retrieve activity-based workspace parameters through location tracking data of workers collected by remote sensing technology. Lastly, by introducing workspace parameters into ontology and connecting the ontology with BIM, automated workspace analysis along with job hazard analysis are explored. Results indicate that potential safety hazards can be identified, recorded, analyzed, and prevented in BIM. This study integrates aspects of construction safety into current BIM workflow, which enables performing hazard identification and prevention early in the project planning phase.
APA, Harvard, Vancouver, ISO, and other styles
32

Osareh, Alireza. "Automated identification of diabetic retinal exudates and the optic disc." Thesis, University of Bristol, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.400400.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Ducharme, Daniel N. "Machine Learning for the Automated Identification of Cyberbullying and Cyberharassment." Thesis, University of Rhode Island, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10259474.

Full text
Abstract:

Cyberbullying and cyberharassment are a growing issue that is straining the resources of human moderation teams. This is leading to an increase in suicide among the affected teens who are unable to get away from the harassment. By utilizing n-grams and support vector machines, this research was able to classify YouTube comments with an overall accuracy of 81.8%. This increased to 83.9% when utilizing retraining that added the misclassified comments to the training set. To accomplish this, a 350-comment balanced training set, with 7% of the highest-entropy length-3 n-grams, and a polynomial kernel with a C error factor of 1, a degree of 2, and a Coef0 of 1 were used in the LibSVM implementation of the support vector machine algorithm. The 350 comments were also trimmed with a k-nearest neighbor algorithm where k was set to 4% of the training set size. With the algorithm designed to be heavily multi-threaded and capable of being run across multiple servers, the system was able to achieve that accuracy while classifying 3 comments per second, running on consumer-grade hardware over Wi-Fi.
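The described classifier (n-gram features feeding a degree-2 polynomial-kernel SVM with C = 1 and Coef0 = 1) can be sketched with scikit-learn in place of raw LibSVM; the comments below are invented stand-ins for the YouTube data, and the entropy-based feature trimming is omitted:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy stand-ins for labelled comments (1 = harassing, 0 = benign).
comments = ["you are so stupid and ugly", "nobody likes you loser",
            "great video thanks for sharing", "loved this tutorial",
            "go away you pathetic idiot", "what camera do you use"]
labels = [1, 1, 0, 0, 1, 0]

# N-grams up to length 3 into a polynomial-kernel SVM, mirroring the
# abstract's LibSVM settings (C=1, degree=2, coef0=1).
clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 3)),
    SVC(kernel="poly", degree=2, coef0=1, C=1),
)
clf.fit(comments, labels)
pred = clf.predict(["you stupid loser", "thanks for the video"])
```

The retraining step the abstract mentions amounts to appending misclassified comments to `comments`/`labels` and calling `fit` again.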

APA, Harvard, Vancouver, ISO, and other styles
34

Dharmaraj, Karthick. "Automated freeform assembly of threaded fasteners." Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/19624.

Full text
Abstract:
Over the past two decades, a major part of the manufacturing and assembly market has been driven by its customer requirements. Increasing customer demand for personalised products creates the demand for smaller batch sizes, shorter production times, lower costs, and the flexibility to produce families of products - or different parts - with the same sets of equipment. Consequently, manufacturing companies have deployed various automation systems and production strategies to improve their resource efficiency and move towards right-first-time production. However, many of these automated systems, which are involved with robot-based, repeatable assembly automation, require component-specific fixtures for accurate positioning and extensive robot programming, to achieve flexibility in their production. Threaded fastening operations are widely used in assembly. In high-volume production, the fastening processes are commonly automated using jigs, fixtures, and semi-automated tools. This form of automation delivers reliable assembly results at the expense of flexibility and requires component variability to be adequately controlled. On the other hand, in low-volume, high-value manufacturing, fastening processes are typically carried out manually by skilled workers. This research is aimed at addressing the aforementioned issues by developing a freeform automated threaded fastener assembly system that uses 3D visual guidance. The proof-of-concept system developed focuses on picking up fasteners from clutter, identifying a hole feature in an imprecisely positioned target component and carrying out torque-controlled fastening. This approach has achieved flexibility and adaptability without the use of dedicated fixtures and robot programming. This research also investigates and evaluates different 3D imaging technology to identify the suitable technology required for fastener assembly in a non-structured industrial environment. 
The proposed solution utilises the commercially available technologies to enhance the precision and speed of identification of components for assembly processes, thereby improving and validating the possibility of reliably implementing this solution for industrial applications. As a part of this research, a number of novel algorithms are developed to robustly identify assembly components located in a random environment by enhancing the existing methods and technologies within the domain of the fastening processes. A bolt identification algorithm was developed to identify bolts located in a random clutter by enhancing the existing surface-based matching algorithm. A novel hole feature identification algorithm was developed to detect threaded holes and identify their size and location in 3D. The developed bolt and feature identification algorithms are robust and have the sub-millimetre accuracy required to perform successful fastener assembly in industrial conditions. In addition, the processing time required for these identification algorithms - to identify and localise bolts and hole features - is less than a second, thereby increasing the speed of fastener assembly.
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Haijian. "Automated Treetop Detection and Tree Crown Identification Using Discrete-return Lidar Data." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc271858/.

Full text
Abstract:
Accurate estimates of tree and forest biomass are essential for a wide range of applications. Automated treetop detection and tree crown discrimination using LiDAR data can greatly facilitate forest biomass estimation. Previous work has focused on homogeneous or single-species forests, while few studies have focused on mixed forests. In this study, a new method for treetop detection is proposed in which the treetop is the cluster center of selected points rather than the highest point. Based on treetop detection, tree crowns are discriminated through comparison of three-dimensional shape signatures. The methods are first tested using simulated LiDAR point clouds for trees, and then applied to real LiDAR data from the Soquel Demonstration State Forest, California, USA. Results from both simulated and real LiDAR data show that the proposed method has great potential for effective detection of treetops and discrimination of tree crowns.
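The core idea, taking the treetop as the centre of a cluster of high returns rather than the single highest return, can be sketched as follows; the cone-shaped synthetic crown and the height tolerance `dh` are illustrative assumptions, not the study's data or parameters:

```python
import numpy as np

def treetop(points, dh=1.0):
    """Treetop as the centroid of the returns whose height is within
    `dh` of the crown maximum, rather than the single highest point,
    which makes the estimate robust to noisy individual returns."""
    z = points[:, 2]
    top = points[z >= z.max() - dh]
    return top.mean(axis=0)

# Hypothetical crown: a noisy cone of LiDAR returns centred at (5, 5).
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(500, 2))
r = np.linalg.norm(xy - 5.0, axis=1)
z = np.clip(15.0 - 2.0 * r, 0, None) + rng.normal(0, 0.3, 500)
points = np.column_stack([xy, z])
apex = treetop(points, dh=1.0)  # (x, y, z) estimate near the true apex
```

With a noisy sensor the single highest return can sit off-centre on a branch tip; averaging the near-top cluster pulls the estimate back toward the true apex.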
APA, Harvard, Vancouver, ISO, and other styles
36

Kit, Oleksandr. "Automated identification of slums in Hyderabad using high resolution satellite imagery." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2014. http://dx.doi.org/10.18452/16911.

Full text
Abstract:
Slums bilden einen wesentlichen Bestandteil vieler Stadtregionen des globalen Südens, wobei Indien die höchste Zahl an Slumbewohnern beherbergt. Die internationalen Unterschiede in der Definition des Begriffs "Slum" sowie Mängel bei der Datenerfassung haben eine hohe Fehlerwahrscheinlichkeit bei der Aufnahme von Slumbevölkerungszahlen und -standorten in globalem, nationalem und städtischem Massstab zur Folge. Das Hauptziel dieser Dissertation besteht darin, eine Vorgehensweise zur automatischen Erkennung von Slums mit Hilfe von hochauflösenden Satellitenbildern zu entwickeln, und diese Methode in der indischen Metropole Hyderabad anzuwenden. Diese Arbeit entwickelt ein mehrstufiges Satellitenbildbearbeitungsverfahren, welches in der Lage ist, eine schnelle Slumerkennung in Hyderabad durchzuführen. Das Verfahren beruht auf dem Verhältnis zwischen einem bestimmten Bereich räumlicher Heterogenität, ausgedrückt durch Lakunarität, und der Wahrscheinlichkeit, dass die Struktur eines Gebietes der Oberflächenstruktur eines Slums entspricht. Die Anwendung der hier vorgeschlagenen Methode produzierte zum ersten Mal einen plausiblen, räumlich kohärenten und politisch unverzerrten Datensatz über Slumstandorte und Slumbevölkerung für das gesamte Stadtgebiet von Hyderabad. Die Ergebnisse verdeutlichen die Unstimmigkeiten bei der bisherigen Erfassung der Slumbevölkerungszahlen sowie bei der offiziellen Anerkennung von Slums. Die multitemporale Satellitenbildauswertung zeigt ein Wachstum der Slumbevölkerungszahlen im Grossraum Hyderabad an und bietet Einblick in den zeitlich-räumlichen Slumwachstumprozess zwischen den Jahren 2003 und 2010. Diese Dissertation stellt einen wissenschaftlichen Beitrag zu den Themen Fernerkundung der Siedlungen und fortgeschrittene Bildbearbeitungsmethoden dar und bietet den unterschiedlichsten Parteien, für welche Slumdaten von Bedeutung sind, ein wichtiges Instrument.
Slums are a pervasive feature of many urban regions in the global South, with India hosting the largest number of the global slum dwellers. Differences in slum definitions across countries and deficiencies of data collection are the cause of a large error margin in establishing slum population numbers and slum locations at a global, national and city scale. The main goal of this thesis is to develop an approach to automated identification of slums using sub-metre resolution satellite imagery, and to apply the new method to the slum-plagued South Indian megacity of Hyderabad. This dissertation establishes a multi-step satellite imagery analysis framework, which is capable of performing rapid identification of slums in Hyderabad without extensive ground surveys or manual image analysis. It is based on the relation of a specific range of spatial heterogeneity expressed through lacunarity to the probability of an area to be morphologically similar to the surface texture of a slum. The application of the proposed method has for the first time produced plausible, spatially coherent and politically unbiased slum coverage and slum population datasets for the whole of Hyderabad. The results expose inconsistencies in slum population data reporting and the slum recognition process currently in place in the city. The analysis of multitemporal remote sensing data indicates a considerable slum population increase in the metropolitan area of Hyderabad and provides an insight into spatiotemporal slum development patterns between the years 2003 and 2010. This dissertation contributes to the body of knowledge on remote sensing of human settlements and advanced image processing techniques and presents an essential instrument to be used by a the United Nations bodies, national and city governments as well as non-governmental organisations engaged in slum-related work.
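Lacunarity, the spatial-heterogeneity measure underlying the slum detection, can be computed with a gliding-box sketch: for box mass S (the sum of pixel values under the box at each position), Lambda = var(S)/mean(S)^2 + 1, so homogeneous textures score near 1 and gappy ones score higher. The box size and test images below are illustrative, not the thesis's parameters:

```python
import numpy as np

def lacunarity(image, box):
    """Gliding-box lacunarity: slide a box x box window over the image,
    record its mass (pixel sum) at every position, and return
    var(mass)/mean(mass)^2 + 1."""
    h, w = image.shape
    masses = [image[i:i + box, j:j + box].sum()
              for i in range(h - box + 1)
              for j in range(w - box + 1)]
    m = np.asarray(masses, dtype=float)
    return float(m.var() / m.mean() ** 2 + 1.0)

# A homogeneous texture has lower lacunarity than a gappy one.
rng = np.random.default_rng(2)
uniform = np.ones((32, 32))
gappy = (rng.random((32, 32)) < 0.1).astype(float)
print(lacunarity(uniform, 4), lacunarity(gappy, 4))
```

Classifying image regions by whether their lacunarity falls in a range typical of slum rooftop texture is the basic mechanism the abstract describes.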
APA, Harvard, Vancouver, ISO, and other styles
37

Sun, Yao 1962. "Information exchange between medical databases through automated identification of concept equivalence." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8064.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2002.
"February 2002."
Includes bibliographical references (p. 123-127).
The difficulty of exchanging information between heterogeneous medical databases remains one of the chief obstacles in achieving a unified patient medical record. Although methods have been developed to address differences in data formats, system software, and communication protocols, automated data exchange between disparate systems still remains an elusive goal. The Medical Information Acquisition and Transmission Enabler (MEDIATE) system identifies semantically equivalent concepts between databases to facilitate information exchange. MEDIATE employs a semantic network representation to model underlying native databases and to serve as an interface for database queries. This representation generates a semantic context for data concepts that can subsequently be exploited to perform automated concept matching between disparate databases. To test the feasibility of this system, medical laboratory databases from two different institutions were represented within MEDIATE and automated concept matching was performed. The experimental results show that concepts that existed in both laboratory databases were always correctly recognized as candidate matches.
In addition, concepts which existed in only one database could often be matched with more "generalized" concepts in the other database that could still provide useful information. The architecture of MEDIATE offers advantages in system scalability and robustness. Since concept matching is performed automatically, the only work required to enable data exchange is construction of the semantic network representation. No pre-negotiation is required between institutions to identify data that is compatible for exchange, and there is no additional overhead to add more databases to the exchange network. Because the concept matching occurs dynamically at the time of information exchange, the system is robust to modifications in the underlying native databases as long as the semantic network representations are appropriately updated.
by Yao Sun.
Ph.D.
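The exact-or-generalized concept matching described in this abstract can be sketched in a few lines. This is a minimal illustration only: the concept names and the simple "is-a" hierarchy below are invented, and MEDIATE's actual semantic network is far richer than a parent-pointer dictionary.

```python
# Hypothetical shared hierarchy: child concept -> more general parent concept.
IS_A = {
    "serum_sodium": "electrolyte_test",
    "serum_potassium": "electrolyte_test",
    "electrolyte_test": "laboratory_test",
    "hemoglobin": "hematology_test",
    "hematology_test": "laboratory_test",
}

def ancestors(concept):
    """Yield the concept itself, then successively more general parents."""
    while concept is not None:
        yield concept
        concept = IS_A.get(concept)

def match_concept(query, target_db):
    """Return (match, kind): an exact match if the target database holds the
    same concept, otherwise the nearest more-general concept it does hold."""
    for candidate in ancestors(query):
        if candidate in target_db:
            kind = "exact" if candidate == query else "generalized"
            return candidate, kind
    return None, "unmatched"

# Concepts held by a second (hypothetical) institution's database.
db_b = {"serum_sodium", "hematology_test"}

print(match_concept("serum_sodium", db_b))  # shared concept -> exact match
print(match_concept("hemoglobin", db_b))    # only a more general concept exists
```

The second call shows the "generalized" matching the abstract reports: `hemoglobin` is absent from the target database, so its parent `hematology_test` is offered as a still-useful match.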
APA, Harvard, Vancouver, ISO, and other styles
38

Xin, Zhu. "Improvement of Automated Guided Vehicle's image recognition : Object detection and identification." Thesis, Högskolan Väst, Avdelningen för produktionssystem (PS), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-12027.

Full text
Abstract:
Automated Guided Vehicles (AGVs) are a kind of material-conveying equipment widely used in modern manufacturing systems. [1] They carry goods between workshops along designated paths, and the ability to localize themselves and recognize the surrounding environment is an essential technology. AGV navigation draws on several technologies such as fuzzy theory, neural networks, and other intelligent techniques. Among them, visual navigation is one of the newer approaches: its path layout is easy to maintain and it can identify a variety of road signs. Compared with traditional methods, this approach has better flexibility and robustness, since it can recognize more than one path branch with high anti-jamming capability. Recognizing the environment from imagery can enhance the safety and dependability of an AGV, make it move intelligently, and open broader prospects for it. University West has a Patrolbot, an AGV robot with basic functions. The task is to enhance its vision-analysis ability to make it more practical and flexible. The project adds object detection, object recognition, and object localization functions to the Patrolbot. This thesis project develops methods based on image recognition, deep learning, machine vision, Convolutional Neural Networks, and related technologies. In this project the Patrolbot serves as a platform to demonstrate the result; the same kind of program can also be used on other machines. This report generally describes methods of navigation, image segmentation, and object recognition. After analyzing the different methods of image recognition, it becomes clear that neural networks have advantages for this task: they can reduce the number of parameters and shorten training and analysis time. The Convolutional Neural Network is therefore introduced in detail.
After that, the way to achieve image recognition using a convolutional neural network is presented, and in order to recognize several objects at the same time, an image segmentation method is also presented. Furthermore, to make this image recognition process widely applicable, the ability to transfer learning becomes important; the method of transfer learning is therefore presented to meet customized requirements.
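The core operation of the convolutional layers this abstract discusses is a 2-D convolution (in practice, cross-correlation) of an image with a small filter. The pure-Python sketch below is for illustration only; real CNN frameworks vectorize this and learn whole banks of filters, whereas here a fixed Sobel-style edge filter and a tiny synthetic image are assumed.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation of a 2-D list `image` with `kernel`."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            # Multiply the kernel against the window anchored at (i, j).
            for u in range(kh):
                for v in range(kw):
                    acc += image[i + u][j + v] * kernel[u][v]
            row.append(acc)
        out.append(row)
    return out

# A vertical step edge and a Sobel-style vertical-edge filter.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

print(conv2d(image, sobel_x))  # strong positive response along the edge
```

A CNN stacks many such filtered maps, passes them through nonlinearities and pooling, and learns the filter weights from data instead of fixing them by hand.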
APA, Harvard, Vancouver, ISO, and other styles
39

Gatsheni, B. N., and F. Aghdasi. "The application of Radio Frequency Identification (RFID) in speeding up the flow of materials in an industrial manufacturing process." Interim : Interdisciplinary Journal, Vol 6, Issue 1: Central University of Technology Free State Bloemfontein, 2007. http://hdl.handle.net/11462/396.

Full text
Abstract:
Published Article
RFID can work in conjunction with sensors in material handling, especially on a conveyor belt. A dozen differently graded, tagged products can be picked up by the RFID system in real time and transported to respective chutes and into automatic guided vehicles (AGVs) for transportation to specific storage locations. The development of this system is now at an advanced stage. Our predictions to date show that applying RFID to material handling in a manufacturing environment can assist the fast flow of components throughout the assembly line beyond what available systems can do.
APA, Harvard, Vancouver, ISO, and other styles
40

Louw, Lloyd A. B. "Automated face detection and recognition for a login system." Thesis, Link to the online version, 2007. http://hdl.handle.net/10019/438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Dunwoody, Keith. "Automated identification of cutting force coefficients and tool dynamics on CNC machines." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/23240.

Full text
Abstract:
The complexity and variation of parts are continuously increasing due to technologically oriented consumers. The objective of the present manufacturing industry is to increase quality while decreasing machining costs. This thesis presents a smart machining strategy that allows the automated prediction of chatter-free cutting conditions using sensors integrated into Computer Numerical Controlled (CNC) machine tools. Predicting vibration-free spindle speeds and depths of cut requires the material's cutting force coefficients and the frequency response function (FRF) of the machine at its tool tip. The cutting force coefficients are estimated from cutting force measurements collected through dynamometers in a laboratory environment. The thesis presents an alternative identification of the tangential cutting force coefficient from average spindle power signals, which are readily available on machine tools. When tangential, radial, and axial cutting force coefficients are needed, the forces must be collected by piezoelectric sensors embedded in mechanical structures. The structural dynamics of sensor housings distort the force measurements at high spindle speeds. A Kalman filter is designed to compensate for the structural modes of the sensor assembly when the spindle speed and its harmonics resonate with one of the modes of the measuring system. The FRF of the system is measured by a computer-controlled impact modal test unit integrated into the CNC. The impact head is instrumented with a piezo force sensor, and the vibrations are measured with a capacitive displacement sensor. The spring-loaded impact head is released by a DC solenoid controlled by the computer. The impact force and resulting tool vibrations are recorded in real time, and the FRF is estimated automatically. The measured FRF and the cutting force coefficient estimated from the spindle power are then used to predict the chatter-free depths of cut and spindle speeds.
The machine-integrated smart machining system allows the operator to automatically select chatter-free cutting conditions, leading to improved quality and productivity.
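The Kalman filtering mentioned in this abstract can be illustrated in one dimension. This is only a sketch: the thesis's filter models the sensor housing's structural dynamics, whereas here the state is just a slowly varying force and all noise parameters (`q`, `r`) and measurement values are invented.

```python
def kalman_filter(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Filter a sequence of noisy scalar measurements.

    q: process-noise variance, r: measurement-noise variance,
    x0/p0: initial state estimate and its variance (all assumed values).
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: state assumed roughly constant
        k = p / (p + r)          # Kalman gain weighs prediction vs measurement
        x = x + k * (z - x)      # update the estimate with the innovation
        p = (1.0 - k) * p        # shrink the estimate's variance
        estimates.append(x)
    return estimates

# Hypothetical noisy force readings around a true value of about 10 N.
noisy = [10.3, 9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.4]
est = kalman_filter(noisy)
print(round(est[-1], 2))  # the estimate settles toward the true level
```

The same predict/update structure generalizes to the vector case used for mode compensation, with the state transition matrix encoding the sensor assembly's dynamics.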
APA, Harvard, Vancouver, ISO, and other styles
42

Dann, Aaron. "Identification and simulation of an automated guided vechile for minimal sensor applications." Thesis, University of Canterbury. Mechanical Engineering, 1996. http://hdl.handle.net/10092/6410.

Full text
Abstract:
The problem of controlling an Automated Guided Vehicle (AGV) with the minimum number of sensors is considered. Sensors add cost and complexity to an AGV, both electrically and in terms of the increased computational requirements of the controller. Computer simulations are proposed to model the behaviour of the AGV. Models of the dynamics of an AGV are proposed and simulated at varying levels of complexity using commercially available numerical software. In order to model the AGV accurately, aspects of the control system and the physical system had to be analysed. Laboratory experiments were designed and performed, and the results were analysed to determine the dynamic properties of sub-systems of the AGV. To provide a datum for comparison with the simulations, measurements were made of the performance of an AGV under a variety of control conditions corresponding to the computer models. Comparisons of the simulations and the AGV performance are discussed and suggestions are made for improving the AGV and its control system. The models presented in this thesis demonstrate a good correlation for low-performance AGVs in non-rigorous conditions, or well-loaded AGVs on good traction surfaces; however, they do not accurately represent the AGV at the limits of traction. Two mechanical improvements to the University of Canterbury (UOC) Mk-II AGV are suggested: the addition of softer-compound tyres for use on hard, painted surfaces, and the design of a gear train with lower backlash.
APA, Harvard, Vancouver, ISO, and other styles
43

Bosworth, Charles F. "The identification of primary open angle glaucoma using motion automated perimetry (MAP) /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC IP addresses, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9935476.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ng, Tony M. Eng Massachusetts Institute of Technology. "Automated identification of terminal area air traffic flows and weather related deviations." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/46010.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 47).
Air traffic in terminal airspace is very complex, making it difficult to identify air traffic flows. Finding air traffic flows and flow boundaries is very helpful in analyzing how air traffic reacts to weather. This thesis created the Terminal Traffic Flow Identifier algorithm to solve this problem. The algorithm was demonstrated to work in the Atlanta terminal area by quickly processing 20,000 sample trajectories and returning accurate flows with tight boundaries. This thesis also created techniques to extract weather features that occur inside the identified flows and demonstrated that training on these features gives good results. The algorithms and software created in this thesis may soon be incorporated into larger traffic management systems developed at MIT Lincoln Laboratory.
by Tony Ng.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
45

Soman, Sopal. "An Automated Methodology for Identification and Analysis of Erroneous Production Stop Data." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19126.

Full text
Abstract:
The primary aim of the project is to automate the process of identifying erroneous entries in stop data originating from a given production line. Machines or workstations in a production line may be stopped for various planned (scheduled maintenance, tool change, etc.) or unplanned (breakdowns, bottlenecks, etc.) reasons. It is essential to keep track of such stops for diagnosing inefficiencies such as reduced throughput and high cycle-time variance. With the increased focus on Industry 4.0, many manufacturing companies have started to digitalize their production processes. Among other benefits, this has enabled production data to be captured in real time and recorded for further analysis. However, such automation comes with its own problems. In the case of production stop data, it has been observed that in addition to planned and unplanned stops, the data collection system may sometimes record erroneous or false stops. There are various known reasons for such erroneous stop data, including not accounting for the lunch break, national holidays, weekends, communication loss with the data collection system, etc. Erroneous stops can also occur for unknown reasons, in which case they can only be identified through a statistical analysis of stop data distributions across various machines and workstations. This project presents an automated methodology that uses a combination of data filtering, aggregation, and clustering for identifying erroneous stop data with known reasons, referred to as known faults. Once the clusters of known faults are identified, they are analyzed using association rule mining to reveal machines or workstations that are simultaneously affected. The ultimate goal of automatically identifying erroneous stop data entries is to obtain better empirical distributions for stop data to be used with simulation models. This aspect, along with the identification of unknown faults, is open for future work.
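The known-fault filtering step this abstract describes can be sketched as a rule-based check: stop records that overlap scheduled non-production time (a lunch break, a weekend) are flagged as erroneous rather than as genuine machine stops. The schedule, record format, and labels below are invented for illustration; the thesis's methodology adds aggregation, clustering, and association rule mining on top of such filters.

```python
from datetime import datetime, time

LUNCH = (time(12, 0), time(12, 45))  # assumed daily lunch break
WEEKEND = {5, 6}                     # Saturday, Sunday (datetime.weekday codes)

def classify_stop(start, end):
    """Label a stop interval as a known fault or a candidate real stop."""
    if start.weekday() in WEEKEND:
        return "known_fault:weekend"
    # Intervals are on the same day here; overlap with the lunch window
    # means the stop is at least partly explained by the break.
    if start.time() <= LUNCH[1] and end.time() >= LUNCH[0]:
        return "known_fault:lunch_break"
    return "candidate_real_stop"

# Hypothetical stop records: (start, end).
stops = [
    (datetime(2020, 3, 2, 12, 5), datetime(2020, 3, 2, 12, 40)),   # Monday, lunch
    (datetime(2020, 3, 7, 9, 0),  datetime(2020, 3, 7, 9, 30)),    # Saturday
    (datetime(2020, 3, 3, 14, 10), datetime(2020, 3, 3, 14, 25)),  # plausible stop
]
labels = [classify_stop(s, e) for s, e in stops]
print(labels)
```

Only stops labeled as candidates would move on to the statistical analysis for unknown faults.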
APA, Harvard, Vancouver, ISO, and other styles
46

Yang, Xiaoyun. "Automated identification of optic nerve head and blood vessels in retinal images." Thesis, Ulster University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.444521.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Clarke, Colin. "Development of an automated identification system for nano-crystal encoded microspheres in flow cytometry." Thesis, Cranfield University, 2008. http://dspace.lib.cranfield.ac.uk/handle/1826/4036.

Full text
Abstract:
Quantum dot encoded microspheres (QDEMs) offer much potential for bead-based identification of a variety of biomolecules via flow cytometry (FCM). To date, QDEM subpopulation classification from FCM has required significant instrument modification or multiparameter gating. It is unclear whether or not current data analysis approaches can handle the increased multiplexed capacity offered by these novel encoding schemes. In this thesis the drawbacks of currently available data analysis techniques are demonstrated and novel classification methods proposed to overcome these limitations. A commercially available 20-code QDEM library with fluorescent emissions at 4 distinct wavelengths and 4 different intensity levels was analysed using flow cytometry. Multiparameter gating (MPG), a readily available classification method for subpopulations in FCM, was evaluated. A support vector machine (SVM) and two types of artificial neural networks (ANNs), a multilayer perceptron (MLP) and a probabilistic radial basis function (PRBF) network, were also considered. For the supervised models, rigorous parameter selection using cross validation (CV) was used to construct the optimum models. Independent test set validation was also carried out. As a further test, external validation of the classifiers was performed using multiplexed QDEM solutions. The performance of MPG was poor (average misclassification (MC) rate = 9.7%): it was a time-consuming process requiring fine adjustment of the gates, the classifications made on the dataset were poor with multiple classifications of single events, and as the multiplex capacity increases the performance is likely to decrease. The SVM had the best performance in independent test validation, with 96.33% accuracy on the independent testing (MLP = 96.12%, PRBF = 94.38%). Furthermore, the performance of the SVM was superior to both MPG and both ANNs on the external validation set, with average MC rates of 6.1% for the MLP and 7.5% for the PRBF, whereas the SVM MC rate was 2.9%.
Assuming that the external test solutions were homogeneous, the variance between classified results should be minimal; hence, the variance of correct classifications (CCs) was used as an additional indicator of classifier performance. The SVM demonstrated the lowest variance for each of the external validation solutions (average σ² = 31,479), some 50% lower than that of MPG. To conclude the development of the classifier, a user-friendly software system has been developed to allow construction and evaluation of multiclass SVMs for use by FCM practitioners in the laboratory. SVMs are a promising classifier for QDEMs that can be rapidly trained and used to make classifications in real time using standard FCM instrumentation. It is hoped that this work will advance SAT for bioanalytical applications.
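The two performance measures quoted in this abstract, the misclassification (MC) rate and the variance of correct classifications across nominally homogeneous samples, are simple to compute. The toy labels and counts below are invented; no real flow-cytometry data is involved.

```python
from statistics import pvariance

def mc_rate(y_true, y_pred):
    """Fraction of events assigned the wrong bead code."""
    wrong = sum(t != p for t, p in zip(y_true, y_pred))
    return wrong / len(y_true)

# Hypothetical true vs predicted bead codes for eight events.
y_true = ["A", "A", "B", "B", "C", "C", "C", "A"]
y_pred = ["A", "B", "B", "B", "C", "C", "A", "A"]
print(mc_rate(y_true, y_pred))  # 2 of 8 events misclassified

# Variance of correct-classification counts across replicate homogeneous
# samples: a lower variance indicates more consistent classification, which
# is how the thesis compares the SVM against MPG and the neural networks.
correct_counts = [96, 94, 97, 95]
print(pvariance(correct_counts))
```

A classifier with a similar MC rate but a much higher variance would be giving inconsistent answers on supposedly identical solutions, which is exactly the failure mode the variance criterion is meant to expose.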
APA, Harvard, Vancouver, ISO, and other styles
48

Dubrowski, Piotr. "An automated multicolour fluorescence in situ hybridization workstation for the identification of clonally related cells." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/733.

Full text
Abstract:
The methods presented in this study are aimed at identifying subpopulations (clones) of genetically similar cells within tissue samples by measuring loci-specific fluorescence in situ hybridization (FISH) spot signals for each nucleus and analyzing cell spatial distributions by way of Voronoi tessellation and Delaunay triangulation to robustly define cell neighbourhoods. The motivation for the system is to examine lung cancer patients for subpopulations of Non-Small Cell Lung Cancer (NSCLC) cells with biologically meaningful gene copy-number profiles: patterns of genetic alterations statistically associated with resistance to cis-platinum/vinorelbine doublet chemotherapy treatment. Current technologies for gene copy-number profiling rely on large amounts of cellular material, which are not always available, and their sensitivity is limited to only the most dominant clone in often heterogeneous samples. Thus, through the use of FISH, the detection of gene copy-numbers is possible in unprocessed tissues, allowing identification of specific tumour clones with biologically relevant patterns of genetic aberrations. The tissue-wide characterization of multiplexed loci-specific FISH signals described herein is achieved through a fully automated multicolour fluorescence imaging microscope and object segmentation algorithms that identify cell nuclei and the FISH spots within them. Related tumour clones are identified through analysis of robustly defined cell neighbourhoods and cell-to-cell connections for regions of cells with homogeneous and highly interconnected FISH spot signal characteristics. This study presents experiments which demonstrate the system's ability to accurately quantify FISH spot signals in various tumour tissues, in up to 5 colours simultaneously or more through multiple rounds of FISH staining.
Furthermore, the system's FISH-based cell classification performance is evaluated at a sensitivity of 84% and a specificity of 81%, and the clonal identification algorithm's results are determined to be comparable to clone delineation by a human observer. Additionally, guidelines and procedures to perform anticipated routine analysis experiments are established.
APA, Harvard, Vancouver, ISO, and other styles
49

Roula, M. A. "Machine vision and texture analysis for the automated identification of tissue pattern in prostatic neoplasia." Thesis, Queen's University Belfast, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403148.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Henderson, Caleb Aleksandr. "Identification of Disease Stress in Turfgrass Canopies Using Thermal Imagery and Automated Aerial Image Analysis." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103621.

Full text
Abstract:
Remote sensing techniques are important for detecting disease within the turfgrass canopy. Herein, we examine two such techniques to assess their viability in detecting and isolating turfgrass diseases. First, thermal imagery is used to detect differences in canopy temperature associated with the onset of brown patch infection in tall fescue. Sixty-four newly seeded stands of tall fescue were arranged in a randomized block design, with two runs of eight blocks each containing four inoculum concentrations, within a greenhouse. Daily measurements of the canopy and ambient temperature were taken with a thermal camera. After five consecutive days, differences were detected in canopy-minus-ambient temperature in both runs (p=0.0015), which continued for the remainder of the experiment. Moreover, analysis of true-colour imagery during this time yielded no significant differences between groups. A field study comparing canopy temperature of adjacent symptomatic and asymptomatic tall fescue and creeping bentgrass canopies showed differences as well (p<0.0492). The second project attempted to isolate spring dead spot from aerial imagery of bermudagrass golf course fairways using a Python script. Aerial images from unmanned aerial vehicle flights were collected from four fairways at the Nicklaus Course of Bay Creek Resort in Cape Charles, VA. Accuracy of the code was measured by creating buffer zones around code-generated points and measuring how many hand-mapped disease centers were eclipsed. Accuracies measured as high as 97% while reducing coverage of the fairway by over 30% compared to broadcast applications. Point density maps of the hand and code points also appeared similar. These data provide evidence for new opportunities in remote turfgrass disease detection.
Master of Science in Life Sciences
Turfgrasses are ubiquitous, from home lawns to sports fields, where they are used for their durability and aesthetics. Disease within the turfgrass canopy can ruin these qualities, reducing the turf's overall quality, which makes detection and management of disease within the canopy an important part of maintaining turfgrass. Here we look at the effectiveness of imaging techniques in detecting and isolating disease within cool-season and warm-season turfgrasses. We test the capacity of thermal imagery to detect the infection of tall fescue (Festuca arundinacea) with Rhizoctonia solani, the causal agent of brown patch. In greenhouse experiments, differences were detected in normalized canopy temperature between differing inoculation levels at five days post inoculation, and in field conditions we were able to observe differences in canopy temperature between adjacent symptomatic and non-symptomatic stands. We also developed a Python script to automatically identify and record the location of spring dead spot damage within mosaicked images of bermudagrass golf fairways captured via unmanned aerial vehicle. The developed script primarily used a Hough transform to mark the circular patches within the fairway and recorded the GPS coordinates of each disease center. When compared to disease incidence maps created manually, the script was able to achieve accuracies as high as 97% while reducing coverage of the fairway by over 30% compared to broadcast applications. Point density maps created from points in the code appeared to match those created manually. Both findings have the potential to be used as tools to help turfgrass managers.
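The circle Hough transform the script relies on can be sketched in pure Python: each edge pixel votes for every possible circle centre at a fixed radius, and the centre with the most votes wins. This is a minimal illustration on a tiny synthetic image; the radius, grid size, and vote discretization are all assumptions, and the thesis's script works on real mosaicked UAV imagery at multiple radii.

```python
import math

def hough_circle(edges, radius):
    """Vote for circle centres of a fixed radius given 0/1 edge pixels.

    Returns the (row, col) centre with the most votes and its vote count.
    """
    h, w = len(edges), len(edges[0])
    acc = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not edges[y][x]:
                continue
            # Each edge pixel votes for all centres at distance `radius`.
            for t in range(0, 360, 10):
                cy = int(round(y - radius * math.sin(math.radians(t))))
                cx = int(round(x - radius * math.cos(math.radians(t))))
                if 0 <= cy < h and 0 <= cx < w:
                    acc[cy][cx] += 1
    votes, centre = max((acc[r][c], (r, c))
                        for r in range(h) for c in range(w))
    return centre, votes

# Synthetic 15x15 edge image: a circle of radius 4 centred at (7, 7),
# standing in for the dark ring of a spring dead spot patch.
h = w = 15
edges = [[0] * w for _ in range(h)]
for t in range(0, 360, 5):
    y = int(round(7 + 4 * math.sin(math.radians(t))))
    x = int(round(7 + 4 * math.cos(math.radians(t))))
    edges[y][x] = 1

centre, votes = hough_circle(edges, radius=4)
print(centre)  # the peak lands at or near the true centre (7, 7)
```

Detected centres would then be converted to GPS coordinates via the mosaic's georeferencing, as the abstract describes.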
APA, Harvard, Vancouver, ISO, and other styles