
Dissertations / Theses on the topic 'Image scoring'


Consult the top 21 dissertations / theses for your research on the topic 'Image scoring.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Seal, Amy. "Scoring sentences developmentally: an analog of developmental sentence scoring." Diss., 2001. http://contentdm.lib.byu.edu/ETD/image/etd12.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Berger, Ulrich, and Ansgar Grüne. "Evolutionary Stability of Indirect Reciprocity by Image Scoring." WU Vienna University of Economics and Business, 2014. http://epub.wu.ac.at/4087/1/wp168.pdf.

Full text
Abstract:
Indirect reciprocity describes a class of reputation-based mechanisms which may explain the prevalence of cooperation in groups where partners meet only once. The first model for which this has analytically been shown was the binary image scoring mechanism, where one's reputation is only based on one's last action. But this mechanism is known to fail if errors in implementation occur. It has thus been claimed that for indirect reciprocity to stabilize cooperation, reputation assessments must be of higher order, i.e. contingent not only on past actions, but also on the reputations of the targets of these actions. We show here that this need not be the case. A simple image scoring mechanism where more than just one past action is observed provides ample possibilities for stable cooperation to emerge even under substantial rates of implementation errors. (Authors' abstract.) Series: Department of Economics Working Paper Series.
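Purely as an illustration of the mechanism this abstract discusses, the sketch below simulates a donation game with binary (first-order) image scoring and implementation errors; it is not the authors' analytical model, and the population size, payoff values and error rates are arbitrary placeholders.

```python
import random

def simulate_image_scoring(n_agents=100, rounds=20000, benefit=5.0, cost=1.0,
                           error_rate=0.05, seed=0):
    """Toy donation game with binary image scoring (first-order reputation).

    In each round a random donor-recipient pair is drawn; the donor intends to
    cooperate only if the recipient's score is good, but an intended cooperation
    fails with probability `error_rate` (implementation error). The donor's own
    score then reflects only its last action, as in binary image scoring.
    """
    rng = random.Random(seed)
    score = [1] * n_agents          # 1 = good, 0 = bad (based on the last action only)
    payoff = [0.0] * n_agents

    for _ in range(rounds):
        donor, recipient = rng.sample(range(n_agents), 2)
        intends_to_give = score[recipient] == 1
        gives = intends_to_give and rng.random() >= error_rate
        if gives:
            payoff[donor] -= cost
            payoff[recipient] += benefit
        score[donor] = 1 if gives else 0   # first-order update

    return sum(score) / n_agents, sum(payoff) / n_agents

if __name__ == "__main__":
    for err in (0.0, 0.05, 0.2):
        good_share, mean_payoff = simulate_image_scoring(error_rate=err)
        print(f"error={err:.2f}  share with good score={good_share:.2f}  mean payoff={mean_payoff:.2f}")
```

Running the loop over several error rates makes the abstract's point tangible: as the implementation-error rate grows, the share of agents holding a good score (and hence the scope for cooperation) erodes.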
APA, Harvard, Vancouver, ISO, and other styles
3

Judson, Carrie Ann. "Accuracy of Automated Developmental Sentence Scoring Software." Diss., 2006. http://contentdm.lib.byu.edu/ETD/image/etd1448.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Conser, Erik Timothy. "Improved Scoring Models for Semantic Image Retrieval Using Scene Graphs." PDXScholar, 2017. https://pdxscholar.library.pdx.edu/open_access_etds/3879.

Full text
Abstract:
Image retrieval via a structured query is explored in Johnson, et al. [7]. The query is structured as a scene graph and a graphical model is generated from the scene graph's object, attribute, and relationship structure. Inference is performed on the graphical model with candidate images and the energy results are used to rank the best matches. In [7], scene graph objects that are not in the set of recognized objects are not represented in the graphical model. This work proposes and tests two approaches for modeling the unrecognized objects in order to leverage the attribute and relationship models to improve image retrieval performance.
APA, Harvard, Vancouver, ISO, and other styles
5

Millett, Ronald P. "Automatic holistic scoring of ESL essays using linguistic maturity attributes." Diss., 2006. http://contentdm.lib.byu.edu/ETD/image/etd1507.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Seibert, Brent Benjamin. "Effects of Sub-Part Scoring in Automatic Target Recognition." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1006203207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Contreras, Anthony D. "Historical GeoCollaboration: the implementation of a scoring system to account for uncertainty in geographic data created in a collaborative environment." Diss., 2010. http://contentdm.lib.byu.edu/ETD/image/etd3555.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sarris, Ippokratis. "Creation of a new fetal biometry image quality scoring tool to improve the accuracy of fetal biometric measurements." Thesis, University of Oxford, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.595667.

Full text
Abstract:
The hypothesis of this work is that through establishing the background variation of ultrasonographic fetal biometry measurements and elucidating the parameters that influence these measurements, a new Fetal Ultrasound Biometry Quality (FUB-Q) image-scoring tool can be created which will be reproducible and able to quantify the accuracy of fetal measurements. Six studies are included, each answering a specific research question. The aim of the first study was to ascertain whether pre-existing image quality scoring methods reflect measurement accuracy and reproducibility. It demonstrated that during the course of an exercise where there was demonstrable improvement in the consistency of measurements performed by a group of sonographers, this was not mirrored by the pre-existing image scoring system. The aim of the second study was to establish the intra- and inter-observer variability of fetal biometry measurements throughout pregnancy by expert sonographers. This study demonstrated that ultrasound variability of fetal biometry increases with advancing gestation when expressed in measurement values, but is constant as a percentage of the fetal dimensions or when reported as a z-score. Calliper placement was the major component of the overall variability. The values from this study served as the background variability, or "reference standard", for the FUB-Q tool. The third study had two aims. The first was to establish how 3D scanning performs compared to conventional, real-time, 2D scanning. The second aim was to assess whether off-line 3D volume manipulation can be used as a tool to substitute real-time 2D ultrasound for the subsequent studies. It demonstrated that measurements using 3D volume acquisitions exhibit good agreement with real-time 2D scanning, with no systematic error but with a higher random error. However, it also demonstrated that 3D scanning is slower to perform and, similar to real-time 2D, it is not always possible to acquire a 3D volume from a desired orientation. Furthermore, not all 3D volume acquisitions were amenable to reconstruction. However, this study showed that saved 3D volumes can be used as a means to store large volumes of data for later detailed analysis. The aim of the fourth study was to create the FUB-Q scoring tool. This was done by establishing the difference in measurement resulting from optimal and different forms of suboptimal images in a systematic fashion. For any given image, and its derived measurement, the observer inserts into the model the various image scoring point parameters. The model then gives a prediction about the confidence interval within which the optimal, "gold standard", measurement should lie. The aim of the fifth study was to validate on an independent test set the predictive ability of the newly developed FUB-Q scoring tool. It demonstrated that the FUB-Q tool can correctly predict the confidence interval within which measurements recorded from correctly acquired images should be, in relation to measurements acquired from incorrectly acquired ones. The aim of the sixth, and final, study was to evaluate the reproducibility of obtaining the relevant scores for the FUB-Q tool. It demonstrated that the FUB-Q tool has good intra- and inter-observer reproducibility and is a reliable system for assessing the quality of fetal biometry based on ultrasound images. In conclusion, the FUB-Q tool could be a useful system for audit of clinical practice and quality control, as well as for training purposes.
APA, Harvard, Vancouver, ISO, and other styles
9

Ghaedi, Leila. "AN AUTOMATED DENTAL CARIES DETECTION AND SCORING SYSTEM FOR OPTIC IMAGES OF TOOTH OCCLUSAL SURFACE." VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3548.

Full text
Abstract:
Dental caries are one of the most prevalent chronic diseases. Worldwide, 60 to 90 percent of school children and nearly 100 percent of adults have experienced dental caries. The management of dental caries demands detection of carious lesions at early stages. Research into diagnostic tools for caries has been at its peak over the last decade. This research aims to design an automated system to detect and score dental caries according to the International Caries Detection and Assessment System (ICDAS) guidelines using optical images of the occlusal tooth surface. Numerous works have addressed the problem of caries detection by using new imaging technologies or advanced measurements. However, no such study has been done to detect and score caries using optical images of the tooth surface. The aim of this dissertation is to develop image processing and machine learning algorithms to address the problem of detecting and scoring caries from optical images of the tooth surface.
APA, Harvard, Vancouver, ISO, and other styles
10

Forbes, Jessica LeeAnn. "Development and verification of medical image analysis tools within the 3D slicer environment." Thesis, University of Iowa, 2016. https://ir.uiowa.edu/etd/3085.

Full text
Abstract:
Rapid development of domain specialized medical imaging tools is essential for deploying medical imaging technologies to advance clinical research and clinical practice. This work describes the development process, deployment method, and evaluation of modules constructed within the 3D Slicer environment. These tools address critical problems encountered in four different clinical domains: quality control review of large repositories of medical images, rule-based automated label map cleaning, quantification of calcification in the heart using low-dose radiation scanning, and waist circumference measurement from abdominal scans. Each of these modules enables and accelerates clinical research by incorporating medical imaging technologies that minimize manual human effort. They are distributed within the multi-platform 3D Slicer Extension Manager environment for use in the computational environment most convenient to the clinician scientist.
APA, Harvard, Vancouver, ISO, and other styles
11

Ali, Afiya. "Recognition of facial affect in individuals scoring high and low in psychopathic personality characteristics." The University of Waikato, 2007. http://adt.waikato.ac.nz/public/adt-uow20070129.190938/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Munnecom, Lorenna, and Miguel Chaves de Lemos Pacheco. "Exploration of an Automated Motivation Letter Scoring System to Emulate Human Judgement." Thesis, Högskolan Dalarna, Mikrodataanalys, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:du-34563.

Full text
Abstract:
As the popularity of the master's in data science at Dalarna University increases, so does the number of applicants. The aim of this thesis was to explore different approaches to provide an automated motivation letter scoring system which could emulate human judgement and automate the process of candidate selection. Several steps, such as image processing and text processing, were required to enable the authors to retrieve numerous features which could lead to the identification of the factors graded by the program managers. Grammar-based features and advanced textual features were extracted from the motivation letters, followed by the application of topic modelling methods to extract the probability of each topic occurring within a motivation letter. Furthermore, correlation analysis was applied to quantify the association between the features and the different factors graded by the program managers, followed by ordinal logistic regression and random forest to build models with the most impactful variables. Finally, the naïve Bayes algorithm, random forest and support vector machines were used, first for classification and then for prediction purposes. These results were not promising, as the factors were not accurately identified. Nevertheless, the authors suspect that the factors may be strongly related to the prominence of specific topics within a motivation letter, which could lead to further research.
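For readers unfamiliar with this kind of pipeline, here is a minimal scikit-learn sketch of one step named above, classifying letters from simple lexical features with a random forest; the example letters, grades and hyperparameters are invented placeholders, not the thesis data or its actual feature set.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical stand-ins for motivation letters and the grades given by reviewers.
letters = [
    "I enjoy statistics and programming and have built several models.",
    "My background is in economics and I want to move into analytics.",
    "I want to study data science because of my interest in machine learning.",
    "I have worked with databases and reporting for several years.",
]
grades = [2, 1, 2, 0]   # ordinal grades, treated here as plain class labels

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),          # simple lexical features
    RandomForestClassifier(n_estimators=200, random_state=0)
)

X_train, X_test, y_train, y_test = train_test_split(letters, grades,
                                                    test_size=0.5, random_state=0)
pipeline.fit(X_train, y_train)
print("predicted grades:", pipeline.predict(X_test))
```

A real study would of course use far more data, ordinal-aware models and carefully engineered features; the point here is only the shape of a text-to-grade pipeline.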
APA, Harvard, Vancouver, ISO, and other styles
13

Khorshed, Reema A. A. "A cell level automated approach for quantifying antibody staining in immunohistochemistry images : a structural approach for quantifying antibody staining in colonic cancer spheroid images by integrating image processing and machine learning towards the implementation of computer aided scoring of cancer markers." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/5763.

Full text
Abstract:
Immunohistological (IHC) stained images occupy a fundamental role in the pathologist's diagnosis and monitoring of cancer development. The manual process of monitoring such images is a subjective, time consuming process that typically relies on the visual ability and experience level of the pathologist. A novel and comprehensive system for the automated quantification of antibody inside stained cell nuclei in immunohistochemistry images is proposed and demonstrated in this research. The system is based on a cellular level approach, where each nucleus is individually analyzed to observe the effects of protein antibodies inside the nuclei. The system provides three main quantitative descriptions of stained nuclei. The first quantitative measurement automatically generates the total number of cell nuclei in an image. The second measure classifies the positive and negative stained nuclei based on the nuclei colour, morphological and textural features. Such features are extracted directly from each nucleus to provide discriminative characteristics of different stained nuclei. The output generated from the first and second quantitative measures are used collectively to calculate the percentage of positive nuclei (PS). The third measure proposes a novel automated method for determining the staining intensity level of positive nuclei or what is known as the intensity score (IS). The minor intensity features are observed and used to classify low, intermediate and high stained positive nuclei. Statistical methods were applied throughout the research to validate the system results against the ground truth pathology data. Experimental results demonstrate the effectiveness of the proposed approach and provide high accuracy when compared to the ground truth pathology data.
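As a small illustration of the second and third quantitative measures described above, the following sketch computes a positive-nuclei percentage (PS) and bins positive nuclei into low / intermediate / high intensity classes; the per-nucleus inputs and the bin cut-offs are assumptions, not the thesis's trained classifiers.

```python
import numpy as np

def proportion_score(nucleus_is_positive):
    """Percentage of positively stained nuclei (PS) among all detected nuclei."""
    flags = np.asarray(nucleus_is_positive, dtype=bool)
    return 100.0 * flags.mean()

def intensity_score(mean_stain_intensities, low_cut=0.33, high_cut=0.66):
    """Map each positive nucleus's mean stain intensity (0..1) to an
    illustrative low / intermediate / high bin; cut-offs are placeholders."""
    intensities = np.asarray(mean_stain_intensities, dtype=float)
    return np.where(intensities < low_cut, "low",
           np.where(intensities < high_cut, "intermediate", "high"))

if __name__ == "__main__":
    flags = [True, False, True, True, False]   # per-nucleus positive/negative calls
    stains = [0.2, 0.5, 0.9]                   # intensities of the positive nuclei
    print("PS =", proportion_score(flags), "%")
    print("IS bins:", intensity_score(stains))
```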
APA, Harvard, Vancouver, ISO, and other styles
14

Berger, Ulrich, and Ansgar Grüne. "On the stability of cooperation under indirect reciprocity with first-order information." Elsevier, 2016. http://epub.wu.ac.at/5067/1/2016_GEB.pdf.

Full text
Abstract:
Indirect reciprocity describes a class of reputation-based mechanisms which may explain the prevalence of cooperation in large groups where partners meet only once. The first model for which this has been demonstrated was the image scoring mechanism. But analytical work on the simplest possible case, the binary scoring model, has shown that even small errors in implementation destabilize any cooperative regime. It has thus been claimed that for indirect reciprocity to stabilize cooperation, assessments of reputation must be based on higher-order information. Is indirect reciprocity relying on first-order information doomed to fail? We use a simple analytical model of image scoring to show that this need not be the case. Indeed, in the general image scoring model the introduction of implementation errors has just the opposite effect to that in the binary scoring model: it may stabilize instead of destabilize cooperation.
APA, Harvard, Vancouver, ISO, and other styles
15

Rieger, James L. "Accuracies of Bomb-Scoring Systems Based on Digitized 2- and 3-D TV Images." International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613446.

Full text
Abstract:
International Telemetering Conference Proceedings / October 29 - November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada. Three-dimensional images produced by film or analog television have been used for bomb scoring by triangulation for many years. Use of solid-state imaging devices and digitization of analog camera outputs can improve the accuracy of such measurements, or make accuracy lower or (worst of all) of random accuracy if interpreted incorrectly. This paper examines some of the issues involved, and tabulates the maximum accuracies available for a given system.
APA, Harvard, Vancouver, ISO, and other styles
16

Oldham, Kevin M. "Table tennis event detection and classification." Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/19626.

Full text
Abstract:
It is well understood that multiple video cameras and computer vision (CV) technology can be used in sport for match officiating, statistics and player performance analysis. A review of the literature reveals a number of existing solutions, both commercial and theoretical, within this domain. However, these solutions are expensive and often complex in their installation. The hypothesis for this research states that by considering only changes in ball motion, automatic event classification is achievable with low-cost monocular video recording devices, without the need for 3-dimensional (3D) positional ball data and representation. The focus of this research is a rigorous empirical study of low cost single consumer-grade video camera solutions applied to table tennis, confirming that monocular CV based detected ball location data contains sufficient information to enable key match-play events to be recognised and measured. In total a library of 276 event-based video sequences, using a range of recording hardware, were produced for this research. The research has four key considerations: i) an investigation into an effective recording environment with minimum configuration and calibration, ii) the selection and optimisation of a CV algorithm to detect the ball from the resulting single source video data, iii) validation of the accuracy of the 2-dimensional (2D) CV data for motion change detection, and iv) the data requirements and processing techniques necessary to automatically detect changes in ball motion and match those to match-play events. Throughout the thesis, table tennis has been chosen as the example sport for observational and experimental analysis since it offers a number of specific CV challenges due to the relatively high ball speed (in excess of 100kph) and small ball size (40mm in diameter). Furthermore, the inherent rules of table tennis show potential for a monocular based event classification vision system. As the initial stage, a proposed optimum location and configuration of the single camera is defined. Next, the selection of a CV algorithm is critical in obtaining usable ball motion data. It is shown in this research that segmentation processes vary in their ball detection capabilities and location out-puts, which ultimately affects the ability of automated event detection and decision making solutions. Therefore, a comparison of CV algorithms is necessary to establish confidence in the accuracy of the derived location of the ball. As part of the research, a CV software environment has been developed to allow robust, repeatable and direct comparisons between different CV algorithms. An event based method of evaluating the success of a CV algorithm is proposed. Comparison of CV algorithms is made against the novel Efficacy Metric Set (EMS), producing a measurable Relative Efficacy Index (REI). Within the context of this low cost, single camera ball trajectory and event investigation, experimental results provided show that the Horn-Schunck Optical Flow algorithm, with a REI of 163.5 is the most successful method when compared to a discrete selection of CV detection and extraction techniques gathered from the literature review. Furthermore, evidence based data from the REI also suggests switching to the Canny edge detector (a REI of 186.4) for segmentation of the ball when in close proximity to the net. 
In addition to and in support of the data generated from the CV software environment, a novel method is presented for producing simultaneous data from 3D marker based recordings, reduced to 2D and compared directly to the CV output to establish comparative time-resolved data for the ball location. It is proposed here that a continuous scale factor, based on the known dimensions of the ball, is incorporated at every frame. Using this method, comparison results show a mean accuracy of 3.01mm when applied to a selection of nineteen video sequences and events. This tolerance is within 10% of the diameter of the ball and accountable by the limits of image resolution. Further experimental results demonstrate the ability to identify a number of match-play events from a monocular image sequence using a combination of the suggested optimum algorithm and ball motion analysis methods. The results show a promising application of 2D based CV processing to match-play event classification with an overall success rate of 95.9%. The majority of failures occur when the ball, during returns and services, is partially occluded by either the player or racket, due to the inherent problem of using a monocular recording device. Finally, the thesis proposes further research and extensions for developing and implementing monocular based CV processing of motion based event analysis and classification in a wider range of applications.
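A rough illustration of frame-to-frame motion measurement from monocular video is sketched below using OpenCV's dense Farneback optical flow as a stand-in for the Horn-Schunck method named above; the video file name and the event threshold are assumptions, and this is not the thesis's ball-detection or event-classification pipeline.

```python
import cv2

def motion_magnitudes(video_path="rally.mp4"):
    """Yield the mean optical-flow magnitude for each consecutive frame pair.

    Sudden jumps in this signal are crude candidates for motion-change events
    (bounce, racket impact); real ball tracking would localise the ball first.
    """
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        yield float(magnitude.mean())
        prev_gray = gray
    cap.release()

if __name__ == "__main__":
    for i, m in enumerate(motion_magnitudes()):
        if m > 2.0:   # arbitrary threshold for "something moved a lot"
            print(f"possible motion-change event near frame {i + 1}: mean flow {m:.2f}")
```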
APA, Harvard, Vancouver, ISO, and other styles
17

Casero, Cañas Ramón. "Left ventricle functional analysis in 2D+t contrast echocardiography within an atlas-based deformable template model framework." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:b17b3670-551d-4549-8f10-d977295c1857.

Full text
Abstract:
This biomedical engineering thesis explores the opportunities and challenges of 2D+t contrast echocardiography for left ventricle functional analysis, both clinically and within a computer vision atlas-based deformable template model framework. A database was created for the experiments in this thesis, with 21 studies of contrast Dobutamine Stress Echo, in all 4 principal planes. The database includes clinical variables, human expert hand-traced myocardial contours and visual scoring. First the problem is studied from a clinical perspective. Quantification of endocardial global and local function using standard measures shows expected values and agreement with human expert visual scoring, but the results are less reliable for myocardial thickening. Next, the problem of segmenting the endocardium with a computer is posed in a standard landmark and atlas-based deformable template model framework. The underlying assumption is that these models can emulate human experts in terms of integrating previous knowledge about the anatomy and physiology with three sources of information from the image: texture, geometry and kinetics. Probabilistic atlases of contrast echocardiography are computed, while noting from histograms at selected anatomical locations that modelling texture with just mean intensity values may be too naive. Intensity analysis together with the clinical results above suggest that lack of external boundary definition may preclude this imaging technique for appropriate measuring of myocardial thickening, while endocardial boundary definition is appropriate for evaluation of wall motion. Geometry is presented in a Principal Component Analysis (PCA) context, highlighting issues about Gaussianity, the correlation and covariance matrices with respect to physiology, and analysing different measures of dimensionality. A popular extension of deformable models ---Active Appearance Models (AAMs)--- is then studied in depth. Contrary to common wisdom, it is contended that using a PCA texture space instead of a fixed atlas is detrimental to segmentation, and that PCA models are not convenient for texture modelling. To integrate kinetics, a novel spatio-temporal model of cardiac contours is proposed. The new explicit model does not require frame interpolation, and it is compared to previous implicit models in terms of approximation error when the shape vector changes from frame to frame or remains constant throughout the cardiac cycle. Finally, the 2D+t atlas-based deformable model segmentation problem is formulated and solved with a gradient descent approach. Experiments using the similarity transformation suggest that segmentation of the whole cardiac volume outperforms segmentation of individual frames. A relatively new approach ---the inverse compositional algorithm--- is shown to decrease running times of the classic Lucas-Kanade algorithm by a factor of 20 to 25, to values that are within real-time processing reach.
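For the PCA-based shape analysis mentioned above, the following sketch shows a standard principal component decomposition of aligned contour landmark vectors via SVD; the data are random stand-ins, not the thesis's echocardiographic contour database.

```python
import numpy as np

# Hypothetical stack of hand-traced contours: each row is an aligned shape
# vector (x1, y1, ..., xK, yK). Random values stand in for real landmarks.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(21, 2 * 30))        # 21 studies, 30 landmarks each

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# PCA via SVD of the centred shape matrix: rows of vt are the modes of variation.
_, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

# One common dimensionality measure: modes needed for 95% of the variance.
modes_95 = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
print("modes for 95% variance:", modes_95)

# Reconstruct a shape from its first few modes (a low-dimensional shape model).
b = centered[0] @ vt[:modes_95].T             # shape parameters of the first study
reconstruction = mean_shape + b @ vt[:modes_95]
print("reconstruction error:", np.linalg.norm(reconstruction - shapes[0]))
```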
APA, Harvard, Vancouver, ISO, and other styles
18

Elvas, Luís Manuel Nobre de Brito. "Calcium identification and scoring based on echocardiography imaging." Master's thesis, 2021. http://hdl.handle.net/10071/22901.

Full text
Abstract:
Currently, an echocardiography expert is needed to identify calcium in the aortic valve, and a cardiac CT scan image is needed for calcium quantification. When performing a CT scan, the patient is subject to radiation, and therefore the number of CT scans that can be performed should be limited, restricting the patient's monitoring. Computer Vision (CV) has opened new opportunities for improved efficiency when extracting knowledge from an image. Applying CV techniques to echocardiography imaging may reduce the medical workload for identifying and quantifying calcium, helping doctors to maintain better tracking of their patients. In our approach, we developed a simple technique to identify and extract the calcium pixel count from echocardiography imaging using CV. Based on anonymized real patient echocardiographic images, this approach enables semi-automatic calcium identification. As the brightness of echocardiography images (with the highest intensity corresponding to calcium) varies depending on the acquisition settings, we performed adaptive image binarization. Given that blood maintains the same intensity on echocardiographic images (being always the darker region), we used blood structures in the image to create an adaptive threshold for binarization. After binarization, the region of interest (ROI) containing calcium was interactively selected by an echocardiography expert and extracted, allowing us to compute a calcium pixel count corresponding to the spatial amount of calcium. The results obtained from our experiments are encouraging. With our technique, from echocardiographic images collected for the same patient with different acquisition settings and different brightness, we were able to obtain a calcium pixel count whose pixel values show an absolute margin of error of 3 (on a scale from 0 to 255) and which correlated well with human expert assessment of the calcium area for the same images.
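A minimal OpenCV sketch of the idea just described: derive a binarization threshold from a dark blood region and count bright, calcium-candidate pixels inside an expert-chosen ROI. The file name, ROI coordinates and intensity offset are assumptions, not the calibrated values used in the thesis.

```python
import cv2
import numpy as np

def calcium_pixel_count(image_path="echo_frame.png",
                        blood_roi=(200, 200, 40, 40),     # x, y, w, h of a dark blood pool
                        calcium_roi=(100, 80, 120, 120),  # x, y, w, h chosen by the expert
                        offset=60):
    """Count bright pixels in an expert-selected ROI using a threshold adapted
    to the image's own blood intensity (always the darkest structure)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)

    bx, by, bw, bh = blood_roi
    blood_level = float(np.mean(img[by:by + bh, bx:bx + bw]))
    threshold = min(255.0, blood_level + offset)          # adaptive, per-acquisition threshold

    _, binary = cv2.threshold(img, threshold, 255, cv2.THRESH_BINARY)

    cx, cy, cw, ch = calcium_roi
    return int(np.count_nonzero(binary[cy:cy + ch, cx:cx + cw]))

if __name__ == "__main__":
    print("calcium pixel count:", calcium_pixel_count())
```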
APA, Harvard, Vancouver, ISO, and other styles
19

Chen, Ching-Yi (陳靜怡). "Application of Image Processing Techniques and Neural Networks to Automatic Micronuclei Scoring." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/53803948643282483827.

Full text
Abstract:
Master's thesis, Yuan Ze University, Graduate Institute of Information Management, ROC academic year 93. In recent decades, demand for applying image processing techniques to assist biomedical diagnosis has increased rapidly. Traditional procedures, such as micronuclei (MN) scoring, rely on human eyes and a microscope, which not only consumes much time and human resources but also yields unstable recognition results. In addition, the results are highly dependent on the expertise of the researcher who conducts the experiment. As a result, an objective tool that digitizes the process and performs automatic recognition will eliminate the researcher's subjective judgment. Besides, the digitized slides and results can be conveniently retrieved and verified at any time in the future. In this study, we propose to integrate image processing techniques and neural networks for the automatic counting of cells. First, we digitize images on the slide automatically and store them on the computer. Then we convert the original images into gray scale and use the Otsu algorithm to find the optimal threshold for image segmentation. In some cases, it is also necessary to apply morphological techniques to eliminate noise and preserve the important features. We then derive shape features from the object contours and the internal structure obtained by referencing the original image. Finally, we take these features as input to a Bayesian network for the recognition of micronuclei. To sum up, the steps are as follows: 1) slide digitization, 2) color space transformation, 3) image segmentation, 4) initial counting of the cells, 5) feature retrieval, 6) optimal feature selection, and 7) automatic micronuclei recognition. In addition, an image restoration program is also developed, which makes it more convenient for researchers to find the image files afterwards for further confirmation and facilitates the browsing of nearby images.
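The segmentation and counting steps listed above (gray-scale conversion, Otsu thresholding, morphological clean-up, initial cell counting) can be sketched with OpenCV as follows; the file name, kernel size and area threshold are assumptions, and the feature extraction and Bayesian network classification steps are omitted.

```python
import cv2

def count_cells(image_path="slide_tile.png", min_area=50):
    """Gray-scale conversion, Otsu thresholding, morphological noise removal,
    and an initial cell count from external contours."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # remove small noise specks

    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cells = [c for c in contours if cv2.contourArea(c) >= min_area]
    return len(cells)

if __name__ == "__main__":
    print("initial cell count:", count_cells())
```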
APA, Harvard, Vancouver, ISO, and other styles
20

Lin, Yuan-Chi (林源琦). "An Automatic Scoring System for Air Pistol Shooting Competition Based on Image Processing Techniques." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/71037248758690379870.

Full text
Abstract:
Master's thesis, Chung Yuan Christian University, Graduate Institute of Electronic Engineering, ROC academic year 104. Traditionally, the shooting scores for air pistol competitions are processed manually. This comes with a number of issues, such as lengthy processing time, great demand on manpower and even security concerns. Automatic score recording technology has been developed, but the resulting products must be imported and are usually expensive. For the sake of popularizing shooting sports, developing an automatic score recording system is a meaningful research goal. This study investigates how to apply image processing techniques to automatically determine the score and recognize the serial number on a target sheet. The score is determined in two major steps, namely extracting a bullet hole and calculating the distance between the bullet hole and the center of the target sheet. In bullet hole extraction, this work uses RGB color information to detect a specific color and identify the bullet hole. The center of the bullet hole is then found using the Hough circle detection method. The distance between the bullet hole center and the center of the target sheet is used to calculate the score. The serial number recognition task is divided into three parts, namely positioning, segmentation and identification. Positioning means finding the exact location of the serial number at the four possible places according to the number of black pixels counted. Contour detection is used to segment the number characters, and the characters are recognized based on template matching. A total of 50 target sheet samples were tested in this experiment, giving 50 serial numbers and 300 characters under test. A 99.3% success rate is achieved on character recognition, and the success rate for serial number identification is 96%. In the case of overlapping holes, the Hough circle detection method can still detect a bullet hole. However, if the hole edges are broken and uneven, detection becomes difficult when the holes overlap by more than 75%. In this case, the two bullet holes are too close and are treated as a single hole.
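As an illustration of the scoring step described above, the sketch below locates a bullet hole with the Hough circle transform and converts its distance from the target centre into a ring score; the image name, circle parameters and ring width are placeholders rather than the competition's actual geometry.

```python
import cv2
import numpy as np

def score_shot(image_path="target.png", target_center=(400, 400), ring_step_px=40):
    """Find a bullet hole with the Hough circle transform and convert its
    distance from the target centre into a ring score (10 at the centre,
    decreasing by one per ring). All numeric values are illustrative."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)

    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                               param1=100, param2=30, minRadius=5, maxRadius=20)
    if circles is None:
        return None

    x, y, _ = circles[0][0]                       # strongest candidate circle
    dist = float(np.hypot(x - target_center[0], y - target_center[1]))
    score = max(0, 10 - int(dist // ring_step_px))
    return score, (float(x), float(y)), dist

if __name__ == "__main__":
    print(score_shot())
```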
APA, Harvard, Vancouver, ISO, and other styles
21

Wang, Hsien-Te (王獻德). "The Effect of Chromatic-T-Temple-Scoring on identifying the number of persons in an image." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/01802281257310994034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
