
Dissertations / Theses on the topic 'Digital image acquisition'



Consult the top 33 dissertations / theses for your research on the topic 'Digital image acquisition.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Joo, Youngjoong. "High speed image acquisition system for focal-plane-arrays." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/14455.

2

Schöberl, Michael [Verfasser]. "Modeling of Image Acquisition for Improving Digital Camera Systems / Michael Schöberl." München : Verlag Dr. Hut, 2013. http://d-nb.info/1042308667/34.

3

Thongkamwitoon, Thirapiroon. "Digital forensic techniques for the reverse engineering of image acquisition chains." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/33231.

Abstract:
In recent years a number of new methods have been developed to detect image forgery. Most forensic techniques use footprints left on images to infer the history of the images. Images, however, may have gone through a series of processing and modification operations during their lifetime, which makes tampering difficult to detect because the footprints may be distorted or removed over a complex chain of operations. In this research we propose digital forensic techniques that allow us to reverse engineer and determine the history of images that have gone through chains of image acquisition and reproduction. This thesis presents two different approaches to the problem. In the first part we propose a novel theoretical framework for the reverse engineering of signal acquisition chains. Based on a simplified chain model, we describe how signals evolve at different stages of the chain using the theory of sampling signals with finite rate of innovation. Under particular conditions, our technique allows us to detect whether a given signal has been reacquired through the chain, and it makes it possible to estimate important parameters of the chain from the acquisition-reconstruction artefacts left on the signal. The second part of the thesis presents a new algorithm for image recapture detection based on edge blurriness. Two overcomplete dictionaries are trained using the K-SVD approach to learn distinctive blurring patterns from sets of single-captured and recaptured images. An SVM classifier is then built using the dictionary approximation errors and the mean edge spread width of the training images. The algorithm, which requires no user intervention, was tested on a database that included more than 2500 high-quality recaptured images. Our results show that the method achieves a performance rate that exceeds 99% for recaptured images and 94% for single-captured images.
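As an illustration of the classification features described in the second part of this abstract, the sketch below computes the mean approximation error of edge patches under a learned dictionary using orthogonal matching pursuit. The dictionary, the patch extraction and the K-SVD training itself are assumed to exist elsewhere, and all names and parameters are illustrative rather than taken from the thesis.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def dictionary_approx_error(patches, dictionary, n_nonzero=5):
    """Mean reconstruction error of flattened edge patches coded over a learned
    dictionary (atoms as rows).  Computed once per dictionary (single-captured
    vs. recaptured), the two errors plus the mean edge spread width would form
    the feature vector fed to an SVM.  Illustrative sketch only."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    errors = []
    for p in patches:                          # p: 1-D patch, dictionary: (n_atoms, len(p))
        omp.fit(dictionary.T, p)               # columns of dictionary.T are the atoms
        errors.append(np.linalg.norm(p - dictionary.T @ omp.coef_))
    return float(np.mean(errors))
```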
4

Toker, Emre 1960. "A prototype charge-coupled device based image acquisition system for digital mammography." Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/277308.

Abstract:
A Charge-Coupled Device (CCD) based electronic imaging system is proposed to overcome limitations of conventional film/screen mammography systems at no additional risk or discomfort to the patient. This thesis presents the design, implementation and evaluation of a number of prototype systems incorporating the latest advances in x-ray intensifying phosphor screen, fiber optic reducer, and CCD technologies. System design is based on an x-ray intensifying screen optically coupled to a high resolution, cooled, scientific CCD through a fiber optic reducer. The performance of the prototype system is compared to theoretical predictions, to the ideal x-ray detector, and to conventional film/screen detectors. Images of breast phantoms captured by the prototype CCD-based system and by conventional mammography systems are presented. Experimental results indicate that the CCD-based system can provide "film quality" images within seconds of x-ray exposure in needle localization, fine-needle aspiration biopsy, and magnification procedures in mammography.
5

Ewing, Gary John. "Studies on the salient properties of digital imagery that impact on human target acquisition and the implications for image measures." Title page, contents and abstract only, 1999. http://hdl.handle.net/2440/37919.

Abstract:
Electronically displayed images are becoming increasingly important as an interface between people and information systems, and lengthy periods of intense observation are no longer unusual. There is a growing awareness that specific demands should be made on displayed images in order to achieve an optimum match with the perceptual properties of the human visual system. These demands may vary greatly, depending on the task for which the displayed image is to be used and the ambient conditions. Optimal image specifications are clearly not the same for a home TV, a radar signal monitor or an infrared targeting display. There is, therefore, a growing need for objective measures of image quality, where "image quality" is used in a very broad sense (and is defined in the thesis) and includes any impact of image properties on human performance in relation to specified visual tasks. The aim of this thesis is to consolidate and comment on the image-measure literature, and to find through experiment the salient properties of electronically displayed real-world complex imagery that impact on human performance. These experiments were carried out for well-specified visual tasks of real relevance, and the appropriate application of image measures to this imagery, to predict human performance, was considered. An introduction to certain aspects of image quality measures is given, and clutter metrics are integrated into this concept. A very brief and basic introduction to the human visual system (HVS) is given, with some basic models. The literature on image measures is analysed, resulting in a classification of image measures according to which features they attempt to quantify. A series of experiments was performed to evaluate the effects of image properties on human performance, using appropriate measures of performance. The concept of image similarity was explored by objectively measuring the subjective perception of imagery of the same scene obtained through different sensors and subjected to different luminance transformations. Controlled degradations were introduced by using image compression; both still and video compression were used to investigate the spatial and temporal aspects of HVS processing, and the effects of various compression schemes on human target acquisition performance were quantified. A study was carried out to determine the "local" extent to which the clutter around a target affects its detectability. It was found that the accepted wisdom of setting the local domain (the support of the metric) to twice the expected target size was incorrect: the local extent of clutter was found to be much greater, which has implications for the application of clutter metrics. An image quality metric called the gradient energy measure (GEM), for quantifying the effect of filtering on nuclear-medicine-derived images, was developed and evaluated. This proved to be a reliable measure of image smoothing and noise level, which in preliminary studies agreed with human perception. The final study discussed in this thesis determined the performance of human image analysts, in terms of their receiver operating characteristic, when using Synthetic Aperture Radar (SAR) derived images in the surveillance context. In particular, the effects of target contrast and background clutter on analyst target detection performance were quantified.
In the final chapter, suggestions to extend the work of this thesis are made, and in this context a system to predict human visual performance, based on input imagery, is proposed. This system intelligently uses image metrics based on the particular visual task and human expectations and human visual system performance parameters.
Thesis (Ph.D.)--Medical School; School of Computer Science, 1999.
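A minimal sketch of a gradient-energy style measure in the spirit of the GEM mentioned above; the exact normalisation used in the thesis may differ.

```python
import numpy as np

def gradient_energy(image):
    """Mean squared gradient magnitude of a grayscale image; the value falls
    as the image is smoothed and rises with noise, which is the behaviour a
    gradient-energy measure exploits (illustrative sketch)."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))
```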
6

Yiu, Yat-shun, and 姚溢訊. "Improving the contrast resolution of synthetic aperture imaging: motion artifact reduction based on interleaved data acquisition." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B4327867X.

7

Ramírez, Orozco Raissel. "High dynamic range content acquisition from multiple exposures." Doctoral thesis, Universitat de Girona, 2016. http://hdl.handle.net/10803/371162.

Abstract:
The limited dynamic range of digital images can be extended by composing different exposures of the same scene to produce HDR images. This thesis is composed of an overview of the state of the art techniques and three methods to tackle the image alignment and deghosting problems in the HDR imaging domain. The first method detects the areas affected by motion, registers the dynamic objects over a reference image, and combines low-dynamic range values to recover HDR values in the whole image. The second approach builds multiscopic HDR images from LDR multi-exposure images. It is based on a patch match algorithm which was adapted and improved to take advantage of epipolar geometry constraints of stereo images. The last method proposes to replace under/over exposed pixels in the reference image by using valid HDR values from other images in the multi-exposure LDR image sequence.
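For context, a generic merge of already aligned LDR exposures into an HDR estimate looks like the sketch below; it assumes a linear sensor response and is not the thesis' deghosting or patch-match pipeline.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Weighted average of per-exposure radiance estimates for aligned 8-bit
    LDR frames; the hat weight trusts mid-tones and down-weights under/over-
    exposed pixels (generic sketch, parameters illustrative)."""
    acc = np.zeros(images[0].shape, dtype=float)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        z = img.astype(float) / 255.0
        w = 1.0 - np.abs(2.0 * z - 1.0)       # 1 at mid-gray, 0 at the extremes
        acc += w * z / t                       # radiance estimate for this exposure
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-6)
```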
8

Grilo, Frederico José Lapa. "Modelo de processamento de imagem, com múltiplas fontes de aquisição, para manipulação aplicada à domótica." Doctoral thesis, Universidade de Évora, 2019. http://hdl.handle.net/10174/25787.

Abstract:
This work focuses on image processing models for computer vision. Image processing models with multi-acquisition and/or multi-perspective capabilities were developed to acquire knowledge of the surrounding environment, allowing command and control in the field of domotics and/or mobile robotics. The developed algorithms can be implemented in software or hardware blocks, either independently (autonomously) or integrated as components of more complex systems. Algorithm development focused on high performance constrained by the minimization of the computational burden. Four main research topics were addressed in the developed image processing models: a) detection of the movement of objects and human beings in an uncontrolled environment; b) detection of the human face to be used as a control variable (among other applications); c) the ability to use multiple sources of image acquisition and processing, under different uncontrolled lighting conditions, integrated into a complex system with different topologies; d) the ability to work autonomously or as a node in a distributed network, transmitting only final results, or integrated modularly into the final solution of complex image acquisition systems. Laboratory implementation, with prototype tests, was the decisive tool for improving all the algorithms developed in this work.
9

Stein, Andrew Neil. "Adaptive image segmentation and tracking : a Bayesian approach." Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/13397.

10

Gorbov, Sergey. "Practical Application of Fast Disk Analysis for Selective Data Acquisition." ScholarWorks@UNO, 2016. http://scholarworks.uno.edu/td/2230.

Abstract:
Using a forensic imager to produce a copy of the storage is common practice. Due to the large volumes of modern disks, imaging may impose severe time overhead, which ultimately delays the investigation process. We propose automated disk analysis techniques that precisely identify the regions of the disk that contain data. We also developed a high-performance imager that produces AFFv3 images at rates exceeding 300 MB/s. Using multiple disk analysis strategies we can analyze a disk within a few minutes and yet reduce the imaging time by many hours. Partial AFFv3 images produced by our imager can be analyzed by existing digital forensics tools, which allows our approach to be easily incorporated into the workflow of practicing forensic investigators. The proposed approach is feasible in forensic environments where time is a critical constraint, as it provides a significant performance boost, which facilitates faster investigation turnaround times and reduces case backlogs.
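A toy version of the kind of coarse analysis the abstract describes, walking a raw image in fixed-size blocks and recording only the extents that hold data, might look like this; the thesis' analysis strategies are more refined, and the block size and names are illustrative.

```python
def nonempty_extents(image_path, block_size=4096):
    """Return (start, end) byte extents of blocks that contain at least one
    non-zero byte, so only those regions need to be acquired (sketch only)."""
    extents, start = [], None
    offset = 0
    with open(image_path, "rb") as f:
        while chunk := f.read(block_size):
            if any(chunk):                        # block holds data
                if start is None:
                    start = offset
            elif start is not None:
                extents.append((start, offset))   # close the current extent
                start = None
            offset += len(chunk)
    if start is not None:
        extents.append((start, offset))
    return extents
```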
11

Parmentier, Alain. "Acquisition de cartes denses pour la génération et le contrôle de formes vestimentaires." Valenciennes, 1994. https://ged.uphf.fr/nuxeo/site/esupversions/96d52b8e-37f9-4146-8eaf-405f79a9426f.

Abstract:
This work concerns an application to the inspection of garment shapes in the clothing industry. The first chapter describes aspects of the application context: the elasticity and suppleness of garment shapes rule out any inspection based on mechanical probes. The second chapter gives a state of the art of optical and ultrasonic methods; an optical light-plane (laser sheet) technique is retained for inspecting garment regions with slowly varying curvature and Lambertian reflectance. The exit pupil of the setup, which spreads a laser beam into a plane of light, behaves like a thin slit diffracting at infinity. The experimental protocol, described in the third chapter, presents in particular a photographic workstation, a shape gauge and dressing procedures adapted to the textile behaviour of the fabrics studied; despite modest equipment, the photographic production is adapted to specific constraints. The fourth chapter presents tools for photographic control and image analysis. Laser speckle affects the images of the structured-light patterns; locally, the image of the laser-plane trace is identified by a chord, an approximation that implies a radiometric discrimination based on a statistical law, and morphological filtering then isolates the structured-light patterns efficiently. The fifth chapter presents the industrial software environment required to run a prototype application, an experimental evaluation as a function of garment surface states, and development perspectives. Independently of their surface state, the inspection of garment shapes depends on morphological specifications appropriate to each type of garment.
12

Tsang, Kwok-hon, and 曾國瀚. "Design of an aperture-domain imaging method and signal acquisition hardware for ultrasound-based vector flow estimation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43572315.

13

Ruggeri, Charles R. "High Strain Rate Data Acquisition of 2D Braided Composite Substructures." University of Akron / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=akron1255968114.

14

Havlicek, Joseph P. "Median filtering for target detection in an airborne threat warning system." Thesis, Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/80083.

Abstract:
Detection of point targets and blurred point targets in midwave infrared imagery is difficult because few assumptions can be made concerning the characteristics of the background. In this thesis, real time spatial prefiltering algorithms that facilitate the detection of such targets in an airborne threat warning system are investigated. The objective of prefiltering is to pass target signals unattenuated while rejecting background and noise. The use of unsharp masking with median filter masking operators is recommended. Experiments involving simulated imagery are described, and the performance of median filter unsharp masking is found to be superior to that of the Laplacian filter, the linear point detection filter, and unsharp masking with a mean filter mask. A primary difficulty in implementing real time median filters is the design of a mechanism for extracting local order statistics from the input. By performing a space-time transformation on a standard selection network, a practical sorting architecture for this purpose is developed. A complete hardware median filter unsharp masking design with a throughput of 25.6 million bits per second is presented and recommended for use in the airborne threat warning system.
Master of Science
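A minimal sketch of median-filter unsharp masking as a point-target prefilter, in the spirit of the abstract above; the window size and threshold factor are illustrative, not the thesis' values.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_unsharp_detect(frame, window=5, k=3.5):
    """Subtract a median-filtered background estimate so point-like targets
    stand out, then threshold the residual (illustrative sketch)."""
    background = median_filter(frame.astype(float), size=window)
    residual = frame.astype(float) - background   # background and clutter rejected
    threshold = residual.mean() + k * residual.std()
    return residual, residual > threshold          # residual image and detection mask
```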
15

Palmer, Jeremy L. "Teacher Training Via Digital Apprenticeship to Master Teachers of Arabic: Exposure, Reflection, and Replication as Instruments for Change in Novice Instructor Teaching Style." Diss., 2005. http://contentdm.lib.byu.edu/ETD/image/etd890.pdf.

16

Le Floch, Hervé. "Acquisition des images en microscopie électronique à balayage in situ." Toulouse 3, 1986. http://www.theses.fr/1986TOU30026.

Abstract:
This work concerns the acquisition chain for the image signal of the in-situ scanning electron microscope (MEBIS): a study of the noise sources associated with semiconductor detectors, the development of a fabrication process for surface-barrier detection diodes, and the design of an electronic board compatible with a microcomputer. The board allows digitisation, storage on floppy disk and display of the images supplied by the MEBIS.
17

Van, der Walt Stefan Johann. "Super-resolution imaging." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/5189.

Abstract:
Thesis (PhD (Electronic Engineering))--University of Stellenbosch, 2010.
Contains bibliography and index.
Super-resolution imaging is the process whereby several low-resolution photographs of an object are combined to form a single high-resolution estimation. We investigate each component of this process: image acquisition, registration and reconstruction. A new feature detector, based on the discrete pulse transform, is developed. We show how to implement and store the transform efficiently, and how to match the features using a statistical comparison that improves upon correlation under mild geometric transformation. To simplify reconstruction, the imaging model is linearised, whereafter a polygon-based interpolation operator is introduced to model the underlying camera sensor. Finally, a large, sparse, over-determined system of linear equations is solved, using regularisation. The software developed to perform these computations is made available under an open source license, and may be used to verify the results.
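The final reconstruction step described above, solving a large, sparse, over-determined linear system with regularisation, can be sketched as follows; the forward operator here is a random stand-in, not the thesis' polygon-based camera model.

```python
import numpy as np
from scipy.sparse import identity, random as sparse_random, vstack
from scipy.sparse.linalg import lsqr

def solve_regularized(A, b, lam=0.1):
    """Tikhonov-regularised least squares: minimise ||Ax - b||^2 + lam*||x||^2
    by stacking sqrt(lam)*I under the sparse operator A (illustrative)."""
    n = A.shape[1]
    A_aug = vstack([A, np.sqrt(lam) * identity(n, format="csr")])
    b_aug = np.concatenate([b, np.zeros(n)])
    return lsqr(A_aug, b_aug)[0]

# Toy usage with a random sparse forward model standing in for blur/warp/decimation.
rng = np.random.default_rng(0)
A = sparse_random(500, 100, density=0.05, format="csr", random_state=0)
x_true = rng.standard_normal(100)
b = A @ x_true + 0.01 * rng.standard_normal(500)
x_est = solve_regularized(A, b)
```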
18

Lopes, Daniel Pedro Ferreira. "Face verification for an access control system in unconstrained environment." Master's thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/23395.

Abstract:
Master's in Electronics and Telecommunications Engineering
Face recognition has received great attention over the last years, not only in the research community but also on the commercial side. One of the many uses of face recognition is in access control systems, where a person has one or several photos associated with an identification document (also known as identity verification). Although there are many studies nowadays, presenting either new algorithms or improvements to already developed ones, there are still many open problems regarding face recognition in uncontrolled environments, from the image acquisition conditions to the choice of the most effective detection and recognition algorithms, just to name a few. This thesis addresses a challenging environment for face verification: an unconstrained environment for access to sports infrastructures. As there are no controlled lighting conditions nor a controlled background, this is a difficult scenario in which to implement a face verification system. This thesis presents a study of some of the most important facial detection and recognition algorithms as well as some pre-processing techniques, such as face alignment and histogram equalization, with the aim of improving their performance. It also introduces some methods for more efficient image acquisition, based on image selection and camera calibration, specially designed for addressing this problem. Detailed experimental results are presented, based on two new databases created specifically for this study. Using the pre-processing techniques, it was possible to improve the recognition algorithms' verification performance by up to 20%. With the methods presented for the outdoor tests, performance improved by up to 30%.
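One of the pre-processing steps mentioned above, histogram equalisation of detected face crops, can be sketched with OpenCV; the cascade and parameters are a generic choice, not necessarily those used in the thesis.

```python
import cv2

def detect_and_equalize(bgr_frame):
    """Haar-cascade face detection followed by histogram equalisation of each
    cropped grayscale face (illustrative sketch)."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.equalizeHist(gray[y:y + h, x:x + w]) for (x, y, w, h) in faces]
```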
19

Pereira, Taissa Alexandra Lourenço Gamito. "Digital image acquisition for ophthalmoscope." Master's thesis, 2011. http://hdl.handle.net/10316/17401.

Abstract:
This project was carried out to satisfy a need: to provide an ophthalmoscope with digital data acquisition. In fact, most ophthalmoscopes offer no data-recording capability, so the main advantages of a digital ophthalmoscope are, among others, better-quality data sharing, the possibility of reassessing an exam, and improved medical teaching. Moreover, this work and the concept of a possible solution were the product of ongoing discussion between BlueWorks and clinical staff working at Coimbra University Hospital. Once the challenge was presented, several technical drawings of the prototype were produced in the AutoCAD® 2011 environment and a number of experiments were carried out in an optics laboratory. The resulting prototype contains an internal camera and fits into the Panoptic™ ophthalmoscope. To verify its capacity to record retinal images, the prototype was clinically tested and the outcomes were markedly positive. Usually, a technical solution by itself does not mean success; its performance must create impact in order to be considered essential. In this particular case, ocular fundus pathologies should be detected and data acquired with the best possible quality, this being the ultimate purpose of this scientific project. Taking into account its growing impact in the medical community, the prototype is now being recommended for daily clinical practice.
20

Banerjee, Serene. "Composition-guided image acquisition." Thesis, 2004. http://hdl.handle.net/2152/1222.

21

Wu, Gang. "Image Quality of Digital Breast Tomosynthesis: Optimization in Image Acquisition and Reconstruction." Thesis, 2014. http://hdl.handle.net/1807/65768.

Abstract:
Breast cancer continues to be the most frequently diagnosed cancer in Canadian women. Currently, mammography is the clinically accepted best modality for breast cancer detection, and the regular use of screening has been shown to contribute to reduced mortality. However, mammography suffers from several drawbacks which limit its sensitivity and specificity. As a potential solution, digital breast tomosynthesis (DBT) uses a limited number (typically 10-20) of low-dose x-ray projections to produce a three-dimensional tomographic representation of the breast. The reconstruction of DBT images is challenged by such incomplete sampling. The purpose of this thesis is to evaluate the effect of image acquisition parameters on DBT image quality for various reconstruction techniques and to optimize these, with three specific goals: A) develop a better power spectrum estimator for detectability calculation as a task-based image quality index; B) develop a paired-view algorithm for artifact removal in DBT reconstruction; and C) increase dose efficiency in DBT by reducing random noise. A better power spectrum estimator was developed using a multitaper technique, which yields reduced bias and variance compared to the conventional moving-average method and gives an improved detectability measurement with finer frequency steps. The paired-view scheme in DBT reconstruction provides better image quality than the commonly used sequential method; a simple ordering such as the "side-to-side" method achieves fewer artifacts and higher image quality in the reconstructed slices. The new denoising algorithm was applied to the projection views acquired in DBT before reconstruction; random noise was markedly removed while the anatomic details were maintained. With the help of the artifact-removal technique used in reconstruction and the denoising method employed on the projection views, the image quality of DBT is enhanced and lesions should be more readily detectable.
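The multitaper power-spectrum idea in goal A can be sketched for a 2-D noise region of interest as below; the number of tapers and the time-bandwidth product are illustrative choices, and normalisation is omitted.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_spectrum_2d(roi, nw=4.0, n_tapers=7):
    """Average the periodograms of a zero-mean ROI windowed by outer products
    of Slepian (DPSS) tapers; averaging over near-orthogonal tapers reduces the
    variance of the spectral estimate (illustrative sketch, unnormalised)."""
    roi = roi.astype(float) - roi.mean()
    tapers_y = dpss(roi.shape[0], nw, n_tapers)     # (n_tapers, ny)
    tapers_x = dpss(roi.shape[1], nw, n_tapers)     # (n_tapers, nx)
    spectra = [np.abs(np.fft.fft2(roi * np.outer(wy, wx))) ** 2
               for wy in tapers_y for wx in tapers_x]
    return np.mean(spectra, axis=0)
```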
22

Ho, Jason Ching-Hsien. "Gesture-Based Image Acquisition between Smartphone and Digital Signage." Thesis, 2011. http://hdl.handle.net/10012/5875.

Abstract:
Mobile phones have formed a social network within the phone subscriber population by allowing subscribers to exchange information. Nowadays, smartphones offer a variety of functionalities, such as built-in cameras, motion sensors, and Wi-Fi connectivity, which enable the subscriber to photograph a desired object and distribute it to other users through SMS or email. These functionalities make mobile phones a powerful tool for viral marketing among cellular phone subscribers. This thesis proposes a novel methodology that allows the phone subscriber to perform gestures to acquire images from public signage displays. The public display is signage that shows a list of images in chronological order and distributes the image list to nearby phones as datagrams by means of multicast; a Wi-Fi connection between the phone and the signage must be established to enable the multicast. Once the phone has downloaded the complete image list, the subscriber can point the phone at the signage and perform a dragging gesture upon seeing the desired image displayed on the signage. The current state of the project has concluded the development of the application to achieve this task, but the development of data transmission from one phone to another is still ongoing; further development would enable another gesture for data distribution to other phones in the vicinity. Web-based administration applications have also been developed to manage the image list in the signage. Through this web-based application, the administrator can generate a new image list and upload it to an FTP server; the signage periodically retrieves the image list from the FTP server and, after receiving the updated list, distributes it as datagrams by means of multicast. In summary, this thesis documents the impact of such technology in viral marketing research.
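The datagram-based distribution described above can be illustrated with a standard multicast receiver; the group address and port below are placeholders, not values from the thesis, and reassembly of the image list is omitted.

```python
import socket
import struct

MCAST_GROUP, MCAST_PORT = "239.1.2.3", 5007     # placeholder group/port

def image_list_datagrams():
    """Join a multicast group over the local Wi-Fi interface and yield the raw
    datagrams that carry the signage's image list (illustrative sketch)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    membership = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    while True:
        datagram, _addr = sock.recvfrom(65535)
        yield datagram
```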
23

LIN, YU-DE, and 林育德. "The digital system design on data acquisition of static image signal." Thesis, 1989. http://ndltd.ncl.edu.tw/handle/32304654532774612263.

24

Chen, Chun Yu, and 陳俊宇. "The Study of Pixel Interpolation, Image Scaling, and Contrast Enhancement in Digital Image Acquisition Systems." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/71379206871737711398.

Abstract:
Master's thesis
National Chiao Tung University
Department of Electronics Engineering
Academic year 89 (ROC calendar)
In this thesis, we propose several approaches to pixel interpolation, image scaling, edge enhancement, and brightness enhancement in digital image acquisition systems. For pixel interpolation, we propose a pattern-based method and an interpolation method based on polynomial fitting; these two approaches reduce the appearance of false color and the lattice effect. For image scaling, we discuss both cubic spline interpolation and B-spline interpolation. For edge enhancement, we adjust the enhancement factor at each pixel according to the image content near that pixel; this approach provides sharp edges, fewer ringing artifacts, and lower noise amplification. For brightness enhancement, we select the pixels around edges and build a histogram for them; by modifying this histogram in a proper way, the approach enhances brightness without affecting the perception of the original image.
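The content-adaptive edge enhancement described above, scaling the enhancement factor by local image activity, can be sketched as follows; the filter sizes and gains are illustrative, not those proposed in the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def adaptive_unsharp(image, sigma=1.0, k_max=1.5, win=5):
    """Unsharp masking whose per-pixel gain grows with local variance, so flat
    (noise-prone) regions are sharpened less than busy edge regions (sketch)."""
    img = image.astype(float)
    detail = img - gaussian_filter(img, sigma)
    local_var = np.maximum(uniform_filter(img ** 2, win) - uniform_filter(img, win) ** 2, 0.0)
    gain = k_max * local_var / (local_var + local_var.mean() + 1e-9)
    return np.clip(img + gain * detail, 0, 255)
```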
25

Wang, Jing-Zun, and 王靖尊. "Laser-pulse-synchronized image acquisition system: the optimization of digital and analog circuits." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/g529dv.

Abstract:
Master's thesis
National Taiwan University
Institute of Biomedical Engineering
Academic year 104 (ROC calendar)
The routine human blood test is an important indicator in the evaluation of personal health. Analytical instruments such as flow cytometers are used to count erythrocytes, leukocytes, and platelets, and each count carries its own significance in differential diagnosis. However, blood status is usually examined through invasive procedures such as blood sampling, which not only burden patients but can also lead to deterioration of the specimens during delivery and thus to errors in health evaluation. As medical technology advances, many experimental non-invasive biomedical imaging methods have been developed. Using an interferometer, near-infrared light, and the principle of interference to image biological tissue, optical coherence tomography provides two-dimensional tomographic images of in vivo tissue with micron-grade resolution. By using a pinhole to block unfocused light and the interference of scattered light, confocal microscopy provides biopsy-like images with sub-micron resolution, and because of the high tissue penetration depth of infrared light these techniques can image blood cells in vivo without staining. Owing to the diffraction limit and tissue scattering, however, confocal microscopy cannot provide clear sub-micron-resolution images deep in tissue and therefore lacks the ability to identify the various types of blood cells. Compared with other optical tomographic microscopies, harmonic generation microscopy is characterized by sub-micron three-dimensional resolution deep in tissue. It has been verified that it can image flowing human blood cells and determine the number of leukocytes in vivo, making it currently the most promising candidate for a non-invasive in vivo imaging flow cytometer; in addition, no dye is needed during the examination. When the blood cell composition reported by flow cytometry is in doubt, the patient's blood specimen is sent for a blood smear, in which cells are stained to investigate their morphology. With third harmonic generation (THG) microscopy there is no need to stain the blood cells before investigation, and the images can be saved, which not only saves the examination time of the blood smear but also preserves the original morphology of the blood cells. To provide a stable light source for third harmonic generation microscopy, a 1150 nm femtosecond fiber laser system was built in our laboratory; it is relatively insensitive to temperature and humidity compared with Ti:sapphire and chromium-doped forsterite lasers. The microscopy system provides four analog signal modes: second harmonic generation, third harmonic generation, multi-photon fluorescence, and confocal single-photon reflection microscopy. Commercial analog-to-digital capture boards cannot process these four signal types directly, and a DC-depletion phenomenon occurs during conversion between the two signal types, weakening the signal amplitude. For image capture, the pulse-repetition rate of the laser source is only 11.25 MHz, and when the focal point is shifted by the fast-steering tilt-axis scan mirrors (8 kHz), the interval between sample points is comparatively long.
It is therefore essential to capture the maximum of every signal point and to avoid sampling weaker parts of the signal, which would reduce image intensity. To obtain high-resolution, real-time observation of blood cell status, the hardware of the microscopy system must be compatible with the optical signals so as to preserve the DC component, and the software must capture the analog signals excited by the laser in synchrony with the laser pulses, so that the maximum of each analog signal is sampled and image contrast is improved. In this thesis we designed the image acquisition system with a field-programmable gate array (FPGA), which not only reduces development time but also offers high efficiency and reliability. Using the phase-locked loops (PLLs) in the FPGA, we synchronized the sampling clock with the laser pulses and designed an analog-to-digital converter board (ADCB) matched to this FPGA board. In addition, we programmed a graphical user interface with multi-channel 15 Hz frame-rate windows to display and store the images; the interface can also change FPGA parameters on the fly, such as the imaging range, the sampling clock frequency, and the laser pulse phase. We also provide another capture mode (XYT mode) for long video recording that sets the recording intervals flexibly; it can save single images or multiple images averaged over certain time intervals, reducing the size of the saved data, and it provides basic image processing functions. By synchronizing the sampling clock with the laser pulses and using our analog-to-digital converter board, we can clearly observe images of blood cells in mouse ear capillaries, thereby reducing the difficulty of automated image interpretation. We hope that, for future clinical applications, this synchronous acquisition system will make non-invasive automatic evaluation of blood cell speed and number with the fiber femtosecond laser microscope faster and more precise.
26

Hou, Chi-Sheng, and 侯志陞. "Knowledge Acquisition for FCS : A Case Study on Automatic Thresholding of Digital Image." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/15239650591553961201.

Abstract:
Master's thesis
Tatung Institute of Technology
Institute of Computer Science and Engineering
Academic year 81 (ROC calendar)
This research focuses on the development of fuzzy control rules for systems that are subject to external impacts which are usually difficult to model. First, knowledge acquisition for this type of control process is discussed. The membership functions of the input and output variables are then adjusted properly through a large number of trial-and-error experiments and analysis. With the success of this acquisition, a fuzzy control system becomes both operational and functional. A case study of dynamic automatic thresholding for digital image processing is used to illustrate the proposed approach.
27

Davies, A. G., Amber J. Gislason-Lee, A. R. Cowen, S. M. Kengyelics, M. Lupton, J. Moore, and M. Sivananthan. "Does the use of additional x-ray beam filtration during cine acquisition reduce clinical image quality and effective dose in cardiac interventional imaging?" 2014. http://hdl.handle.net/10454/16963.

Abstract:
The impact of spectral filtration in digital (‘cine’) acquisition was investigated using a flat panel cardiac interventional X-ray imaging system. A 0.1-mm copper (Cu) and 1.0-mm aluminium (Al) filter added to the standard acquisition mode created the filtered mode for comparison. Image sequences of 35 patients were acquired, a double-blind subjective image quality assessment was completed and dose–area product (DAP) rates were calculated. Entrance surface dose (ESD) and effective dose (E) rates were determined for 20- and 30-cm phantoms. Phantom ESD fell by 28 and 41 % and E by 1 and 0.7 %, for the 20- and 30-cm phantoms, respectively, when using the filtration. Patient DAP rates fell by 43 % with no statistically significant difference in clinical image quality. Adding 0.1-mm Cu and 1.0-mm Al filtration in acquisition substantially reduces patient ESD and DAP, with no significant change in E or clinical image quality.
Supported in part by a research grant from Philips Healthcare, The Netherlands.
28

Gracia, Eric Enrique Flores De, and 艾利克. "Study of Chromatophores and their Pigmentation Pattern in the Atydae Shrimp Neocaridina denticulate (de Haan 1844) through Digital Image Acquisition and Processing." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/50735663718874202791.

Abstract:
Master's thesis
National Taiwan Ocean University
Department of Aquaculture
Academic year 95 (ROC calendar)
According to the Food and Agriculture Organization of the United Nations, the global ornamental fish market in 2000 was worth 900 million USD wholesale and 3 billion USD retail; in 2003 the industry, including its peripheral industries, was valued at 14 billion USD. Body colour, and in some cases flesh colour, is one of the most important quality criteria determining the market value of fish. In this study, the methods for pigmentation assessment in aquatic animals were first reviewed, and two experiments were conducted to test a new application for pigment assessment, to determine the best area for pigmentation monitoring in atyid shrimp, and to find out the effect of dietary carotenoids on the pigmentation of juvenile atyid shrimp. In the review, the common methods for pigment assessment in aquatic animals were discussed and compared. The methods can be grouped into chemical methods, i.e. HPLC, TLC and spectrophotometry, which analyse quantitatively and/or qualitatively the carotenoid compounds in the tissue, and physical methods, i.e. colourimetry, sensory analysis, and digital image acquisition and processing (DIAP), which evaluate the colour reflected by the pigments in the tissue. The use of chemical methods combined with physical methods can be a more comprehensive approach to reveal the correlation of pigment concentration and colour. The new application of DIAP described here has great potential for precise quantitative and qualitative evaluation of pigmentation in aquatic animals of all sizes without causing stress or damage to the animals. In the first experiment, the morphological and chromatic characteristics of chromatophores in the carapace, somite and exopod of deep red, slightly red and pale colour groups of adult Neocaridina denticulata were captured photographically and analysed digitally. Chromatophores in the deep red group were the strongest in colouration and the largest in size, and the chromatophores on the anterior parts of the shrimp body (i.e. carapace and somite) were bigger and stronger in colour than those in the exopod. The exopod was the most suitable area for pigmentation monitoring on the shrimp's body surface. In the second experiment, diets containing three different carotenoid sources (natural astaxanthin from Phaffia yeast, synthetic astaxanthin, and synthetic β-carotene) at two concentrations (80 and 160 mg kg-1) were fed to juvenile N. denticulata for 5 weeks, and the DIAP method was applied to examine the morphological and chromatic parameters of the chromatophores of the resulting shrimp. Either natural or synthetic astaxanthin at 160 mg kg-1 was effective in enhancing morphological and colour change in the chromatophores, whereas β-carotene was less effective. Pigment deposition reached saturation after 3 weeks of carotenoid supplementation, since the change in hue, redness, and dispersion of the chromatophores had by then been greatly reduced. The new application of DIAP proved to be a useful and precise tool for the study of chromatophores and colour in aquatic animals.
29

Schwartz, Tal Shimon. "Data-guided statistical sparse measurements modeling for compressive sensing." Thesis, 2013. http://hdl.handle.net/10012/7418.

Abstract:
Digital image acquisition can be a time-consuming process when high spatial resolution is required, so optimizing the acquisition mechanism is of high importance for many measurement applications. Acquiring such data through a dynamically chosen small subset of measurement locations can address this problem. In such a case the measured information can be regarded as incomplete, which necessitates special reconstruction tools to recover the original data set. The reconstruction can be performed based on the concept of sparse signal representation; recovering signals and images from sub-Nyquist measurements forms the core idea of compressive sensing (CS). In this work, a CS-based data-guided statistical sparse measurements method is presented, implemented and evaluated. This method significantly improves image reconstruction from sparse measurements. In the data-guided statistical sparse measurements approach, the signal sampling distribution is optimized to improve image reconstruction performance; the sampling distribution is based on the underlying data rather than the commonly used uniform random distribution. The optimal sampling pattern probability is obtained by a learning process, through two methods: direct and indirect. The direct method learns a nonparametric probability density function directly from the dataset, while the indirect method is used when a mapping between extracted features and the probability density function is required. The unified model is implemented for different representation domains, including the frequency domain and the spatial domain. Experiments were performed for multiple applications such as optical coherence tomography, bridge structure vibration, robotic vision, 3D laser range measurements and fluorescence microscopy. Results show that the data-guided statistical sparse measurements method significantly outperforms conventional CS reconstruction: it achieves a much higher reconstruction signal-to-noise ratio for the same compression rate or, alternatively, a similar reconstruction signal-to-noise ratio with significantly fewer samples.
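The core idea above, drawing measurement locations from a data-derived density instead of a uniform one, can be sketched like this; the probability map suggested in the trailing comment is only one plausible choice, not the learned density from the thesis.

```python
import numpy as np

def data_guided_mask(prob_map, n_samples, seed=None):
    """Boolean sampling mask whose per-pixel selection probability follows a
    data-derived density rather than a uniform distribution (sketch only)."""
    rng = np.random.default_rng(seed)
    p = prob_map.ravel() / prob_map.sum()
    chosen = rng.choice(p.size, size=n_samples, replace=False, p=p)
    mask = np.zeros(p.size, dtype=bool)
    mask[chosen] = True
    return mask.reshape(prob_map.shape)

# e.g. bias sampling toward high-gradient regions of a low-resolution preview:
# prob_map = 1e-3 + np.hypot(*np.gradient(preview.astype(float)))
```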
30

Egoda, Gamage Ruwan Janapriya. "A high resolution 3D and color image acquisition system for long and shallow impressions in crime scenes." Thesis, 2014. http://hdl.handle.net/1805/5906.

Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
In crime scene investigations it is necessary to capture images of impression evidence such as tire tracks or shoe impressions. Currently, such evidence is captured by taking two-dimensional (2D) color photographs or by making a physical cast of the impression in order to capture its three-dimensional (3D) structure. This project aims to build a digitizing device that scans the impression evidence and generates (i) a high resolution three-dimensional (3D) surface image, and (ii) a co-registered two-dimensional (2D) color image. The method is based on active structured lighting in order to extract the 3D shape of a surface. A prototype device was built that uses an assembly of two line laser lights and a high-definition video camera moved at a precisely controlled, constant speed along a mechanical actuator rail in order to scan the evidence. Prototype software was also developed which implements the image processing, calibration, and surface depth calculations. The methods developed in this project for extracting the digitized 3D surface shape and 2D color images include (i) a self-contained calibration method that eliminates the need for pre-calibration of the device; (ii) the use of two colored line laser lights projected from two different angles to eliminate problems due to occlusions; and (iii) the extraction of a high resolution color image of the impression evidence with minimal distortion. The system achieves sub-millimeter accuracy in the depth image and a high resolution color image that is registered with the depth image. It is particularly suitable for high quality imaging of long tire track impressions without the need to stitch multiple images.
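For a single camera row crossed by one of the laser lines, the depth-from-triangulation step can be sketched as below; the geometry convention and all parameters are illustrative and much simpler than the self-contained calibration described in the abstract.

```python
import numpy as np

def laser_row_depth(row, baseline_mm, focal_px, sheet_angle_rad, principal_x):
    """Toy triangulation: with a pinhole camera at the origin looking along +Z
    and a laser emitter at X = baseline_mm projecting a sheet that satisfies
    X = baseline_mm - Z / tan(sheet_angle_rad), the depth of the laser trace
    follows from its pixel offset u relative to the principal point."""
    u = int(np.argmax(row)) - principal_x        # brightest pixel = laser trace
    return baseline_mm * focal_px / (u + focal_px / np.tan(sheet_angle_rad))
```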
31

Oliveira, André Joaquim Barbosa de. "Desenvolvimento de software para aplicação no controlo e monitorização de plataforma móvel de recolha de bolas de golfe." Master's thesis, 2007. http://hdl.handle.net/1822/65408.

Abstract:
Dissertation submitted to the Universidade do Minho for the degree of Master in Industrial Electronics and Computers Engineering
Golf is a sport that requires great practice and concentration. To develop these skills, practice facilities are built, among them the driving range for first-strike training, which allows the player to improve the striking position and the maximum reach of the ball. This type of training requires a large stock of balls and a system that constantly collects the golf balls while play continues, which can endanger whoever collects the balls and leads to higher costs due to the large stock. To reduce these drawbacks, an autonomous mobile platform was developed to collect golf balls, with ball detection performed locally with the help of computer vision. The present work consists of the development of image processing software to detect golf balls, so as to direct the mobile platform towards the areas of a driving range with the highest concentration of balls, and of software for remote monitoring of the robot's surroundings through various sensors, including the implementation of communication protocols between three processing units (continuous processing unit, image processing unit, and remote display unit). The image processing methodology is based on searching for contours in a grayscale image, applying smoothing filters followed by contour enhancement filters; a sketch of this pipeline is given below. After the contours are detected, selection criteria are applied so that only the intended objects (golf balls) are kept, defined by characteristics such as radius, colour of the central pixel, and so on. These characteristics are obtained and calibrated through the study of several images acquired on the driving range at different levels of solar incidence.
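A compact OpenCV sketch of the contour-based detection pipeline summarised above (blur, edge detection, contour selection by radius and centre brightness); every threshold here is illustrative rather than a calibrated value from the dissertation.

```python
import cv2

def find_golf_balls(bgr, r_min=4, r_max=30, min_center_gray=180):
    """Smooth, detect edges, extract contours and keep those whose enclosing
    circle and centre-pixel brightness look like a golf ball (sketch only)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 60, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    balls = []
    for contour in contours:
        (cx, cy), radius = cv2.minEnclosingCircle(contour)
        x, y = int(cx), int(cy)
        inside = 0 <= y < gray.shape[0] and 0 <= x < gray.shape[1]
        if r_min < radius < r_max and inside and gray[y, x] >= min_center_gray:
            balls.append((x, y, int(radius)))
    return balls
```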
32

Chen, Ying-Chieh, and 陳英傑. "Image-Based Model Acquisition and Interactive Rendering for Building 3D Digital Archives." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/96060191221566977822.

Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
Academic year 93 (ROC calendar)
We demonstrate a process and a system for building three-dimensional (3D) digital archives of museum artifacts. Our system allows the targeted audience to observe the digitized 3D objects from any angle and at any distance, while still conveying the textures and the material properties with high fidelity. Our system acquires input images mainly with a handheld digital camera; it is therefore portable and easy to set up at museum sites. The results we show include two digitized art pieces from the Yingko Ceramics Museum in Taipei County, Taiwan. Our system can also use a QuickTimeVR object movie as the initial input, which we demonstrate using the Jadeite Cabbage data from the National Palace Museum.
33

Pereira, Carla Susana Silva. "SEM-Botany: development of a digital system for the acquisition, storing and processing images from the SEM." Master's thesis, 2007. http://hdl.handle.net/10316/12256.

Abstract:
The Scanning Electron Microscope (SEM) currently installed in the Electron Microscopy Laboratory of the Department of Botany, Faculty of Sciences and Technology of the University of Coimbra, is a JEOL JSM-5400 acquired in the early 1990s. This equipment produces an image on an LCD screen that can be captured using a photographic system; however, such images (photographs) are more difficult to share, store and preserve than digital images. The main objective of this project is the development of a digital system for the acquisition, storage and processing of images from the SEM, which may replace the current system. The development of this new system involves the specification and installation of a frame grabber (the hardware component) and the specification, coding and testing of an application developed in Matlab (the software component).
