
Dissertations / Theses on the topic 'Image quality enhancement'

Consult the top 45 dissertations / theses for your research on the topic 'Image quality enhancement.'

You can also download the full text of each academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1. Tummala, Sai Virali, and Veerendra Marni. "Comparison of Image Compression and Enhancement Techniques for Image Quality in Medical Images." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15360.

2. Kotha, Aravind Eswar Ravi Raja, and Lakshmi Ratna Hima Rajitha Majety. "Performance Comparison of Image Enhancement Algorithms Evaluated on Poor Quality Images." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13880.

Abstract:
Many applications require automatic image analysis for input images of varying quality. In many cases, the quality of the acquired images is suitable for the purpose of the application; in other cases, however, it has to be modified according to the needs of a specific application. Higher image quality can be achieved by Image Enhancement (IE) algorithms. The choice of IE technique is challenging because it varies with the application purpose. The goal of this research is to investigate the possibility of applying IE algorithms selectively. The entropy and Peak Signal-to-Noise Ratio (PSNR) of the acquired image are used as the parameters for this selection. Three algorithms, Retinex, the Bilateral filter, and Bilateral tone adjustment, were chosen as the IE techniques for evaluation in this work, with entropy and PSNR used for their performance evaluation. In this study, images from three fingerprint image databases were used as input images to investigate the algorithms. The decision to enhance an image in these databases with the considered algorithms is based on empirically evaluated entropy and PSNR thresholds. The Automatic Fingerprint Identification System (AFIS) was selected as the application of interest. The evaluation results show that the performance of the investigated IE algorithms significantly affects the performance of AFIS. The second conclusion is that entropy and PSNR may be considered as indicators of whether an input image requires IE before AFIS.
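Both selection parameters used in this work are simple to compute. As a minimal sketch (NumPy only; the function names and the toy images are illustrative, not taken from the thesis), the histogram entropy of an 8-bit grayscale image and the PSNR between a reference and a test image can be computed as follows:

```python
import numpy as np

def entropy(img):
    """Shannon entropy (in bits) of an 8-bit image's grey-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty bins (0 * log 0 = 0)
    return float(-np.sum(p * np.log2(p)))

def psnr(reference, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)

# A flat image has zero histogram entropy; additive noise lowers PSNR.
rng = np.random.default_rng(0)
flat = np.full((64, 64), 128, dtype=np.uint8)
noisy = np.clip(flat.astype(int) + rng.integers(-10, 11, flat.shape),
                0, 255).astype(np.uint8)
```

A selection rule in the spirit of the thesis would then enhance an image only when its entropy or PSNR falls below an empirically chosen threshold.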
3. Pitkänen, P. (Perttu). "Automatic image quality enhancement using deep neural networks." Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201904101454.

Abstract:
Photo retouching can significantly improve image quality, and it is considered an essential part of photography. Traditionally this task has been completed manually with special image enhancement software. However, recent research has shown that neural networks outperform traditional methods on the automated image enhancement task. During the literature review of this thesis, multiple automatic neural-network-based image enhancement methods were studied, and one of these methods was chosen for closer examination and evaluation. The chosen network design has several appealing qualities, such as the ability to learn both local and global enhancements, and a simple architecture built for computational efficiency. This research proposes a novel dataset generation method for automated image enhancement research and tests its usefulness with the chosen network design. The method simulates commonly occurring photographic errors, so that the original high-quality images can be used as the target data. This dataset design allows studying fixes for individual and combined aberrations. The underlying idea of this design choice is that the network learns to fix these aberrations while producing aesthetically pleasing and consistent results. The quantitative evaluation showed that the network can learn to counter these errors and, with greater effort, could also learn to enhance all of these aspects simultaneously. Additionally, the network's capability to learn local and portrait-specific enhancement tasks was evaluated. The models can apply the effect successfully, but the results did not reach the same level of accuracy as the global enhancement tasks.
According to the completed qualitative survey, images enhanced by the proposed general enhancement model show successfully improved image quality, and the model can perform better than some of the state-of-the-art image enhancement methods.
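The dataset-generation idea described above (simulate photographic errors, keep the originals as training targets) can be sketched in a few lines. This is an illustrative reconstruction, not the thesis' code: the error types and parameter ranges chosen here (a random exposure shift plus additive noise) are assumptions.

```python
import numpy as np

def degrade(img, rng):
    """Simulate common photographic errors on a clean [0, 1] image so the
    original can serve as the training target."""
    exposure = rng.uniform(0.5, 1.5)            # under- or over-exposure
    noise = rng.normal(0.0, 0.03, img.shape)    # additive sensor noise
    return np.clip(img * exposure + noise, 0.0, 1.0)

rng = np.random.default_rng(42)
clean = np.linspace(0.0, 1.0, 32).reshape(1, -1).repeat(32, axis=0)
degraded, target = degrade(clean, rng), clean   # one (input, target) pair
```

Training the enhancement network on many such pairs teaches it to invert the simulated aberrations, individually or combined.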
4. Headlee, Jonathan Michael. "A No-reference Image Enhancement Quality Metric and Fusion Technique." University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1428755761.

5. Ozyurek, Serkan. "Image Dynamic Range Enhancement." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613603/index.pdf.

Abstract:
In this thesis, image dynamic range enhancement methods are studied in order to solve the problem of representing high dynamic range scenes with low dynamic range images. For this purpose, two main image dynamic range enhancement methods, high dynamic range imaging and exposure fusion, are studied. A more detailed analysis of exposure fusion algorithms is carried out because the whole enhancement process in exposure fusion is performed in the low dynamic range, and these algorithms do not need any prior information about the input images. In order to evaluate the performance of exposure fusion algorithms, both objective and subjective quality metrics are used. Moreover, the correlation between the objective quality metrics and subjective ratings is studied in the experiments.
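Exposure fusion can be illustrated without any high-dynamic-range intermediate: each exposure gets a per-pixel quality weight and the stack is blended. The sketch below uses only the "well-exposedness" weight from Mertens et al.'s classic formulation and deliberately skips the multiresolution (Laplacian pyramid) blending a real implementation would add; all names and parameters are illustrative.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Per-pixel Gaussian weight favouring mid-range intensities."""
    return np.exp(-((img - 0.5) ** 2) / (2.0 * sigma**2))

def naive_exposure_fusion(stack):
    """Blend a stack of aligned [0, 1] grayscale exposures with normalized
    well-exposedness weights (no pyramid blending, so seams may appear)."""
    stack = np.asarray(stack, dtype=float)
    w = well_exposedness(stack) + 1e-12          # avoid division by zero
    w /= w.sum(axis=0, keepdims=True)
    return (w * stack).sum(axis=0)

# Three renditions of the same gradient scene at different exposures.
scene = np.linspace(0.0, 1.0, 256).reshape(1, -1).repeat(8, axis=0)
fused = naive_exposure_fusion([scene * 0.3, scene, np.clip(scene * 1.7, 0, 1)])
```

Since the weights form a convex combination at every pixel, the fused result stays inside the low dynamic range of the inputs, which is exactly why exposure fusion needs no prior information about the scene's true radiance.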
6. Hettiarachchi, Don Lahiru Nirmal Manikka. "An Accelerated General Purpose No-Reference Image Quality Assessment Metric and an Image Fusion Technique." University of Dayton / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1470048998.

7. Hinduja, Saurabh. "Pedestrian Detection in Low Quality Moving Camera Videos." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6514.

Abstract:
Pedestrian detection is one of the most researched areas in computer vision and is rapidly gaining importance with the emergence of autonomous vehicles and steering-assistance technology. Much work has been done in this field, ranging from the collection of extensive datasets to the benchmarking of new technologies, but most research depends on high-quality hardware such as high-resolution cameras, Light Detection and Ranging (LIDAR) and radar. For detection in low-quality moving camera videos, we use image deblurring techniques to reconstruct image frames, apply existing pedestrian detection algorithms to them, and compare our results with the leading research in this area.
8. Cai, Hongmin. "Quality enhancement and segmentation for biomedical images." E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/hkuto/record/B39380130.

9. Cai, Hongmin, and 蔡宏民. "Quality enhancement and segmentation for biomedical images." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39380130.

10. Arici, Tarik. "Single and multi-frame video quality enhancement." Diss., Atlanta, Ga.: Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29722.

Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Yucel Altunbasak; Committee Member: Brani Vidakovic; Committee Member: Ghassan AlRegib; Committee Member: James Hamblen; Committee Member: Russ Mersereau. Part of the SMARTech Electronic Thesis and Dissertation Collection.
11. Geijer, Håkan. "Radiation dose and image quality in diagnostic radiology: optimization of the dose-image quality relationship with clinical experience from scoliosis radiography, coronary intervention and a flat-panel digital detector." Linköping: Univ., 2001. http://www.bibl.liu.se/liupubl/disp/disp2001/med706s.htm.

12. Kim, Cheol-Sung. "Digital Color Image Enhancement Based on Luminance & Saturation." Diss., The University of Arizona, 1987. http://hdl.handle.net/10150/184228.

Abstract:
This dissertation analyzes the characteristics that distinguish color images from monochromatic images, combines these characteristics with monochromatic image enhancement techniques, and proposes useful color image enhancement algorithms. The luminance, hue, and saturation (L-H-S) color space is selected for color image enhancement. Color luminance is shown to play the most important role in achieving good image enhancement. Color saturation also exhibits unique features which contribute to the enhancement of high-frequency details and color contrast. The local windowing method, one of the most popular image processing techniques, is rigorously analyzed for the effects of window size and weighting values on the visual appearance of an image, and the subjective enhancement afforded by local image processing techniques is explained in terms of the human visual system's response. The proposed digital color image enhancement algorithms are based on the observation that an enhanced luminance image yields a good color image in L-H-S color space when the chromatic components (hue and saturation) are kept the same. The saturation component usually contains high-frequency details that are not present in the luminance component. However, processing only the saturation, while keeping the luminance and hue unchanged, is not satisfactory because the human visual system's response acts as a low-pass filter to the chromatic components. To exploit the high-frequency details of the saturation component, we take the high-frequency component of the inverse saturation image, which correlates with the luminance image, and process the luminance image proportionally to this inverse saturation image. The proposed algorithms are simple to implement. The three main application areas in image enhancement, contrast enhancement, sharpness enhancement, and noise smoothing, are discussed separately. The computer processing algorithms are restricted to those which preserve the natural appearance of the scene.
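The central observation, enhancing luminance while holding hue and saturation fixed, is easy to demonstrate with the standard library's colour conversions. A per-pixel loop is slow but keeps the sketch dependency-free; the gain value and the test patch are arbitrary, and this is only the basic idea, not the dissertation's full algorithm.

```python
import colorsys
import numpy as np

def enhance_luminance(rgb, gain=1.5):
    """Scale only the lightness channel in H-L-S space; hue and saturation
    are passed through unchanged."""
    out = np.empty_like(rgb, dtype=float)
    for idx in np.ndindex(rgb.shape[:2]):
        h, l, s = colorsys.rgb_to_hls(*rgb[idx])
        out[idx] = colorsys.hls_to_rgb(h, min(1.0, l * gain), s)
    return out

# A dark red patch gets lighter while keeping the same red hue.
patch = np.full((4, 4, 3), (0.4, 0.1, 0.1))
brighter = enhance_luminance(patch)
```

Because only lightness changes, the enhanced image keeps the original chromatic appearance, which is exactly the property the dissertation exploits.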
13

de, Silva Manawaduge Supun Samudika. "An Approach to Utilize a No-Reference Image Quality Metric and Fusion Technique for the Enhancement of Color Images." University of Dayton / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1470049079.

14. Gunturk, Bahadir K. "Multi-frame information fusion for image and video enhancement." Diss., Georgia Institute of Technology, 2003. Available online: http://etd.gatech.edu/theses/available/etd-04072004-180015/unrestricted/gunturk%5Fbahadir%5Fk%5F200312%5Fphd.pdf.

15. Youmaran, Richard. "Algorithms to Process and Measure Biometric Information Content in Low Quality Face and Iris Images." Thesis, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19729.

Abstract:
Biometric systems identify human persons based on physiological or behavioral characteristics, such as voice, handprint, iris or facial characteristics. The use of face and iris recognition as a way to authenticate users' identities has been a topic of research for years. Present iris recognition systems require that subjects stand close (<2 m) to the imaging camera and look at it for about three seconds until the data are captured. This cooperative behavior is required in order to capture quality images for accurate recognition, but it restricts the practical applications of iris recognition, especially in uncontrolled environments where subjects, such as criminals and terrorists, are not expected to cooperate. For this reason, this thesis develops a collection of methods for low-quality face and iris images that can be applied to face and iris recognition in a non-cooperative environment. This thesis makes the following main contributions:
I. For eye and face tracking in low-quality images, a new robust method is developed. The proposed system consists of three parts: face localization, eye detection and eye tracking. This is accomplished using traditional image-based passive techniques, such as shape information of the eye, and active methods which exploit the spectral properties of the pupil under IR illumination. The developed method is also tested on underexposed images where the subject shows large head movements.
II. For iris recognition, a new technique is developed for accurate iris segmentation in low-quality images where a major portion of the iris is occluded. Most existing methods generally perform quite well but tend to overestimate the occluded regions, and thus lose iris information that could be used for identification. This information loss is potentially important in the covert surveillance applications we consider in this thesis. Once the iris region is properly segmented using the developed method, the biometric feature information is calculated for the iris region using the relative entropy technique. Iris biometric feature information is calculated using two different feature decomposition algorithms based on Principal Component Analysis (PCA) and Independent Component Analysis (ICA).
III. For face recognition, a new approach is developed to measure biometric feature information and the changes in biometric sample quality resulting from image degradations. A definition of biometric feature information is introduced and an algorithm to measure it is proposed, based on a set of population and individual biometric features, as measured by a biometric algorithm under test. Examples of its application are shown for two face recognition algorithms based on PCA (Eigenface) and Fisher Linear Discriminant (FLD) feature decompositions.
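The relative-entropy measure behind "biometric feature information" is the Kullback-Leibler divergence between two feature distributions, for instance one individual's feature histogram against the population's. A minimal discrete sketch follows; the epsilon smoothing and the toy histograms are assumptions for illustration, not the thesis' exact procedure.

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) in bits between two discrete
    distributions, with a small epsilon to keep the logarithm finite."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log2(p / q)))

population = [0.25, 0.25, 0.25, 0.25]   # population feature histogram
individual = [0.70, 0.10, 0.10, 0.10]   # one person's feature histogram
info = relative_entropy(individual, population)
```

The more an individual's feature distribution departs from the population's, the more bits of identifying information the feature carries; identical distributions give zero.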
16. Rehm, Kelly. "Development and image quality assessment of a contrast-enhancement algorithm for display of digital chest radiographs." Diss., The University of Arizona, 1992. http://hdl.handle.net/10150/185844.

Abstract:
This dissertation presents a contrast-enhancement algorithm called Artifact-Suppressed Adaptive Histogram Equalization (ASAHE). This algorithm was developed as part of a larger effort to replace the film radiographs currently used in radiology departments with digital images. Among the expected benefits of digital radiology are improved image management and greater diagnostic accuracy. Film radiographs record X-ray transmission data at high spatial resolution, and a wide dynamic range of signal. Current digital radiography systems record an image at reduced spatial resolution and with coarse sampling of the available dynamic range. These reductions have a negative impact on diagnostic accuracy. The contrast-enhancement algorithm presented in this dissertation is designed to boost diagnostic accuracy of radiologists using digital images. The ASAHE algorithm is an extension of an earlier technique called Adaptive Histogram Equalization (AHE). The AHE algorithm is unsuitable for chest radiographs because it over-enhances noise, and introduces boundary artifacts. The modifications incorporated in ASAHE suppress the artifacts and allow processing of chest radiographs. This dissertation describes the psychophysical methods used to evaluate the effects of processing algorithms on human observer performance. An experiment conducted with anthropomorphic phantoms and simulated nodules showed the ASAHE algorithm to be superior for human detection of nodules when compared to a computed radiography system's algorithm that is in current use. An experiment conducted using clinical images demonstrating pneumothoraces (partial lung collapse) indicated no difference in human observer accuracy when ASAHE images were compared to computed radiography images, but greater ease of diagnosis when ASAHE images were used. These results provide evidence to suggest that Artifact-Suppressed Adaptive Histogram Equalization can be effective in increasing diagnostic accuracy and efficiency.
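ASAHE builds on (adaptive) histogram equalization, and the global form the family starts from is short enough to show. This sketch maps grey levels through the normalized cumulative histogram; AHE repeats the same idea in local windows, and ASAHE adds artifact suppression on top. It assumes the image has more than one grey level, and all names are illustrative.

```python
import numpy as np

def histogram_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image via the
    normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[hist.nonzero()[0][0]]      # cdf at the first occupied bin
    scale = 255.0 / (cdf[-1] - cdf_min)
    lut = np.clip(np.round((cdf - cdf_min) * scale), 0, 255).astype(np.uint8)
    return lut[img]

# A ramp confined to [100, 150] is stretched over the full 8-bit range.
low_contrast = np.tile(np.arange(100, 151, dtype=np.uint8), (8, 1))
equalized = histogram_equalize(low_contrast)
```

Applied locally, this mapping over-enhances noise and creates boundary artifacts, which is precisely what motivates the artifact suppression in ASAHE.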
17. Lluis-Gomez, Alexis L. "Algorithms for the enhancement of dynamic range and colour constancy of digital images & video." Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/19580.

Abstract:
One of the main objectives in digital imaging is to mimic the capabilities of the human eye, and perhaps go beyond them in certain aspects. However, the human visual system is so versatile, complex, and only partially understood that no imaging technology to date has been able to reproduce its capabilities accurately. Matching the extraordinary capabilities of the human eye has become a crucial shortcoming of digital imaging, as digital photography, video recording, and computer vision applications continue to demand more realistic and accurate imaging reproduction and analytic capabilities. For decades, researchers have tried to solve the colour constancy problem, as well as to extend the dynamic range of digital imaging devices, by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partially due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and to the complexity of how the human visual system achieves effective colour constancy and dynamic range capabilities. The aim of the research presented in this thesis is to enhance the overall image quality within the image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is done by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer electronics imaging devices. The experiments conducted in this research show that the proposed algorithms supersede state-of-the-art methods in the fields of dynamic range and colour constancy. Moreover, this unique set of image processing algorithms shows that, if used within an image signal processor, they enable digital camera devices to mimic the human visual system's dynamic range and colour constancy capabilities: the ultimate goal of any state-of-the-art technique or commercial imaging device.
18. Kong, Xiang. "Optimization of image quality and minimization of radiation dose for chest computed radiography." Oklahoma City: [s.n.], 2006.

19. Bhattacharya, Abhishek. "Affect-based Modeling and its Application in Multimedia Analysis Problems." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/713.

Abstract:
The multimedia domain is undergoing rapid development, with transitions in audio, image, and video systems such as VoIP, telepresence, live/on-demand Internet streaming, SecondLife, and many more. In this situation, the analysis of multimedia systems from various contexts, covering retrieval, quality evaluation, enhancement, summarization, and re-targeting applications, is becoming critical. Current methods for solving these analysis problems do not consider the existence of humans and their affective characteristics in the design methodology. This contradicts the fact that most digital media is consumed only by human end-users. We believe incorporating human feedback during the design and adaptation stage is key to the process of building multimedia systems. In this regard, we observe that affect is an important indicator of human perception and experience, which can be exploited in various ways to design effective systems that adapt more closely to the human response. We advocate an affect-based modeling approach for solving multimedia analysis problems by exploring new directions. In this dissertation, we select two representative multimedia analysis problems, Quality-of-Experience (QoE) evaluation and image enhancement, and derive solutions based on affect-based modeling techniques. We formulate specific hypotheses for them by correlating system parameters to users' affective responses, and investigate their roles under varying conditions for each scenario. We conducted extensive user studies based on human-to-human interaction through an audio conferencing system. We also conducted user studies based on affective enhancement of images and evaluated the effectiveness of our proposed approaches. Moving forward, multimedia systems will become more media-rich, interactive, and sophisticated, so effective solutions for quality, retrieval, and enhancement will be even more challenging. Our work thus represents an important step towards the application of affect-based modeling techniques in the future generation of multimedia systems.
20. Jacome, Victor Roland. "Evaluation of dose and image quality parameters for cone-beam CT localization protocols in radiation therapy." Oklahoma City: [s.n.], 2009.

21. Yokota, Yusuke. "Evaluation of Image Quality of Pituitary Dynamic Contrast-Enhanced MRI Using Time-Resolved Angiography With Interleaved Stochastic Trajectories (TWIST) and Iterative Reconstruction TWIST (IT-TWIST)." Kyoto University, 2020. http://hdl.handle.net/2433/259011.

22. Mandal, Subhamoy. "Visual Quality Enhancement in Optoacoustic Tomography: Methods in Multiscale Imaging and Image Processing." München: Universitätsbibliothek der TU München, 2018. http://d-nb.info/1165227282/34.

23. Ullman, Gustaf. "Quantifying image quality in diagnostic radiology using simulation of the imaging system and model observers." Doctoral thesis, Linköping: Department of Medicine and Health, Linköping University, 2008. http://www.bibl.liu.se/liupubl/disp/disp2008/med1050s.pdf.

24. Hessel, Charles. "La décomposition automatique d'une image en base et détail : Application au rehaussement de contraste." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLN017/document.

Abstract:
In this CIFRE thesis, a collaboration between the Centre de Mathématiques et de leurs Applications, École Normale Supérieure de Cachan, and the company DxO, we tackle the problem of the additive decomposition of an image into base and detail. Such a decomposition is a fundamental tool in image processing. For applications to professional photo editing in DxO PhotoLab, a core requirement is the absence of artifacts. For instance, in the context of contrast enhancement, in which the base is reduced and the detail increased, minor artifacts become highly visible. The distortions thus introduced are unacceptable from the point of view of a photographer. The objective of this thesis is to single out and study the most suitable filters to perform this task, to improve the best ones, and to define new ones. This requires a rigorous measure of the quality of the base plus detail decomposition. We examine two classic artifacts (halo and staircasing) and discover three more sorts that are equally crucial: the contrast halo, compartmentalization, and the dark halo. This leads us to construct five adapted test patterns to measure these artifacts. We end up ranking the filters based on these measurements and arrive at a clear decision about the best filters. Two filters stand out, including one proposed in this thesis.
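The base plus detail manipulation evaluated in this thesis can be sketched with the crudest possible base extractor, a box blur; real candidates are edge-aware filters (bilateral, guided, etc.), precisely because a linear blur like this one produces the halos the thesis measures. The gains and image sizes below are arbitrary illustration values.

```python
import numpy as np

def box_blur(img, radius=4):
    """Separable box filter with edge padding; a crude base extractor."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    pad = np.pad(img, radius, mode="edge")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def enhance_contrast(img, base_gain=0.7, detail_gain=1.8):
    """Additive decomposition img = base + detail, recombined with the base
    reduced and the detail amplified, as in the contrast-enhancement setting."""
    base = box_blur(img)
    detail = img - base
    return np.clip(base_gain * base + detail_gain * detail, 0.0, 1.0)

rng = np.random.default_rng(1)
scene = np.clip(
    np.linspace(0.3, 0.7, 64).reshape(1, -1).repeat(64, axis=0)
    + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)
enhanced = enhance_contrast(scene)
```

Because the detail layer is amplified, any error the base extractor makes near edges is amplified with it, which is why the thesis' artifact measurements target the filter that produces the base.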
25

Al, Chami Zahi. "Estimation de la qualité des données multimedia en temps réel." Thesis, Pau, 2021. http://www.theses.fr/2021PAUU3066.

Abstract:
Au cours de la dernière décennie, les fournisseurs de données ont généré et diffusé une grande quantité de données, notamment des images, des vidéos, de l'audio, etc. Dans cette thèse, nous nous concentrerons sur le traitement des images puisqu'elles sont les plus communément partagées entre les utilisateurs sur l'inter-réseau mondial. En particulier, le traitement des images contenant des visages a reçu une grande attention en raison de ses nombreuses applications, telles que les applications de divertissement et de médias sociaux. Cependant, plusieurs défis pourraient survenir au cours de la phase de traitement et de transmission : d'une part, le nombre énorme d'images partagées et produites à un rythme rapide nécessite un temps de traitement et de livraison considérable; d’autre part, les images sont soumises à un très grand nombre de distorsions lors du traitement, de la transmission ou de la combinaison de nombreux facteurs qui pourraient endommager le contenu des images. Deux contributions principales sont développées. Tout d'abord, nous présentons un framework d'évaluation de la qualité d'image ayant une référence complète en temps réel, capable de : 1) préserver le contenu des images en s'assurant que certaines informations visuelles utiles peuvent toujours être extraites de l'image résultante, et 2) fournir un moyen de traiter les images en temps réel afin de faire face à l'énorme quantité d'images reçues à un rythme rapide. Le framework décrit ici est limité au traitement des images qui ont accès à leur image de référence (connu sous le nom référence complète). Dans notre second chapitre, nous présentons un framework d'évaluation de la qualité d'image sans référence en temps réel. 
Il a les capacités suivantes : a) évaluer l'image déformée sans avoir recours à son image originale, b) préserver les informations visuelles les plus utiles dans les images avant de les publier, et c) traiter les images en temps réel, bien que les modèles d'évaluation de la qualité des images sans référence sont considérés très complexes. Notre framework offre plusieurs avantages par rapport aux approches existantes, en particulier : i. il localise la distorsion dans une image afin d'évaluer directement les parties déformées au lieu de traiter l'image entière, ii. il a un compromis acceptable entre la précision de la prédiction de qualité et le temps d’exécution, et iii. il pourrait être utilisé dans plusieurs applications, en particulier celles qui fonctionnent en temps réel. L'architecture de chaque framework est présentée dans les chapitres tout en détaillant les modules et composants du framework. Ensuite, un certain nombre de simulations sont faites pour montrer l'efficacité de nos approches pour résoudre nos défis par rapport aux approches existantes
Over the past decade, data providers have been generating and streaming large amounts of data, including images, videos, audio, etc. In this thesis, we focus on processing images, since they are the most commonly shared media between users on the global inter-network. In particular, processing images containing faces has received great attention due to its numerous applications, such as entertainment and social media apps. However, several challenges can arise during the processing and transmission phases: firstly, the enormous number of images shared and produced at a rapid pace requires a significant amount of time to be processed and delivered; secondly, images are subject to a wide range of distortions during processing or transmission, or to a combination of many factors that can damage the images' content. Two main contributions are developed. First, we introduce a Full-Reference Image Quality Assessment Framework in Real-Time, capable of: 1) preserving the images' content by ensuring that some useful visual information can still be extracted from the output, and 2) providing a way to process the images in real time in order to cope with the huge amount of images received at a rapid pace. The framework described here is limited to processing images that have access to their reference version (a.k.a. Full-Reference). Secondly, we present a No-Reference Image Quality Assessment Framework in Real-Time. It has the following abilities: a) assessing a distorted image without access to its distortion-free version, b) preserving the most useful visual information in the images before publishing, and c) processing the images in real time, even though No-Reference image quality assessment models are considered very complex. Our framework offers several advantages over existing approaches, in particular: i. it locates the distortion in an image in order to assess the distorted parts directly instead of processing the whole image, ii. it offers an acceptable trade-off between quality prediction accuracy and execution latency, and iii. it can be used in several applications, especially those that operate in real time. The architecture of each framework is presented in the chapters, detailing its modules and components. A number of simulations then show the effectiveness of our approaches in solving these challenges relative to existing approaches.
APA, Harvard, Vancouver, ISO, and other styles
26

Boudjenouia, Fouad. "Restauration d’images avec critères orientés qualité." Thesis, Orléans, 2017. http://www.theses.fr/2017ORLE2031/document.

Full text
Abstract:
This thesis concerns the blind restoration of images (formulated as an ill-posed and ill-conditioned inverse problem), considering a SIMO system. A blind system identification technique in which the order of the channel is unknown (overestimated) is introduced. Firstly, a simplified, reduced-cost version (SCR) of the cross-relation (CR) method is introduced. Secondly, a robust version, R-SCR, based on the search for a sparse solution minimizing the CR cost function, is proposed. Image restoration is then achieved by a new approach (inspired by 1D signal decoding techniques and extended here to the case of 2D images) based on an efficient tree search (the Stack algorithm). Several improvements to the Stack method are introduced in order to reduce its complexity and improve the restoration quality when the images are noisy, using a regularization technique and an all-at-once optimization approach based on gradient descent, which refines the estimated image and improves the algorithm's convergence towards the optimal solution. Then, image quality measures are used as cost functions (integrated into the global criterion) in order to study their potential for improving restoration performance. In the context where the image of interest is corrupted by other interfering images, its restoration requires blind source separation techniques; in this sense, a comparative study of some separation techniques based on second-order decorrelation and sparsity is performed.
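For reference, the cross-relation identity on which such CR-based blind identification rests can be written as follows (standard two-channel SIMO form; the notation $h_i$ for the channels, $x_i$ for their outputs and $s$ for the source is assumed here, not taken from the thesis):

```latex
x_i = h_i * s
\;\Longrightarrow\;
h_j * x_i = h_j * h_i * s = h_i * x_j ,
\qquad
J_{\mathrm{CR}}(\hat h) = \sum_{i<j} \bigl\| \hat h_j * x_i - \hat h_i * x_j \bigr\|^2 .
```

In the noiseless case the true channels annihilate the criterion, so minimizing $J_{\mathrm{CR}}$ over suitably normalized $\hat h$ recovers them up to a common scalar factor.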
APA, Harvard, Vancouver, ISO, and other styles
27

Akit, Mert. "Pedestrian Experiences In Bahcelievler 7th Street: Setting The Design Criteria For The Enhancement Of Urban Public Realm." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12604740/index.pdf.

Full text
Abstract:
This thesis aims to set out an urban design framework, based on pedestrian experiences and pedestrian spaces, for designing or enhancing streets as pedestrian-friendly urban public places. It can also be considered a model of approach, which assumes a normative manner: pedestrian urban places are surveyed, and analyses are then drawn that will lead to design. Within that framework, the study first summarizes the theoretical concepts of urbanity, urban quality and pedestrian experiences that are necessary for examining these places. Then, it sets out how an urban place is examined with respect to three main headings, which constitute the components of urban places: urban form, urban image and urban activity. The study area, 7th Street in Bahçelievler, has become a secondary centre whose vitality and diversity of activities attract many people from other districts besides local residents. However, although initially planned within a housing cooperative, the neighbourhood has lost much of its cultural and urban accumulation due to global dynamics based on consumption. What is more, 7th Street is quite inadequate in providing easy circulation for both pedestrians and vehicles, as well as in providing a quality urban place in its every element. Hence, the street has been examined within the above framework, first with respect to the above-mentioned components, and then with information from maps, photographs, personal observations and questionnaires, gathered in order to identify the problems and the characteristics of the users as well as their perceptive qualities. The conclusions, together with the strengths and weaknesses derived from these surveys, have been used to set specific design guidelines for the area.
APA, Harvard, Vancouver, ISO, and other styles
28

Trčka, Jan. "Zlepšování kvality digitalizovaných textových dokumentů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417278.

Full text
Abstract:
The aim of this work is to increase the accuracy of transcription of text documents. It mainly focuses on texts printed on degraded materials such as newspapers or old books. To solve this problem, current methods and the problems associated with text recognition are analyzed. Based on the acquired knowledge, a method based on a GAN architecture is chosen and implemented. Experiments are performed on these networks in order to find an appropriate size and suitable learning parameters. Subsequently, testing is performed to compare different learning methods and their results. Both training and testing are performed on an artificial data set. Using the implemented trained networks increases transcription accuracy from 65.61 % for raw damaged text lines to 93.23 % for lines processed by the network.
APA, Harvard, Vancouver, ISO, and other styles
29

Jonsson, Christian. "Detection of annual rings in wood." Thesis, Linköping University, Department of Science and Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15804.

Full text
Abstract:

This report describes an annual line detection algorithm for the WoodEye quality control system. The goal of the algorithm is to find the positions of annual lines on the four surfaces of a board; the purpose is to use this result to find the inner annual ring structure of the board. The work was done using image processing techniques to analyze images collected with WoodEye. The report gives the reader an insight into the requirements of quality control systems in the woodworking industry and the benefits of automated quality control versus manual inspection. The appearance and formation of annual lines are explained in detail to provide insight into how the problem should be approached. A comparison between annual rings and fingerprints is made to see whether ideas from this area of pattern recognition can be adapted to annual line detection. This comparison, together with a study of existing methods, led to the implementation of a fingerprint enhancement method, which became a central part of the annual line detection algorithm. The annual line detection algorithm consists of two main steps: enhancing the edges of the annual rings, and tracking along the edges to form lines. Different solutions for components of the algorithm were tested to compare performance. The final algorithm was tested with different input images to determine whether the annual line detection algorithm works best with images from a grayscale or an RGB camera.
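Fingerprint-style enhancement of the kind mentioned above typically starts by estimating a local ridge orientation from averaged gradient products. A minimal sketch (illustrative only; the function name and the flat-list input format are assumptions, not the report's implementation):

```python
import math

def local_orientation(gx, gy):
    """Dominant orientation in a window from averaged gradient products
    (the doubled-angle trick), the standard first step of
    fingerprint-style ridge/line enhancement."""
    gxx = sum(x * x for x in gx)            # sum of squared x-gradients
    gyy = sum(y * y for y in gy)            # sum of squared y-gradients
    gxy = sum(x * y for x, y in zip(gx, gy))
    # atan2 of the doubled angle avoids the 180-degree ambiguity of ridges.
    return 0.5 * math.atan2(2.0 * gxy, gxx - gyy)
```

The estimated angle can then steer an oriented smoothing/sharpening filter along the annual lines.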

APA, Harvard, Vancouver, ISO, and other styles
30

Vo, Dung Trung. "Spatio-temporal filtering for images and videos applications on quality enhancement, coding and data pruning /." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3355795.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed June 25, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 113-118).
APA, Harvard, Vancouver, ISO, and other styles
31

Ko, Chia-Chieh, and 柯佳伽. "Color Image Quality Enhancement Using Retinex Algorithm." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/565fvu.

Full text
Abstract:
Master's thesis
Shih Hsin University
Graduate Institute of Information Management (incl. in-service master's program)
Academic year 96
This study targets visual contrast enhancement of color images. Among enhancement approaches, Retinex is the most significant for the human visual system, as it takes into account both the rendering of the color components and the enhancement of image contrast. Because the classical Retinex does not make use of the image's histogram distribution, a Quartile Sigmoid Function (QSF) derived from the histogram distribution is proposed. By combining the QSF mapping with Retinex theory, the results perform well in both rendering and contrast enhancement. Sets of images are used to evaluate the proposed enhancement model. With histogram information employed per band, the integration of MSR and Retinex with the Quartile Sigmoid Function (QSF) demonstrates the effectiveness of the algorithm in terms of image quality.
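The single-scale Retinex at the heart of such methods can be sketched as follows (a pure-Python toy: a box blur stands in for the Gaussian surround, and the thesis's QSF mapping is not reproduced):

```python
import math

def box_blur(img, radius):
    """Crude box blur standing in for the Gaussian surround of SSR."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

def single_scale_retinex(img, radius=2):
    """R(x,y) = log(I) - log(surround): removes smooth illumination,
    keeping local reflectance detail."""
    blurred = box_blur(img, radius)
    return [[math.log(1.0 + img[y][x]) - math.log(1.0 + blurred[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]
```

A flat region yields a zero Retinex response, while pixels brighter than their surround come out positive; multi-scale Retinex (MSR) averages this over several surround radii.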
APA, Harvard, Vancouver, ISO, and other styles
32

Chang, Ching-Yun, and 張慶雲. "Enhancement of Motion Image Quality in LCDs." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/74882538475102651955.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
Academic year 94
Recently, TFT-LCDs have been widely adopted thanks to advantages such as thinness and low power consumption, and they are now required to display not only conventional still pictures but also high-quality motion pictures. It is well known that motion images are blurred when displayed on TFT-LCDs; this motion blur results from the slow response time and the inherent hold-type driving method of LCDs. One technique to overcome the problem is to insert black frames between the image frames. To obtain improved motion image quality in LCDs, image enhancement technology must also be applied: dynamic gamma control can greatly improve perceptual image quality such as contrast ratio and brightness. In this thesis, we propose a novel method for dynamic gamma correction that combines RGB histogram analysis, a dynamic gamma control scheme, and black frame insertion. A suitable gamma value is found independently for the R, G and B channels by analyzing the gray-level histogram of each image; based on the histogram, the R, G, B look-up tables (LUTs) update their contents so that gamma is corrected dynamically for every image. In addition, black image insertion without an extra frame memory is proposed to enhance the quality of moving pictures: whereas the conventional approach inserts a black image after every displayed image, our TFT-LCD system shows the black image before each normal image. Combining these techniques, we analyze the R, G, B gray-level histograms, perform the optimal gamma correction accordingly, and insert the black image while the LCD finishes the image data transmission. The proposed TFT-LCD system not only reduces hardware cost but also achieves high-quality still and moving pictures.
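A histogram-driven per-channel gamma LUT of the kind described can be sketched as follows (illustrative only: the threshold-based gamma rule and the parameter values are hypothetical, not the thesis's scheme):

```python
def gamma_lut(pixels, dark_gamma=0.8, bright_gamma=1.2):
    """Pick a gamma for one color channel from its histogram mean and
    build a 256-entry look-up table mapping input to output gray level."""
    mean = sum(pixels) / len(pixels)
    # Hypothetical rule: brighten predominantly dark channels (gamma < 1),
    # compress predominantly bright ones (gamma > 1).
    gamma = dark_gamma if mean < 128 else bright_gamma
    return [round(255 * ((v / 255) ** gamma)) for v in range(256)]
```

One such LUT would be built per frame and per channel (R, G, B), then applied before the black frame is inserted.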
APA, Harvard, Vancouver, ISO, and other styles
33

Li, Yin-Chieh, and 李胤頡. "Image Quality Enhancement Technique Using Information Fusion." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/50432613414147934372.

Full text
Abstract:
Master's thesis
Da-Yeh University
Master's Program, Department of Computer Science and Information Engineering
Academic year 102
In this thesis, we propose a quality enhancement scheme for images of poor quality. Traditional algorithms may enhance image contrast, but possible over-enhancement can lead to poor overall visual quality. In fact, it is difficult to improve the visual quality of both under-exposed and over-exposed images with one single method. To deal with under-exposure and over-exposure simultaneously, we present a quality enhancement scheme based on information fusion. For over-exposed images, an enhancement algorithm based on dehazing technology is proposed; for under-exposed images, an enhancement algorithm based on exposure correction is developed. An information fusion algorithm then combines the two resulting images to obtain the final result. Experimental results demonstrate that the proposed scheme enhances image details while keeping the overall visual quality good, and provides better results than existing methods.
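A common way to fuse two enhancement results is a per-pixel weighted average, where each result is trusted where it is well exposed. A minimal sketch under that assumption (the well-exposedness weight is a standard exposure-fusion heuristic, not necessarily the thesis's fusion rule; intensities in [0, 1]):

```python
import math

def well_exposedness(v, sigma=0.2):
    """Weight pixels near mid-range (0.5) higher; clipped shadows and
    highlights get small weights."""
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(img_a, img_b):
    """Per-pixel weighted average of two enhancement results
    (flat lists of intensities in [0, 1])."""
    out = []
    for a, b in zip(img_a, img_b):
        wa, wb = well_exposedness(a), well_exposedness(b)
        out.append((wa * a + wb * b) / (wa + wb))
    return out
```

The fused value always lies between the two inputs and leans toward whichever source is better exposed at that pixel.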
APA, Harvard, Vancouver, ISO, and other styles
34

謝萬法. "Image Quality Enhancement Using Gaussian Probability Distribution." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/58980307632617928168.

Full text
Abstract:
Master's thesis
Minghsin University of Science and Technology
Graduate Institute of Electrical Engineering
Academic year 96
With the progress of technology, digital technology has become the trend, and digital images are indispensable information in our daily life, which is why image processing is so important. Digital cameras have become a frequently used tool; however, satisfactory images cannot always be obtained, especially in poorly lit areas, so image improvement is very important. In this paper, we propose a simple and practical method to solve over-exposure, under-exposure and backlighting problems for better image quality. We combine the histogram information and a Gaussian probability distribution to find the contrast curve. Using this contrast curve, we modify the gray values of the image to obtain a clear image. The experimental results show that this method reaches satisfactory results.
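One plausible reading of "histogram plus Gaussian probability distribution" is to use the Gaussian CDF fitted to the image's gray-level statistics as the contrast curve: levels near the histogram mean get the steepest slope, i.e. the most contrast stretch. A hedged sketch of that idea (an interpretation, not the thesis's exact construction):

```python
import math

def gaussian_contrast_curve(pixels):
    """Build a 256-entry tone curve from the Gaussian CDF fitted to the
    gray-level histogram (mean/std of the pixel values)."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var) or 1.0            # guard against flat images

    def curve(v):
        z = (v - mean) / (std * math.sqrt(2.0))
        return 0.5 * (1.0 + math.erf(z)) * 255.0   # Gaussian CDF scaled to [0,255]

    return [round(curve(v)) for v in range(256)]
```

The resulting LUT is monotone, pins the extremes near 0 and 255, and spends most of the output range around the dominant gray levels.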
APA, Harvard, Vancouver, ISO, and other styles
35

戴金發. "Image Quality Enhancement Using Improved Gaussian Probability Distribution." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/19847679552838109024.

Full text
Abstract:
Master's thesis
Minghsin University of Science and Technology
In-service Master's Program, Graduate Institute of Electrical Engineering
Academic year 100
Digital technology has become the trend, and digital images are indispensable information in our daily life, which is why image processing is so important. Digital cameras have become a frequently used tool; however, satisfactory images cannot always be obtained, especially in poorly lit areas, so image improvement is very important. In this paper, we propose a simple and practical method to solve over-exposure, under-exposure and backlighting problems for better image quality. We identify a defect in the earlier approach that combines histogram information with a Gaussian probability distribution to derive the contrast curve, and revise the Gaussian probability distribution accordingly rather than relying on a simple dichotomy. Using this new contrast curve, we modify the gray values of the image to obtain a clearer image. The experimental results show that image quality enhancement using the improved Gaussian probability distribution reaches satisfactory results.
APA, Harvard, Vancouver, ISO, and other styles
36

林岳融. "Visual Secret Sharing with Reconstructed Image Quality Enhancement." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/98350283370758851224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Hsieh, Wen-ta, and 謝文達. "Practical Evaluation Model for Image Contrast Enhancement Quality." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/51257817660454076625.

Full text
Abstract:
Master's thesis
Feng Chia University
In-service Master's Program of Information and Electrical Engineering
Academic year 99
The principle of image enhancement is to increase the contrast between adjacent pixels, enabling viewers to perceive greater detail in textures and edges. Many contrast enhancement methods have been proposed to improve image quality, most of them based on histogram equalization (HE); however, their actual results remain uncertain due to the lack of an objective evaluation procedure with which to measure them. This paper proposes a quantitative analysis method for the assessment of image quality, named the practical image contrast enhancement quality index (PIQI), based on several subjective and objective evaluation metrics. The study uses the PIQI to evaluate various contrast enhancement methods, outlines the effects, and discusses the implications.
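One objective ingredient that a composite index like the PIQI could combine with others is the entropy of the gray-level histogram, which rises as an enhancement spreads pixel values over more levels. A hedged sketch of that single ingredient (a stand-in, not the PIQI formula itself):

```python
import math

def shannon_entropy(pixels):
    """Shannon entropy (bits) of the gray-level histogram: 0 for a
    constant image, higher when gray levels are used more evenly."""
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist.values())
```

Comparing this value before and after enhancement gives one crude, fully objective signal; a practical index would weight several such metrics against subjective scores.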
APA, Harvard, Vancouver, ISO, and other styles
38

Hsu, Chih-Chung, and 許志仲. "Quality Enhancement and Assessment for Image and Video Resizing." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/53926838267243786242.

Full text
Abstract:
Doctoral dissertation
National Tsing Hua University
Department of Electrical Engineering
Academic year 102
This dissertation studies quality enhancement and assessment for image/video resizing. To achieve high-quality reconstruction of high-resolution (HR) details for a low-resolution (LR) image/video, super-resolution (SR) has proven to be an efficient approach. In particular, learning-based SR schemes usually show superior performance compared to conventional multi-frame SR approaches. In Part I, we address three issues in learning-based image and video SR. The first task for real-world SR applications is to achieve simultaneous SR and deblocking for a highly compressed image. In our method, we propose to learn image sparse representations for modeling the relationship between low- and high-resolution image patches, in terms of dictionaries learned for image patches with and without blocking artifacts, respectively. As a result, image SR and deblocking can be achieved simultaneously via sparse representation and MCA (morphological component analysis)-based dictionary classification; in this way, the learned dictionary can be successfully split into two sub-dictionaries, with and without blocking artifacts. Second, we propose a two-step face hallucination. Since the coefficients for representing an LR face image with the LR dictionary are unreliable due to insufficient observed information, we propose a maximum-a-posteriori (MAP) estimator to re-estimate the coefficients, which significantly improves the visual quality of the reconstructed face. Besides, the facial parts (i.e., eyes, nose and mouth) are further refined using the proposed basis selection method for the overcomplete nonnegative matrix factorization (ONMF) dictionary, to eliminate unnecessary information in the basis.
Third, we propose a texture-synthesis-based video SR method, in which a novel dynamic texture synthesis (DTS) scheme renders the reconstructed HR details in a temporally coherent way, effectively addressing the temporal incoherence problem caused by traditional texture-synthesis-based image SR methods. To reduce computational complexity, our method performs texture-synthesis-based SR only on a selected set of key-frames, while the HR details of the remaining non-key-frames are predicted using bi-directional overlapped block motion compensation. After all frames are upscaled, the proposed DTS-SR is applied to maintain temporal coherence in the HR video. The second part of this dissertation addresses quality assessment for image/video resizing techniques. Image/video retargeting algorithms have been comprehensively studied in the past decade; however, there is no accurate objective quality assessment algorithm for image/video retargeting. We therefore propose a novel full-reference objective metric for automatically assessing the visual quality of a retargeted image based on perceptual geometric distortion and information loss. The proposed metric measures the geometric distortion of retargeted images based on the local variance of SIFT flow vector fields, and a visual saliency map is further derived to characterize human perception of the geometric distortion. Besides, the information loss in a retargeted image, estimated based on the saliency map, is also taken into account in the proposed metric. Furthermore, we extend the SIFT flow estimation to the temporal domain for video retargeting quality assessment, in which the local temporal distortion is measured by analyzing the local variance of the SIFT flow vector fields. Experimental results demonstrate that the proposed metrics for image and video retargeting significantly outperform existing state-of-the-art metrics.
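The "local variance of the flow field" idea behind such a metric can be illustrated in one dimension: a smooth, consistent flow (uniform shift of content) scores near zero, while erratic flow (geometry being warped unevenly) scores high. A minimal sketch (a 1-D toy, not the dissertation's 2-D SIFT-flow metric):

```python
def flow_variance(flow, win=1):
    """Mean local variance over sliding windows of a 1-D row of flow
    values: low for consistent flow, high for erratic flow."""
    scores = []
    for i in range(len(flow)):
        w = flow[max(0, i - win):i + win + 1]
        m = sum(w) / len(w)
        scores.append(sum((v - m) ** 2 for v in w) / len(w))
    return sum(scores) / len(scores)
```

In the full metric, such per-location scores would additionally be weighted by a saliency map before pooling.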
APA, Harvard, Vancouver, ISO, and other styles
39

Lee, Chien-te, and 李建德. "CSTN LCD Frame Rate Controller For Image Quality Enhancement." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/08194971132122979072.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
Academic year 98
This thesis focuses on the FRC (Frame Rate Control) method for LCD panels, where a new algorithm is proposed to alleviate the flicker problem. The proposed algorithm can be implemented with simple digital circuits and low power consumption, and it can be applied to both mono- and color-STN panels. It can generate 32768 colors, without any flicker or motion-line problems, on a panel that originally allows only 8 colors. The major contribution of this thesis is to assign a location number to each pixel of the panel. Notably, the numbers for the pixels cannot form a regular pattern; otherwise, the flicker problem is resolved at the expense of a serious motion-line issue, and the consequence is poor display quality. To resolve both the flicker and motion-line problems, we employ a PRSG (Pseudo Random Sequence Generator) that generates a non-regular number sequence for all the pixels, so that the ON pixels are dispersed across the panel in all frames.
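A typical hardware-friendly PRSG is a linear feedback shift register. A hedged sketch of one such generator, a 7-bit Fibonacci LFSR whose feedback taps realize the primitive polynomial x^7 + x + 1 (an illustrative choice; the thesis's actual PRSG and pixel-numbering scheme are not specified here):

```python
def lfsr_bits(seed=0x5A, taps=(6, 5), n=127):
    """7-bit Fibonacci LFSR: a repeatable but irregular bit stream that
    can scatter the ON sub-frames of dithered pixels across the panel.
    Feedback = bit6 XOR bit5 of the old state, shifted in at bit 0."""
    state = seed & 0x7F                      # any nonzero 7-bit seed
    bits = []
    for _ in range(n):
        bits.append((state >> 6) & 1)        # output the MSB
        fb = ((state >> taps[0]) ^ (state >> taps[1])) & 1
        state = ((state << 1) | fb) & 0x7F   # shift left, feed back
    return bits
```

Because the polynomial is primitive, the sequence is an m-sequence: it repeats only after 2^7 - 1 = 127 steps and is balanced (64 ones per period), which is exactly the "non-regular but deterministic" behavior FRC dithering needs.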
APA, Harvard, Vancouver, ISO, and other styles
40

Hsieh, Shau-Hung, and 謝韶紘. "A Study of Image Quality Enhancement and Color Correcting Compensation." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/00775357707626243042.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Chiu, Chui-Wen, and 邱垂汶. "High Quality Spatial Resolution Enhancement using HHEF for Image and Video." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/48719532103524324149.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Master's Program, Department of Electrical Engineering
Academic year 92
Images with higher resolution, able to display more details of the scene, are very desirable in many serious applications such as medical imaging, law enforcement, satellite imaging and space probing. The image processing techniques that exceed the hardware limitation to increase the spatial resolution of images or videos are referred to as super-resolution (or image enlargement, or resolution enhancement) techniques. Most super-resolution research focuses on eliminating the blur around edges and texture areas after interpolation, so that the image looks sharper. In this thesis, we propose super-resolution algorithms for both still images and videos. For image super-resolution, we analyze and model the relationship between a high-resolution image (obtained with a higher sensor density) and the corresponding degraded low-resolution image (obtained with a lower sensor density). Based on the degradation model, we propose a high-frequency emphasis filter (HHEF) to restore the suppressed high-frequency components in the image. We proceed to derive the Intensity Correction (IC) relationship of the degradation process as a constraint on the HHEF gain, and we analyze and evaluate the performance and limitations of the proposed approach. Experiments on real images show that both edges and texture areas are enhanced significantly (perceptually and in PSNR) by the proposed HHEF, while most other super-resolution methods enhance only the edges. For video super-resolution, we adopt a multi-frame approach, where previous and future low-resolution frames are used to estimate the added pixels in the current high-resolution frame. We take a divide-and-conquer approach to analyze the effectiveness and limitations of each process in the multi-frame video super-resolution algorithm, and make improvements accordingly to the search strategy and match criterion. The HHEF and IC developed for still-image super-resolution are applied to further improve the image quality, both perceptually and in PSNR.
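The core of a high-frequency emphasis filter can be illustrated as an unsharp-mask-style boost of the detail left after removing a local average. A 1-D toy sketch, akin in spirit to the HHEF described above (the specific kernel, gain and the IC gain constraint are not reproduced):

```python
def high_freq_emphasis(signal, gain=1.5):
    """Boost high frequencies: out = s + gain * (s - local_mean(s)).
    A 3-tap moving average stands in for the low-pass component."""
    out = []
    for i in range(len(signal)):
        lo = signal[max(0, i - 1):i + 2]     # clipped 3-sample window
        local_mean = sum(lo) / len(lo)
        out.append(signal[i] + gain * (signal[i] - local_mean))
    return out
```

Flat regions pass through unchanged, while edges gain overshoot on both sides, which is the sharpening effect; the thesis's IC relationship constrains the gain so that overall intensities stay consistent with the degradation model.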
APA, Harvard, Vancouver, ISO, and other styles
42

Joshi, A., Amber J. Gislason-Lee, U. M. Sivananthan, and A. G. Davies. "Can image enhancement allow radiation dose to be reduced whilst maintaining the perceived diagnostic image quality required for coronary angiography?" 2017. http://hdl.handle.net/10454/16959.

Full text
Abstract:
Yes
Digital image processing used in modern cardiac interventional x-ray systems may have the potential to enhance image quality such that it allows for lower radiation doses. The aim of this research was to quantify the reduction in radiation dose facilitated by image processing alone for percutaneous coronary intervention (PCI) patient angiograms, without reducing the perceived image quality required to confidently make a diagnosis. Incremental amounts of image noise were added to five PCI patient angiograms, simulating the angiogram having been acquired at corresponding lower dose levels (by 10-89% dose reduction). Sixteen observers with relevant background and experience scored the image quality of these angiograms in three states - with no image processing and with two different modern image processing algorithms applied; these algorithms are used on state-of-the-art and previous generation cardiac interventional x-ray systems. Ordinal regression allowing for random effects and the delta method were used to quantify the dose reduction allowed for by the processing algorithms, for equivalent image quality scores. The dose reductions [with 95% confidence interval] from the state-of-the-art and previous generation image processing relative to no processing were 24.9% [18.8- 31.0%] and 15.6% [9.4-21.9%] respectively. The dose reduction enabled by the state-of-the-art image processing relative to previous generation processing was 10.3% [4.4-16.2%]. This demonstrates that statistically significant dose reduction can be facilitated with no loss in perceived image quality using modern image enhancement; the most recent processing algorithm was more effective in preserving image quality at lower doses.
The study was funded by Philips Healthcare (the Netherlands).
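The noise-addition step used to simulate lower-dose acquisitions can be sketched under a simplified additive model: quantum noise variance scales inversely with dose, so simulating a fraction f of the original dose means injecting extra Gaussian noise of variance sigma_full^2 * (1/f - 1). This is an illustrative model only, not the study's calibrated procedure:

```python
import random

def simulate_dose_reduction(pixels, dose_fraction, sigma_full=1.0):
    """Add zero-mean Gaussian noise so the total noise level matches an
    acquisition at dose_fraction of the original dose (simplified
    additive model; sigma_full is the full-dose noise level)."""
    extra_sigma = sigma_full * ((1.0 / dose_fraction) - 1.0) ** 0.5
    return [p + random.gauss(0.0, extra_sigma) for p in pixels]
```

At dose_fraction = 1.0 the image is unchanged; at 0.5 the injected variance equals the full-dose noise variance, doubling the total.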
APA, Harvard, Vancouver, ISO, and other styles
43

Lin, Tsung-Yu, and 林宗瑜. "Image Quality and Light Efficiency Enhancement of Organic Light-Emitting Devices by Using Microstructure Attachment." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/82521291541002203369.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Photonics and Optoelectronics
Academic year 95
In this thesis, we set up a simulation model of organic light-emitting devices (OLEDs) with microstructure attachment, and carry out the simulation and ray-tracing analysis with the optical software LightTools™. To improve the out-coupling efficiency, we identify total internal reflection (TIR) at the air-substrate interface as a key loss mechanism to address, and apply a micro-pyramid array to the OLED to reduce it. Rather than considering only efficiency, as for lighting purposes, both the light extraction efficiency and the image quality must be taken into account for display applications. In the simulation, we vary parameters such as base width, base angle, fill factor, and the arrangement of perfect and truncated micro-pyramids, and investigate their effects on the angular intensity distribution, total power enhancement, and image blur of the OLED. Finally, we propose several new microstructures and arrangements that control the blur effect while increasing the light extraction efficiency significantly.
APA, Harvard, Vancouver, ISO, and other styles
44

Lin, Tsung-Yu. "Image Quality and Light Efficiency Enhancement of Organic Light-Emitting Devices by Using Microstructure Attachment." 2007. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2407200713442500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Leung, Erich Tak-Him. "Adaptive enhancement of cardiac magnetic resonance (MR) images." 2005.

Find full text
Abstract:
Thesis (M.Sc.)--York University, 2005. Graduate Programme in Computer Science.
Typescript. Includes bibliographical references (leaves 186-196). Also available on the Internet. MODE OF ACCESS via web browser by entering the following URL: http://gateway.proquest.com/openurl?url%5Fver=Z39.88-2004&res%5Fdat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR11837
APA, Harvard, Vancouver, ISO, and other styles