
Journal articles on the topic 'Fundus Image'

Consult the top 50 journal articles for your research on the topic 'Fundus Image.'

1

Wintergerst, Maximilian W. M., Linus G. Jansen, Frank G. Holz, and Robert P. Finger. "A Novel Device for Smartphone-Based Fundus Imaging and Documentation in Clinical Practice: Comparative Image Analysis Study." JMIR mHealth and uHealth 8, no. 7 (July 29, 2020): e17480. http://dx.doi.org/10.2196/17480.

Abstract:
Background Smartphone-based fundus imaging allows for mobile and inexpensive fundus examination with the potential to revolutionize eye care, particularly in lower-resource settings. However, most smartphone-based fundus imaging adapters do not achieve image quality comparable to that of conventional fundus imaging. Objective The purpose of this study was to evaluate a novel smartphone-based fundus imaging device for documentation of a variety of retinal/vitreous pathologies in a patient sample with wide refraction and age ranges. Methods Participants’ eyes were dilated and imaged with the iC2 funduscope (HEINE Optotechnik) using an Apple iPhone 6 in single-image acquisition (image resolution of 2448 × 3264 pixels) or video mode (1248 × 1664 pixels), and a subgroup of participants was also examined by conventional fundus imaging (Zeiss VISUCAM 500). Smartphone-based image quality was compared to conventional fundus imaging in terms of sharpness (focus), reflex artifacts, contrast, and illumination on semiquantitative scales. Results A total of 47 eyes from 32 participants (age: mean 62.3, SD 19.8 years; range 7-93; spherical equivalent: mean –0.78, SD 3.21 D; range: –7.88 to +7.0 D) were included in the study. Mean (SD) visual acuity (logMAR) was 0.48 (0.66; range 0-2.3); 30% (14/47) of the eyes were pseudophakic. Image quality was sufficient in all eyes irrespective of refraction. Images acquired with conventional fundus imaging were sharper and had fewer reflex artifacts, while there was no significant difference in contrast and illumination (P<.001, P=.03, and P=.10, respectively). When comparing image quality at the posterior pole, the mid periphery, and the far periphery, glare increased as images were acquired from a more peripheral part of the retina. Reflex artifacts were more frequent in pseudophakic eyes. Image acquisition was also possible in children. Documentation of deep optic nerve cups in video mode conveyed a mock 3D impression.
Conclusions Image quality of conventional fundus imaging was superior to that of smartphone-based fundus imaging, although this novel smartphone-based fundus imaging device achieved image quality high enough to document various fundus pathologies including only subtle findings. High-quality smartphone-based fundus imaging might represent a mobile alternative for fundus documentation in clinical practice.
2

Dai, Peishan, Hanwei Sheng, Jianmei Zhang, Ling Li, Jing Wu, and Min Fan. "Retinal Fundus Image Enhancement Using the Normalized Convolution and Noise Removing." International Journal of Biomedical Imaging 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/5075612.

Abstract:
Retinal fundus images play an important role in the diagnosis of retina-related diseases. Detailed information in the retinal fundus image, such as small vessels, microaneurysms, and exudates, may be in low contrast, and retinal image enhancement usually helps in analyzing diseases related to the retinal fundus. Current image enhancement methods may lead to artificial boundaries, abrupt changes in color levels, and the loss of image detail. In order to avoid these side effects, a new retinal fundus image enhancement method is proposed. First, the original retinal fundus image was processed by the normalized convolution algorithm with a domain transform to obtain an image with the basic information of the background. Then, the image with the basic information of the background was fused with the original retinal fundus image to obtain an enhanced fundus image. Lastly, the fused image was denoised by a two-stage denoising method including fourth-order PDEs and the relaxed median filter. The retinal image databases, including the DRIVE database, the STARE database, and the DIARETDB1 database, were used to evaluate the image enhancement effects. The results show that the method can enhance the retinal fundus image prominently. Moreover, unlike some other fundus image enhancement methods, the proposed method can directly enhance color images.
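The background-estimation and fusion steps described above can be sketched in a deliberately simplified form: a plain box filter stands in for the normalized convolution with domain transform, the two-stage denoising is omitted, and the names `box_blur`, `enhance`, `radius`, and `gain` are illustrative choices, not from the paper.

```python
def box_blur(img, radius):
    """Estimate the smooth background of a grayscale image (2D list)
    with a simple box filter; borders are handled by clamping."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    total += img[yy][xx]
                    count += 1
            out[y][x] = total / count
    return out

def enhance(img, radius=2, gain=2.0):
    """Fuse the original with its background estimate: amplify the
    detail (original minus background) and add it back."""
    bg = box_blur(img, radius)
    return [[img[y][x] + gain * (img[y][x] - bg[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]
```

On a flat image with one bright pixel, the bright pixel is pushed further above its surroundings while flat regions stay nearly unchanged, which is the fusion effect the abstract describes.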
3

Zhao, Wen Dong, You Dong Zhang, and Chun Xia Jin. "A New Method of Fundus Image Enhancement Based on Rough Set and Wavelet Transform." Applied Mechanics and Materials 397-400 (September 2013): 2205–8. http://dx.doi.org/10.4028/www.scientific.net/amm.397-400.2205.

Abstract:
Fundus images are complex images with many details. Building on the inadequate fuzzy enhancement algorithm proposed by Pal et al., this article proposes an improved rough-set algorithm for fundus image enhancement. The fundus image is first decomposed at multiple scales by the wavelet transform; the sub-images are then enhanced using rough sets to improve their visual quality; finally, the processed sub-images are reconstructed to generate a new enhanced image. Compared with the Pal algorithm, the new algorithm not only overcomes the weakness of a fixed threshold value but also reduces the number of iterations. Experimental results show that the improved algorithm enhances fundus images more effectively and renders the various details of fundus images more clearly.
4

Hernandez-Matas, Carlos, Xenophon Zabulis, Areti Triantafyllou, Panagiota Anyfanti, Stella Douma, and Antonis A. Argyros. "FIRE: Fundus Image Registration dataset." Modeling and Artificial Intelligence in Ophthalmology 1, no. 4 (July 7, 2017): 16–28. http://dx.doi.org/10.35119/maio.v1i4.42.

Abstract:
Purpose: Retinal image registration is a useful tool for medical professionals. However, the performance of registration methods has not been consistently assessed in the literature. To address that, a dataset of retinal image pairs annotated with ground truth and an evaluation protocol for registration methods are proposed. Methods: The dataset comprises 134 retinal fundus image pairs. These pairs are classified into three categories according to characteristics that are relevant to indicative registration applications, such as the degree of overlap between images and the presence or absence of anatomical differences. Ground truth in the form of corresponding image points and a protocol to evaluate registration performance are provided. Results: The proposed protocol is shown to enable quantitative and comparative evaluation of retinal registration methods under a variety of conditions. Conclusion: This work enables the fair comparison of retinal registration methods. It also helps researchers select the registration method that is most appropriate for a specific target use.
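A common way to score registration against point correspondences of this kind is the mean Euclidean distance between ground-truth points and their registered counterparts. This is an assumption for illustration only; the abstract does not spell out FIRE's exact error measure, and the function name is hypothetical.

```python
import math

def mean_registration_error(ref_points, registered_points):
    """Mean Euclidean distance between corresponding ground-truth
    points (reference image) and the same points mapped through the
    registration under evaluation. Lower is better."""
    assert len(ref_points) == len(registered_points)
    total = 0.0
    for (x1, y1), (x2, y2) in zip(ref_points, registered_points):
        total += math.hypot(x1 - x2, y1 - y2)
    return total / len(ref_points)
```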
5

Ahn, Sangil, Quang T. M. Pham, Jitae Shin, and Su Jeong Song. "Future Image Synthesis for Diabetic Retinopathy Based on the Lesion Occurrence Probability." Electronics 10, no. 6 (March 19, 2021): 726. http://dx.doi.org/10.3390/electronics10060726.

Abstract:
Diabetic Retinopathy (DR) is one of the major causes of blindness. If the lesions observed in DR occur in the central part of the fundus, it can cause severe vision loss, and we call this symptom Diabetic Macular Edema (DME). All patients with DR potentially have DME since DME can occur in every stage of DR. While synthesizing future fundus images, the task of predicting the progression of the disease state is very challenging since we need a lot of longitudinal data over a long period of time. Even if the longitudinal data are collected, there is a pixel-level difference between the current fundus image and the target future image. It is difficult to train a model based on deep learning for synthesizing future fundus images that considers the lesion change. In this paper, we synthesize future fundus images by considering the progression of the disease with a two-step training approach to overcome these problems. In the first step, we concentrate on synthesizing a realistic fundus image using only a lesion segmentation mask and vessel segmentation mask from a large dataset for a fundus generator. In the second step, we train a lesion probability predictor to create a probability map that contains the occurrence probability information of the lesion. Finally, based on the probability map and current vessel, the pre-trained fundus generator synthesizes a predicted future fundus image. We visually demonstrate not only the capacity of the fundus generator that can control the pathological information but also the prediction of the disease progression on fundus images generated by our framework. Our framework achieves an F1-score of 0.74 for predicting DR severity and 0.91 for predicting DME occurrence. We demonstrate that our framework has a meaningful capability by comparing the scores of each class of DR severity, which are obtained by passing the predicted future image and real future image through an evaluation model.
6

Firdaus Ahmad Fadzil, Ahmad, Zaaba Ahmad, Noor Elaiza Abd Khalid, and Shafaf Ibrahim. "Retinal Fundus Image Blood Vessels Segmentation via Object-Oriented Metadata Structures." International Journal of Engineering & Technology 7, no. 4.33 (December 9, 2018): 110. http://dx.doi.org/10.14419/ijet.v7i4.33.23511.

Abstract:
The retinal fundus image is a crucial tool for ophthalmologists to diagnose eye-related diseases. These images provide visual information about the interior layer of the retina structures, such as the optic disc, optic cup, blood vessels, and macula, that can assist ophthalmologists in determining the health of an eye. Segmentation of blood vessels in fundus images is one of the most fundamental phases in detecting diseases such as diabetic retinopathy. However, the ambiguity of the retina structures in retinal fundus images presents a challenge for researchers segmenting the blood vessels. Extensive pre-processing and training of the images are necessary for precise segmentation, which is intricate and laborious. This paper proposes the implementation of object-oriented-based metadata (OOM) structures for each pixel in the retinal fundus images. These structures comprise additional metadata beyond the conventional red, green, and blue data for each pixel within the images. The segmentation of the blood vessels in the retinal fundus images is performed by considering this additional metadata, which describes the location, color spaces, and neighboring pixels of each individual pixel. The results show that accurate segmentation of retinal fundus blood vessels can be achieved by employing a straightforward thresholding method via the OOM structures, without extensive pre-processing or data training.
7

Prastyo, Pulung Hendro, Amin Siddiq Sumi, and Annis Nuraini. "Optic Cup Segmentation using U-Net Architecture on Retinal Fundus Image." JITCE (Journal of Information Technology and Computer Engineering) 4, no. 02 (September 30, 2020): 105–9. http://dx.doi.org/10.25077/jitce.4.02.105-109.2020.

Abstract:
Retinal fundus images are used by ophthalmologists to diagnose eye diseases such as glaucoma. The diagnosis of glaucoma is made by measuring changes in the cup-to-disc ratio (CDR). Segmenting the optic cup helps ophthalmologists calculate the CDR from the retinal fundus image. This study proposed a deep learning approach using the U-Net architecture to carry out the segmentation task. The proposed method was evaluated on 650 color retinal fundus images. U-Net was configured with 160 epochs, an image input size of 128x128, a batch size of 32, the Adam optimizer, and binary cross-entropy as the loss function. We employed the Dice coefficient as the evaluator, and the segmentation results were compared to the ground truth images. According to the experimental results, optic cup segmentation achieved a Dice coefficient of 98.42% and a loss of 1.58%. These results imply that our proposed method succeeded in segmenting the optic cup on color retinal fundus images.
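The Dice coefficient used as the evaluator above is straightforward to compute; a minimal sketch for binary masks follows (the function name and flat-list mask representation are illustrative choices, not from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice coefficient between two binary masks given as flat lists
    of 0/1 values: 2*|A∩B| / (|A| + |B|).
    Returns 1.0 when both masks are empty."""
    inter = sum(p & t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 1.0 if size == 0 else 2.0 * inter / size
```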
8

Kaur, Kiranjit, and Priyadarshni. "Retinal Fundus Detection Using Skew Symmetric Matrix." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 7 (July 29, 2017): 103. http://dx.doi.org/10.23956/ijarcsse.v7i7.107.

Abstract:
The retina is the light-sensitive tissue lining the back of our eye. Light rays are focused onto the retina through our cornea, pupil, and lens. The retina converts the light rays into impulses that travel through the optic nerve to our brain, where they are interpreted as images. The task of manually segmenting the fundus from retina images is generally time-consuming and difficult. In most settings, the task is done by marking the fundus regions slice-by-slice, which limits the human rater’s view and generates distorted images. Manual segmentation is also typically done largely based on a single image, with intensity enhancement provided by an injected contrast agent. In the current research, the fundus is detected and extracted in the retinal image. The fundus is distinguished from normal tissue by image intensity, using threshold-based or region-growing techniques. The fundus in this approach is detected with the help of geometric features. A skew symmetric matrix is used to avoid any angular orientation. In this approach, the accuracy of fundus detection is quite promising and improves according to the area and the acceptance rate. Once the image is loaded, it is filtered and normalized. Then superpixels are generated using a linear iterative clustering approach and the features are generated. From the available set of features, some are selected using a sequential forward selection approach. A classifier is constructed in order to determine different classes in a test image. The proposed work is a two-class problem to which an algorithm based on the skew symmetric matrix is applied. Experimental results show substantial improvements in the accuracy and performance of fundus detection, as well as in the false acceptance rate and false rejection rate.
9

Ward, Nicholas P., Stephen Tomlinson, and Christopher J. Taylor. "Image Analysis of Fundus Photographs." Ophthalmology 96, no. 1 (January 1989): 80–86. http://dx.doi.org/10.1016/s0161-6420(89)32925-3.

10

Qureshi, Imran, Jun Ma, and Kashif Shaheed. "A Hybrid Proposed Fundus Image Enhancement Framework for Diabetic Retinopathy." Algorithms 12, no. 1 (January 4, 2019): 14. http://dx.doi.org/10.3390/a12010014.

Abstract:
Diabetic retinopathy (DR) is a complication of diabetes that causes visual impairment and is diagnosed across ethnicities in the working-age population worldwide. Fundus angiography is a widely applicable modality used by ophthalmologists and computerized applications to detect DR-based clinical features such as microaneurysms (MAs), hemorrhages (HEMs), and exudates (EXs) for early screening of DR. Fundus images are usually acquired using funduscopic cameras in varied light conditions and angles. Therefore, these images are prone to non-uniform illumination, poor contrast, transmission error, low brightness, and noise problems. This paper presents a novel and real-time mechanism of fundus image enhancement used for early grading of diabetic retinopathy, macular degeneration, retinal neoplasms, and choroid disruptions. The proposed system is based on two steps: (i) an RGB fundus image is initially taken and converted into the lightness component (denoted as J) of the CIECAM02 color appearance model to obtain image information in grayscale with bright light; (ii) the obtained J component is then processed using a nonlinear contrast enhancement approach to improve the textural and color features of the fundus image without any further extraction steps. To test and evaluate the strength of the proposed technique, several performance and quality parameters—namely peak signal-to-noise ratio (PSNR), contrast-to-noise ratio (CNR), entropy (content information), histograms (intensity variation), and the structural similarity index measure (SSIM)—were applied to 1240 fundus images from two publicly available datasets, DRIVE and MESSIDOR. The experiments determined that the proposed enhancement procedure outperformed histogram-based approaches in terms of contrast, sharpness of fundus features, and brightness, suggesting that it can be a suitable preprocessing tool for algorithms that segment and classify DR-related features.
11

Guo, Jifeng, Zhiqi Pang, Fan Yang, Jiayou Shen, and Jian Zhang. "Study on the Method of Fundus Image Generation Based on Improved GAN." Mathematical Problems in Engineering 2020 (July 8, 2020): 1–13. http://dx.doi.org/10.1155/2020/6309596.

Abstract:
With the continuous development of deep learning, the performance of the intelligent diagnosis system for ocular fundus diseases has been significantly improved, but during the system training process, problems like lack of fundus samples and uneven sample distribution (the number of disease samples is much smaller than the number of normal samples) have become increasingly prominent. In view of the previous issues, this paper proposes a method for generating fundus images based on “Combined GAN” (Com-GAN), which can generate both normal fundus images and fundus images with hard exudates, so that the sample distribution can be more even, while the fundus data are expanded. First, this paper uses existing images to train a Com-GAN, which consists of two subnetworks: im-WGAN and im-CGAN; then, it uses the trained model to generate fundus images, then performs qualitative and quantitative evaluation on the generated images, and adds the images to the original image set to expand the datasets; finally, based on this expanded training set, it trains the hard exudate detection system. The expanded datasets effectively improve the generalization ability of the system on the public datasets DIARETDB1 and e-ophtha EX, thereby verifying the effectiveness of the proposed method.
12

Lavanya, R., G. K. Rajini, and G. Vidhya Sagar. "Retinal vessel feature extraction from fundus image using image processing techniques." International Journal of Engineering & Technology 7, no. 2 (May 8, 2018): 687. http://dx.doi.org/10.14419/ijet.v7i2.8892.

Abstract:
Retinal vessel detection in fundus images plays a crucial role in the medical field for the proper diagnosis and treatment of diseases such as diabetic retinopathy and hypertensive retinopathy. This paper deals with image processing techniques for automatic detection of blood vessels in fundus retinal images using the MATLAB tool. The approach uses intensity information, local phase-based enhancement filter techniques, and morphological operators to provide better accuracy. Objective: The effect of diabetes on the eye is called diabetic retinopathy. At the early stages of the disease, blood vessels in the retina become weakened and leak, forming small hemorrhages. As the disease progresses, blood vessels may become blocked, sometimes leading to permanent vision loss. The aim is to help clinicians diagnose diabetic retinopathy in retinal images through early detection of abnormalities with automated tools. Methods: Fundus photography is an imaging technology used to capture retinal images of diabetic patients through a fundus camera. Adaptive thresholding is used as a pre-processing technique to increase the contrast, and filters are applied to enhance image quality. Morphological processing is used to detect the shape of the blood vessels, as they are nonlinear in nature. Results: Image features such as mean, standard deviation, and entropy, along with Gray Level Co-occurrence Matrix features such as contrast and energy, are calculated for the detected vessels for textural analysis. Conclusion: In diabetic patients, the eyes are affected more severely than other organs. Early detection of the vessel structure in retinal images with computer-assisted tools may assist clinicians in proper diagnosis.
13

Ramasubramanian, B., and S. Selvaperumal. "A Novel Efficient Approach for the Screening of New Abnormal Blood Vessels in Color Fundus Images." Applied Mechanics and Materials 573 (June 2014): 808–13. http://dx.doi.org/10.4028/www.scientific.net/amm.573.808.

Abstract:
Reliable detection of abnormal vessels in color fundus images is still a significant challenge in medical image processing. An efficient and robust approach for automatic detection of abnormal blood vessels in digital color fundus images is presented in this paper. First, the fundus images are preprocessed by applying a 3x3 median filter. Then, the images are segmented using a novel morphological operation. To classify the segmented images into normal and abnormal, seven features based on shape, contrast, position, and density are extracted. Finally, these features are classified using a non-linear Support Vector Machine (SVM) classifier. The average computation time for blood vessel detection was less than 2.4 s, with a success rate of 99%. The performance of the proposed method is measured on the publicly available DRIVE and STARE databases.
14

Pao, Shu-I., Hong-Zin Lin, Ke-Hung Chien, Ming-Cheng Tai, Jiann-Torng Chen, and Gen-Min Lin. "Detection of Diabetic Retinopathy Using Bichannel Convolutional Neural Network." Journal of Ophthalmology 2020 (June 20, 2020): 1–7. http://dx.doi.org/10.1155/2020/9139713.

Abstract:
Deep learning on fundus photographs has emerged as a practical and cost-effective technique for automatic screening and diagnosis of severe diabetic retinopathy (DR). The entropy image of the luminance of a fundus photograph has been demonstrated to increase the detection performance for referable DR using a convolutional neural network- (CNN-) based system. In this paper, the entropy image computed from the green component of the fundus photograph is proposed. In addition, image enhancement by unsharp masking (UM) is utilized for preprocessing before calculating the entropy images. A bichannel CNN incorporating the features of both the entropy images of the gray level and of the green component preprocessed by UM is also proposed to improve the detection performance of referable DR by deep learning.
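An entropy image of the kind described above assigns each pixel the Shannon entropy of the intensity histogram in a small window around it. A minimal pure-Python sketch follows; the window size, border clamping, and function name are assumptions for illustration (the paper applies this to the green channel after unsharp masking, which is not reproduced here).

```python
import math
from collections import Counter

def local_entropy(img, radius=1):
    """Entropy image: each output pixel is the Shannon entropy (bits)
    of the intensity histogram in its (2r+1)x(2r+1) neighborhood,
    with borders handled by clamping."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            counts = Counter()
            n = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    counts[img[yy][xx]] += 1
                    n += 1
            out[y][x] = -sum((c / n) * math.log2(c / n)
                             for c in counts.values())
    return out
```

Uniform regions yield zero entropy while textured regions (vessels, lesions) yield high values, which is why the entropy image emphasizes DR-relevant structure.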
15

Sundaram, Ramakrishnan, Ravichandran KS, Premaladha Jayaraman, and Venkatraman B. "Extraction of Blood Vessels in Fundus Images of Retina through Hybrid Segmentation Approach." Mathematics 7, no. 2 (February 13, 2019): 169. http://dx.doi.org/10.3390/math7020169.

Abstract:
A hybrid segmentation algorithm is proposed in this paper to extract the blood vessels from the fundus image of the retina. The fundus camera captures the posterior surface of the eye, and the captured images are used to diagnose diseases like diabetic retinopathy, retinoblastoma, retinal haemorrhage, etc. Segmentation or extraction of blood vessels is highly required, since the analysis of vessels is crucial for diagnosis, treatment planning, and execution of clinical outcomes in the field of ophthalmology. It is derived from the literature review that no unique segmentation algorithm is suitable for images of different eye-related diseases, and the degradation of the vessels differs from patient to patient. If the blood vessels are extracted from the fundus images, it will make the diagnosis process easier. Hence, this paper aims to frame a hybrid segmentation algorithm exclusively for the extraction of blood vessels from the fundus image. The proposed algorithm is hybridized with morphological operations, the bottom hat transform, the multi-scale vessel enhancement (MSVE) algorithm, and image fusion. After execution of the proposed segmentation algorithm, an area-based morphological operator is applied to highlight the blood vessels. To validate the proposed algorithm, the results are compared with the ground truth of the High-Resolution Fundus (HRF) images dataset. Upon comparison, it is inferred that the proposed algorithm segments the blood vessels with more accuracy than the existing algorithms.
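The bottom-hat transform named in the hybrid pipeline above is the morphological closing (dilation followed by erosion) minus the original image, which makes thin dark structures such as vessels appear bright. A minimal sketch with a square structuring element follows; this is a simplification, and the helper names and clamped-border handling are illustrative choices, not from the paper.

```python
def _morph(img, radius, op):
    """Apply a grayscale morphological filter (op = max for dilation,
    min for erosion) with a (2r+1)x(2r+1) square structuring element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-radius, radius + 1)
                    for dx in range(-radius, radius + 1)]
            out[y][x] = op(vals)
    return out

def bottom_hat(img, radius=1):
    """Bottom-hat transform: morphological closing minus the original.
    Dark, thin structures (e.g. vessels) come out bright."""
    closed = _morph(_morph(img, radius, max), radius, min)
    return [[closed[y][x] - img[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]
```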
16

Kaur, Jaskirat, and Deepti Mittal. "Construction of benchmark retinal image database for diabetic retinopathy analysis." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 234, no. 9 (July 1, 2020): 1036–48. http://dx.doi.org/10.1177/0954411920938569.

Abstract:
Diabetic retinopathy, a symptomless medical condition of diabetes, is one of the significant causes of vision impairment all over the world. Early detection and diagnosis can decrease the occurrence of acute vision loss and enhance the efficiency of treatment. Fundus imaging, a non-invasive diagnostic technique, is the most frequently used mode for analyzing retinal abnormalities related to diabetic retinopathy. Computer-aided methods based on retinal fundus images support quick diagnosis, impart an additional perspective during decision-making, and serve as an efficient means to assess the response of retinal abnormalities to treatment. However, in order to evaluate computer-aided systems, a benchmark database of clinical retinal fundus images is required. Therefore, a representative database comprising 2942 clinical retinal fundus images is developed and presented in this work. This clinical database, with varying attributes such as position, dimensions, shape, and color, is formed to evaluate the generalization capability of computer-aided systems for diabetic retinopathy diagnosis. A framework for the development of a benchmark retinal fundus image database is also proposed. The developed database comprises medical image annotations for each image from expert ophthalmologists, corresponding to anatomical structures, retinal lesions, and the stage of diabetic retinopathy. In addition, the substantial performance comparison capability of the proposed database aids in assessing the candidature of different methods and, subsequently, their usage in medical practice for real-time applications.
17

Garg, Meenu, and Sheifali Gupta. "Extraction of Vasculature Map of Color Retinal Fundus Image." Journal of Computational and Theoretical Nanoscience 16, no. 10 (October 1, 2019): 4188–201. http://dx.doi.org/10.1166/jctn.2019.8500.

Abstract:
This paper presents a novel unsupervised method for the extraction of the vasculature map from colored retinal fundus images. The proposed technique uses a fusion of bimodal masking and a global thresholding technique for the extraction of vessels. Adaptive histogram equalization is used to enhance the retinal input fundus images, while an average filter is applied to the masked images to remove noise. At this stage, bimodal masking is used to generate a masked image that excludes the pixels belonging to the background. This reduces the analysis time and computational effort, as operations are focused only on the object pixels. After that, a global thresholding technique is applied to the masked foreground image, which produces a vasculature map with a border. Since the border is of no concern in our system, it is removed using the mask generated through bimodal masking. Fundus images from the DRIVE and STARE databases are used for extensive computations. The results are encouraging, as the proposed system shows better outcomes on three quality measures: average sensitivity, specificity, and accuracy, which come out to 84.18%, 96.68%, and 95.68% on DRIVE and 81.79%, 90.74%, and 90.08% on STARE, respectively. The results prove that the proposed methodology is capable of extracting the vasculature map accurately, which can be further helpful for the diagnosis, screening, and treatment of various disorders visible in fundus images.
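The abstract does not name the specific global thresholding rule it uses on the masked foreground. One standard choice, shown here purely as an illustration and not as the paper's method, is Otsu's method, which picks the threshold that maximizes the between-class variance of the intensity histogram:

```python
def otsu_threshold(pixels):
    """Otsu's global threshold for integer pixel values in 0..255:
    choose the t that maximizes between-class variance when pixels
    are split into values <= t (background) and > t (foreground)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(256):
        w_bg += hist[t]          # background pixel count
        if w_bg == 0:
            continue
        w_fg = total - w_bg      # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a clearly bimodal histogram (as the bimodal masking step in the paper presupposes), the returned threshold falls between the two modes.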
18

Madjarov, B. D. "Automated, real time extraction of fundus images from slit lamp fundus biomicroscope video image sequences." British Journal of Ophthalmology 84, no. 6 (June 1, 2000): 645–47. http://dx.doi.org/10.1136/bjo.84.6.645.

19

Balkys, Gediminas, and Gintautas Dzemyda. "SEGMENTING THE EYE FUNDUS IMAGES FOR IDENTIFICATION OF BLOOD VESSELS." Mathematical Modelling and Analysis 17, no. 1 (February 1, 2012): 21–30. http://dx.doi.org/10.3846/13926292.2012.644046.

Abstract:
Retinal (eye fundus) images are widely used for diagnostic purposes by ophthalmologists. The normal features of eye fundus images include the optic nerve disc, fovea, and blood vessels. Algorithms for identifying blood vessels in the eye fundus image generally fall into two classes: extraction of vessel information and segmentation of vessel pixels. Algorithms of the first group start from a known vessel point and trace the vasculature structure in the image. Algorithms of the second group perform a binary classification (vessel or non-vessel, i.e., background) according to some threshold. We focus here on binarization [4] methods that adapt the threshold value at each pixel to the global/local image characteristics. Global binarization methods [5] try to find a single threshold value for the whole image. Local binarization methods [3] compute thresholds individually for each pixel using information from the local neighborhood of the pixel. In this paper, we modify and improve the Sauvola local binarization method [3] by extending its abilities so that it can be applied to eye fundus image analysis. This method has been adopted for automatic detection of blood vessels in retinal images. We suggest automatic parameter selection for the Sauvola method. Our modification allows the blood vessels to be determined/extracted almost independently of the brightness of the picture.
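The Sauvola method modified above computes a per-pixel threshold T = m · (1 + k · (s/R − 1)) from the local mean m and standard deviation s. A minimal sketch follows; the parameter values, window handling, and foreground test direction are illustrative assumptions (for dark vessels on a bright background the comparison is typically inverted), and the paper's automatic parameter selection is not reproduced.

```python
import math

def sauvola_threshold(img, y, x, radius=1, k=0.2, R=128.0):
    """Sauvola local threshold at pixel (y, x):
    T = m * (1 + k * (s / R - 1)), where m and s are the mean and
    standard deviation of the clamped (2r+1)x(2r+1) neighborhood."""
    h, w = len(img), len(img[0])
    vals = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy = min(max(y + dy, 0), h - 1)
            xx = min(max(x + dx, 0), w - 1)
            vals.append(img[yy][xx])
    m = sum(vals) / len(vals)
    s = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
    return m * (1 + k * (s / R - 1))

def binarize(img, radius=1, k=0.2):
    """Mark a pixel as foreground if it exceeds its local Sauvola
    threshold (invert the comparison for dark-on-bright vessels)."""
    return [[1 if img[y][x] > sauvola_threshold(img, y, x, radius, k) else 0
             for x in range(len(img[0]))] for y in range(len(img))]
```

Because the threshold follows the local mean, the decision is largely insensitive to slow brightness variation across the image, which is the property the abstract highlights.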
20

Bhatkalkar, Bhargav, Abhishek Joshi, Srikanth Prabhu, and Sulatha Bhandary. "Automated fundus image quality assessment and segmentation of optic disc using convolutional neural networks." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 1 (February 1, 2020): 816. http://dx.doi.org/10.11591/ijece.v10i1.pp816-827.

Full text
Abstract:
An automated fundus image analysis is used as a tool for the diagnosis of common retinal diseases. A good quality fundus image results in better diagnosis, and hence discarding degraded fundus images at the time of screening itself provides an opportunity to retake adequate fundus photographs, which saves both time and resources. In this paper, we propose a novel fundus image quality assessment (IQA) model using a convolutional neural network (CNN) based on the quality of optic disc (OD) visibility. We localize the OD by transfer learning with the Inception v-3 model. Precise segmentation of the OD is done using the GrabCut algorithm. Contour operations are applied to the segmented OD to approximate it to the nearest circle for finding its center and diameter. For training the model, we use publicly available fundus databases and a private hospital database. We have attained excellent classification accuracy for fundus IQA on the DRIVE, CHASE-DB, and HRF databases. For OD segmentation, we have evaluated our method on the DRINS-DB, DRISHTI-GS, and RIM-ONE v.3 databases and compared the results with existing state-of-the-art methods. Our proposed method outperforms existing methods for OD segmentation on the Jaccard index and F-score metrics.
APA, Harvard, Vancouver, ISO, and other styles
21

Lamminen, Heikki. "Picture archiving and fundus imaging in a glaucoma clinic." Journal of Telemedicine and Telecare 9, no. 2 (April 1, 2003): 114–16. http://dx.doi.org/10.1258/135763303321327993.

Full text
Abstract:
Ophthalmological image archiving and distribution can be automated using a picture archiving and communication system (PACS). A fundus PACS has been in clinical use since February 2000 at the ophthalmology clinic of Tampere University Hospital. It consists of a digital fundus camera, an imaging workstation, from which new patients can be added to the archive, 10 viewing stations and an image archive server. In glaucoma imaging, the fundus images taken from a patient are transferred from the imaging workstation to the image archive server and are then immediately available from the physician's viewing workstation; the transfer time of an average image, of 350 kbit, is 0.0035 s, even though the archive is located 5 km away. After 18 months of operation there were over 16,000 images archived; these took 5.3 GByte of a total storage capacity of 41.9 GByte. The network and archive server achieved 99% reliability in use. Digital imaging makes it possible to shift ophthalmology clinics towards more patient-oriented treatment procedures.
APA, Harvard, Vancouver, ISO, and other styles
22

Kulikov, A. N., D. S. Maltsev, M. A. Burnasheva, V. V. Volkov, V. F. Danilichev, and R. L. Troyanovskiy. "Wide-Field Imaging with NAVILAS Laser System." Ophthalmology in Russia 16, no. 2 (June 30, 2019): 210–17. http://dx.doi.org/10.18008/1816-5095-2019-2-210-217.

Full text
Abstract:
Purpose: to study the potential of wide-field imaging with the NAVILAS laser system. Material and methods. This study included patients diagnosed by indirect ophthalmoscopy with one of the following: diabetic retinopathy (6 eyes), central retinal vein occlusion (5 eyes), choroidal melanoma (3 eyes), rhegmatogenous retinal detachment (4 eyes), and peripheral chorioretinal degeneration (10 eyes). Using the NAVILAS 532 laser system and a wide-field contact lens (HR Wide Field (VOLK)), a wide-field central image and a panoramic image (consisting of 4 to 6 images) were obtained in all patients. Fundus images were evaluated according to their diagnostic value versus indirect ophthalmoscopy and the width of the viewing angle versus standard color fundus photography (55°). In each patient, within a single session, the following were obtained: 1) a central fundus image and 2) a panoramic image (4-field and in dynamic mode). In a subgroup of patients with central retinal vein occlusion and lattice retinal degeneration, we studied the feasibility of combining wide-field imaging with simultaneous laser photocoagulation. Results. A single-field image obtained with NAVILAS allows visualization of up to 130.3 ± 9.6° of the eye fundus, while four-field and dynamic acquisition cover up to 150.1 ± 8.9° and 171.3 ± 17.0°, respectively. Representative findings of diabetic retinopathy, central retinal vein occlusion, choroidal melanoma, rhegmatogenous retinal detachment, and peripheral lattice degeneration were identified in all cases. Visualization was insufficient for "snail track" degeneration, because the subtle retinal and choroidal changes were hardly visible on the low-magnification image. In 4 patients with lattice retinal degeneration and 3 patients with central retinal vein occlusion, both wide-field imaging and laser photocoagulation were performed within a single session. Surgical goals were achieved in all cases. Conclusion.
Wide-field imaging with the NAVILAS laser system demonstrated high potential for documentation of the most widespread eye fundus diseases and represents an adequate alternative to wide-field fundus cameras. Aside from wide-field imaging, this approach also allows for simultaneous laser photocoagulation across the entire eye fundus, including the far peripheral retina.
APA, Harvard, Vancouver, ISO, and other styles
23

Malviya, Richa. "Retinal Fundus Image Enhancement and Segmentation." Journal of Medical Imaging and Health Informatics 3, no. 4 (December 1, 2013): 568–74. http://dx.doi.org/10.1166/jmihi.2013.1206.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Sadarajupalli, Krishnaveni, and Sudhakar Putheti. "Rough Texton based Fundus Image Retrieval." International Journal of Computer Applications 132, no. 15 (December 17, 2015): 19–25. http://dx.doi.org/10.5120/ijca2015907663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Nath, Malaya Kumar, and Samarendra Dandapat. "Multiscale ICA for fundus image analysis." International Journal of Imaging Systems and Technology 23, no. 4 (November 13, 2013): 327–37. http://dx.doi.org/10.1002/ima.22067.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Kim, Jooyoung, Sojung Go, Kyoung Jin Noh, Sang Jun Park, and Soochahn Lee. "Fully Leveraging Deep Learning Methods for Constructing Retinal Fundus Photomontages." Applied Sciences 11, no. 4 (February 16, 2021): 1754. http://dx.doi.org/10.3390/app11041754.

Full text
Abstract:
Retinal photomontages, which are constructed by aligning and integrating multiple fundus images, are useful in diagnosing retinal diseases affecting the peripheral retina. We present a novel framework for constructing retinal photomontages that fully leverages recent deep learning methods. Deep learning based object detection is used to define the order of image registration and blending. Deep learning based vessel segmentation is used to enhance image texture to improve registration performance within a two-step image registration framework comprising rigid and non-rigid registration. Experimental evaluation demonstrates the robustness of our montage construction method, with an increased number of successfully integrated images as well as a reduction in image artifacts.
APA, Harvard, Vancouver, ISO, and other styles
27

Somasundaram, K., and P. Alli Rajendran. "Diagnosing and Ranking Retinopathy Disease Level Using Diabetic Fundus Image Recuperation Approach." Scientific World Journal 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/534045.

Full text
Abstract:
Retinal fundus images are widely used in diagnosing different types of eye diseases. Existing methods such as Feature Based Macular Edema Detection (FMED) and the Optimally Adjusted Morphological Operator (OAMO) effectively detected the presence of exudation in fundus images and identified the true positive ratio of exudate detection, respectively. These methods, however, did not incorporate a more detailed feature selection technique for the detection of diabetic retinopathy. To categorize the exudates, a Diabetic Fundus Image Recuperation (DFIR) method based on a sliding window approach is developed in this work to select the features of the optic cup in digital retinal fundus images. The DFIR feature selection uses a collection of sliding windows with varying range to obtain the features based on the histogram value using a Group Sparsity Nonoverlapping Function. Using a support vector model in the second phase, the DFIR method, based on a Spiral Basis Function, effectively ranks the diabetic retinopathy disease level. The ranking of disease level on each candidate set provides a promising basis for developing a practically automated and assisted diabetic retinopathy diagnosis system. Experimental work on digital fundus images using the DFIR method evaluates factors such as sensitivity, ranking efficiency, and feature selection time.
APA, Harvard, Vancouver, ISO, and other styles
28

Tadasare, S. S., and S. S. Pawar. "Early Detection of High Blood Pressure and Diabetic Retinopathy on Retinal Fundus Images Using CBRIR Based on Lifting Wavelets." International Journal of Advances in Applied Sciences 7, no. 4 (December 1, 2018): 334. http://dx.doi.org/10.11591/ijaas.v7.i4.pp334-346.

Full text
Abstract:
In this paper we present a lifting-wavelet-based CBRIR image retrieval system that uses color and texture as visual features to describe the content of retinal fundus images. Our contribution is in three directions. First, we use the 9/7 lifting wavelet for lossy and the SPL 5/3 for lossless compression to extract texture features from arbitrarily shaped retinal fundus regions separated from an image, to increase the system's effectiveness. Second, this process is performed offline before query processing; therefore, to answer a query, our system does not need to search the entire image database; instead, only images of patients with a similar class type need to be searched for image similarity. Third, to further increase the retrieval accuracy of our system, we combine the region-based features extracted from image regions with global features extracted from the whole image, namely texture using the lifting wavelet and HSV color histograms. Our proposed system has the advantage of increasing retrieval accuracy and decreasing retrieval time. The experimental evaluation of the system is based on the db1 online retinal fundus color image database. From the experimental results, it is evident that our system achieves significantly better accuracy than traditional wavelet-based systems. In our simulation analysis, we compare retrieval results based on features extracted from the whole image using the lossless 5/3 lifting wavelet, the lossy 9/7 lifting wavelet, and the traditional wavelet. The results demonstrate that each type of feature is effective for a particular type of retinal fundus disease according to its semantic contents, and the lossless 5/3 lifting wavelet gives better retrieval results for almost all semantic classes, outperforming the traditional wavelet by 4-10% in accuracy.
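Histogram intersection over a quantized HSV histogram is one standard way to realize the global color feature the abstract mentions. The sketch below builds such a histogram and compares two of them; the bin counts are illustrative, not the paper's configuration:

```python
import numpy as np

def hsv_histogram(rgb, bins=(8, 4, 4)):
    """Normalized 3D HSV histogram of an HxWx3 RGB image in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(-1)                       # value
    c = v - rgb.min(-1)                   # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)  # saturation
    h = np.zeros_like(v)                  # hue in [0, 1)
    mask = c > 0
    rmax = mask & (v == r)
    gmax = mask & (v == g) & ~rmax
    bmax = mask & ~rmax & ~gmax
    h[rmax] = ((g - b)[rmax] / c[rmax]) % 6
    h[gmax] = (b - r)[gmax] / c[gmax] + 2
    h[bmax] = (r - g)[bmax] / c[bmax] + 4
    h = h / 6.0
    hist, _ = np.histogramdd(
        np.stack([h.ravel(), s.ravel(), v.ravel()], axis=1),
        bins=bins, range=[(0, 1)] * 3)
    return hist.ravel() / hist.sum()

def histogram_intersection(a, b):
    # 1.0 for identical normalized histograms, 0.0 for disjoint ones.
    return float(np.minimum(a, b).sum())
```

Because the histograms are normalized, the intersection score is directly comparable across images of different sizes, which is convenient when ranking a database of candidates.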
APA, Harvard, Vancouver, ISO, and other styles
29

Terasaki, Hiroto, Shozo Sonoda, Masatoshi Tomita, and Taiji Sakamoto. "Recent Advances and Clinical Application of Color Scanning Laser Ophthalmoscope." Journal of Clinical Medicine 10, no. 4 (February 11, 2021): 718. http://dx.doi.org/10.3390/jcm10040718.

Full text
Abstract:
Scanning laser ophthalmoscopes (SLOs) have been available since the early 1990s, but they were not commonly used because their advantages were not sufficient to replace conventional color fundus photography. In recent years, color SLOs have improved significantly; colored SLO images are obtained by combining multiple SLO images taken with lasers of different wavelengths. A combination of these images from different lasers can create an image that is close to that of the real ocular fundus. One advantage of the advanced SLOs is that they can obtain images with a wider view of the ocular fundus while maintaining high resolution, even through non-dilated eyes. The current SLOs are superior to conventional fundus photography in their ability to image abnormal alterations of the retina and choroid. Thus, the purpose of this review is to present the characteristics of the current color SLOs and to show how they can help in diagnosis and in following changes after treatment. To accomplish these goals, we present our findings in patients with different types of retinochoroidal disorders.
APA, Harvard, Vancouver, ISO, and other styles
30

Sahana, H., Dr Archana Nandibewor, Dr Aijazahamed Qazi, and Dr Pushpalatha S. Nikkam. "Retinal Image Processing on Early Glaucoma Detection." Alinteri Journal of Agricultural Sciences 36, no. 1 (April 8, 2021): 138–41. http://dx.doi.org/10.47059/alinteri/v36i1/ajas21020.

Full text
Abstract:
Glaucoma is a chronic eye disorder which harms the eye's second cranial (optic) nerve. The optic nerve contains millions of nerve fibers, whose main function is to send the captured visual information from the retina to the brain. Escalated pressure in the human eye leads to glaucoma; this heavy pressure is known as intraocular pressure. It continuously damages the eye's optic nerve head and retina, eventually leading to vision loss. In this paper there are two datasets, including eye color images of both normal and affected persons. The principal aim of this project is to compare the color of the eye against these two datasets. A special camera attached to a low-power microscope is called a fundus camera or retinal camera, and the images captured by this type of camera are called fundus pictures [1]. It is a high-dimensional laser image. The MATLAB software tool is used to perform feature extraction on these fundus images. A color pixel in the affected area is measured to check whether a person is glaucomatous or not. If the final result is positive, then it is glaucoma.
APA, Harvard, Vancouver, ISO, and other styles
31

Bhargavi, V. Ratna, and V. Rajesh. "Computer Aided Bright Lesion Classification in Fundus Image Based on Feature Extraction." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 11 (July 24, 2018): 1850034. http://dx.doi.org/10.1142/s0218001418500349.

Full text
Abstract:
In this paper, a hybrid approach to fundus image classification for diabetic retinopathy (DR) lesions is proposed. Laplacian eigenmaps (LE), a nonlinear dimensionality reduction (NDR) technique, is applied to a high-dimensional scale-invariant feature transform (SIFT) representation of the fundus image for lesion classification. The applied NDR technique gives a low-dimensional intrinsic feature vector for lesion classification in fundus images. Publicly available databases are used for demonstrating the implemented strategy. The performance of the applied technique is evaluated in terms of sensitivity, specificity and accuracy using a support vector classifier. Compared to other feature vectors, the implemented LE-based feature vector yielded better classification performance. The accuracy obtained is 96.6% for SIFT-LE-SVM.
APA, Harvard, Vancouver, ISO, and other styles
32

Skokan, M., L. Kubečka, M. Wolf, K. Donath, J. Jan, G. Michelson, H. Niemann, and R. Chrástek. "Multimodal Retinal Image Registration for Optic Disk Segmentation." Methods of Information in Medicine 43, no. 04 (2004): 336–42. http://dx.doi.org/10.1055/s-0038-1633888.

Full text
Abstract:
Summary Objectives: The analysis of optic disk morphology by means of scanning laser tomography is an important step in glaucoma diagnosis. A method we developed for optic disk segmentation in images of the scanning laser tomograph is limited by noise, nonuniform illumination and the presence of blood vessels. Inspired by recent medical research, we wanted to develop a tool for improving optic disk segmentation by registering images of the scanning laser tomograph to color fundus photographs and applying a method we developed for optic disk segmentation in color fundus photographs. Methods: The segmentation of the optic disk for glaucoma diagnosis in images of the scanning laser tomograph is based on morphological operations, detection of anatomical structures and active contours, and has been described in a previous paper [1]. The segmentation of the optic disk in the fundus photographs is based on nonlinear filtering, the Canny edge detector and a modified Hough transform. The registration is based on mutual information, using simulated annealing to find the maxima. Results: The registration was successful 86.8% of the time when tested on 174 images. The results of the registration showed a very low displacement error of at most about 5 pixels. The correctness of the registration was manually evaluated by measuring distances between the real vessel borders and those in the registered image. Conclusions: We have developed a method for the registration of images of the scanning laser tomograph and fundus photographs. Our first experiments showed that optic disk segmentation can be improved by fusing information from both image modalities.
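The registration objective described here, mutual information between the two images' intensities, can be computed from a 2D joint histogram. A minimal sketch follows; the bin count is an assumption, and the paper's maximization via simulated annealing is not reproduced:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between the intensities of two
    equally sized images, estimated from a joint histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                         # joint distribution
    px = p.sum(axis=1, keepdims=True)       # marginal of a
    py = p.sum(axis=0, keepdims=True)       # marginal of b
    nz = p > 0                              # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

An optimizer such as simulated annealing would evaluate this score at candidate transform parameters and keep the transform that maximizes it; MI peaks when the two modalities are aligned, even though their intensity mappings differ.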
APA, Harvard, Vancouver, ISO, and other styles
33

Yang, Chun Lan, Ye Yuan, Bing Liu, Yan Qing Xue, and Shui Cai Wu. "Segmentation on the Key Structures of the Fundus Digital Image." Applied Mechanics and Materials 346 (August 2013): 53–58. http://dx.doi.org/10.4028/www.scientific.net/amm.346.53.

Full text
Abstract:
Fundus abnormalities are usually diagnosed by clinical ophthalmologists using equipment such as the ophthalmoscope, a process hampered by complex procedures, subjective error and low efficiency. Fortunately, the fundus can be examined through the key structures of the fundus digital image. Hence, it is necessary to develop a computer-aided fundus image processing system as a tool to assist in the diagnosis of diseases. Algorithms were proposed including pre-processing steps such as contrast enhancement and spatial filtering, binarization, morphology, boundary extraction and skeleton extraction. This paper preliminarily summarizes the key technologies for segmentation of the optic cup, optic disc and blood vessel structures. Experiments were performed in MATLAB. The automatic algorithms were shown to be able to detect the key structures of the fundus digital image, and the approach is expected to be applied in a computer-aided fundus image processing system.
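A minimal version of the described pre-processing and binarization chain (contrast enhancement, spatial filtering, binarization, morphological cleanup) can be sketched in Python/NumPy rather than the paper's MATLAB; the global threshold and filter sizes below are illustrative choices:

```python
import numpy as np
from scipy.ndimage import median_filter, binary_opening

def preprocess_and_binarize(gray, thresh=128):
    """Contrast stretch -> spatial (median) filtering -> global
    binarization -> morphological opening to remove speckle."""
    g = gray.astype(np.float64)
    g = (g - g.min()) / max(g.max() - g.min(), 1e-12) * 255.0  # stretch
    g = median_filter(g, size=3)          # suppress isolated noise pixels
    binary = g > thresh                   # global binarization
    return binary_opening(binary)         # morphological cleanup
```

Boundary and skeleton extraction would then operate on the cleaned binary mask, e.g. by edge tracing or thinning, as the abstract outlines.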
APA, Harvard, Vancouver, ISO, and other styles
34

J, Srujani, and K. Pramilarani. "An Image Processing Algorithm to Detect Exudates in Fundus Images." International Journal of Computer Sciences and Engineering 7, no. 4 (April 30, 2019): 96–99. http://dx.doi.org/10.26438/ijcse/v7i4.9699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

LEE, MICHAEL S., DAVID S. SHIN, and JEFFREY W. BERGER. "GRADING, IMAGE ANALYSIS, AND STEREOPSIS OF DIGITALLY COMPRESSED FUNDUS IMAGES." Retina 20, no. 3 (March 2000): 275–81. http://dx.doi.org/10.1097/00006982-200003000-00009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

En, Parvathy, and Bharadwaja Kumar G. "DIABETIC RETINOPATHY IMAGE CLASSIFICATION USING DEEP NEURAL NETWORK." Asian Journal of Pharmaceutical and Clinical Research 10, no. 13 (April 1, 2017): 461. http://dx.doi.org/10.22159/ajpcr.2017.v10s1.20512.

Full text
Abstract:
Healthcare is an important field where image classification has excellent value. An alarming healthcare problem recognized by the WHO from which the world suffers is diabetic retinopathy (DR). DR is a global epidemic which leads to vision loss. Diagnosing the disease using fundus images is a time-consuming task and requires experienced clinicians to detect the small changes. Here, we propose an approach to diagnose DR and its severity levels from fundus images using a convolutional neural network (CNN). Using the CNN, we develop a training model which identifies the features through iterations. Later, this training model classifies the retina images of patients according to the severity levels. In the healthcare field, efficiency and accuracy are important, so using deep learning algorithms for image classification can address these problems efficiently.
APA, Harvard, Vancouver, ISO, and other styles
37

Betzler, Bjorn Kaijun, Henrik Hee Seung Yang, Sahil Thakur, Marco Yu, Ten Cheer Quek, Zhi Da Soh, Geunyoung Lee, et al. "Gender Prediction for a Multiethnic Population via Deep Learning Across Different Retinal Fundus Photograph Fields: Retrospective Cross-sectional Study." JMIR Medical Informatics 9, no. 8 (August 17, 2021): e25165. http://dx.doi.org/10.2196/25165.

Full text
Abstract:
Background Deep learning algorithms have been built for the detection of systemic and eye diseases based on fundus photographs. The retina possesses features that can be affected by gender differences, and the extent to which these features are captured via photography differs depending on the retinal image field. Objective We aimed to compare deep learning algorithms’ performance in predicting gender based on different fields of fundus photographs (optic disc–centered, macula-centered, and peripheral fields). Methods This retrospective cross-sectional study included 172,170 fundus photographs of 9956 adults aged ≥40 years from the Singapore Epidemiology of Eye Diseases Study. Optic disc–centered, macula-centered, and peripheral field fundus images were included in this study as input data for a deep learning model for gender prediction. Performance was estimated at the individual level and image level. Receiver operating characteristic curves for binary classification were calculated. Results The deep learning algorithms predicted gender with an area under the receiver operating characteristic curve (AUC) of 0.94 at the individual level and an AUC of 0.87 at the image level. Across the three image field types, the best performance was seen when using optic disc–centered field images (younger subgroups: AUC=0.91; older subgroups: AUC=0.86), and algorithms that used peripheral field images had the lowest performance (younger subgroups: AUC=0.85; older subgroups: AUC=0.76). Across the three ethnic subgroups, algorithm performance was lowest in the Indian subgroup (AUC=0.88) compared to that in the Malay (AUC=0.91) and Chinese (AUC=0.91) subgroups when the algorithms were tested on optic disc–centered images. Algorithms’ performance in gender prediction at the image level was better in younger subgroups (aged <65 years; AUC=0.89) than in older subgroups (aged ≥65 years; AUC=0.82). 
Conclusions We confirmed that gender among the Asian population can be predicted with fundus photographs by using deep learning, and our algorithms’ performance in terms of gender prediction differed according to the field of fundus photographs, age subgroups, and ethnic groups. Our work provides a further understanding of using deep learning models for the prediction of gender-related diseases. Further validation of our findings is still needed.
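The AUC figures reported above can be computed without plotting an ROC curve at all, via the equivalent Mann-Whitney formulation; a small sketch (not the authors' pipeline):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the Mann-Whitney U statistic: the probability that a
    randomly chosen positive is scored above a randomly chosen
    negative, with ties counted as 0.5."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=np.float64)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

The pairwise comparison is O(P*N) and fine for evaluation-sized sets; rank-based variants scale to the hundreds of thousands of images used in this study.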
APA, Harvard, Vancouver, ISO, and other styles
38

Ilyasova, N. Yu, N. S. Demin, A. S. Shirokanev, A. V. Kupriyanov, and E. A. Zamytskiy. "Method for selection macular edema region using optical coherence tomography data." Computer Optics 44, no. 2 (April 2020): 250–58. http://dx.doi.org/10.18287/2412-6179-co-691.

Full text
Abstract:
The paper proposes a method for selecting the region of diabetic macular edema in fundus images using OCT data. The relevance of the work is due to the need to create support systems for laser coagulation in order to increase its effectiveness. The proposed algorithm is based on a set of image segmentation methods, as well as the search for keypoints and the compilation of their descriptors. The Canny method is used to find the boundary between the vitreous body and the retina in OCT images. A segmentation method based on the Kruskal algorithm for constructing the minimum spanning tree of a weighted connected undirected graph is used to delineate the retina down to the pigment layer in the image. Using the results of segmentation, a map of the thickness of the retina and its deviation from the norm was constructed. In the course of the research, optimal parameter values were selected for the Canny and graph segmentation algorithms, which achieve a segmentation error of 5%. The SIFT, SURF, and AKAZE methods were considered for superimposing the calculated maps of retinal thickness and its deviation from the norm on the fundus image. In cases where a picture from the fundus camera of the OCT apparatus is provided along with the OCT data, the SURF method makes it possible to accurately align the maps with the fundus image.
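Once the inner and outer retinal boundaries are segmented in each OCT B-scan, the thickness map reduces to a per-A-scan row difference scaled by the axial resolution. A toy sketch, using a simple gradient rule in place of the paper's Canny/graph-cut segmentation (the 3.9 µm axial resolution is an assumed instrument value, not from the paper):

```python
import numpy as np

def retina_boundaries(bscan):
    """Locate per-A-scan boundary rows in an OCT B-scan: a crude
    gradient stand-in for the Canny / graph segmentation described
    in the abstract."""
    grad = np.diff(bscan.astype(np.float64), axis=0)
    inner = grad.argmax(axis=0) + 1   # dark vitreous -> bright retina
    outer = grad.argmin(axis=0) + 1   # bright retina -> dark below pigment layer
    return inner, outer

def thickness_map_um(inner, outer, axial_res_um=3.9):
    # One thickness value per A-scan; stacking these rows across all
    # B-scans yields the 2D retinal thickness map used in the paper.
    return (outer - inner) * axial_res_um
```

Subtracting a normative thickness map from this result gives the deviation map that is then registered onto the fundus photograph with keypoint matching.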
APA, Harvard, Vancouver, ISO, and other styles
39

Avilés-Rodríguez, Gener José, Juan Iván Nieto-Hipólito, María de los Ángeles Cosío-León, Gerardo Salvador Romo-Cárdenas, Juan de Dios Sánchez-López, Patricia Radilla-Chávez, and Mabel Vázquez-Briseño. "Topological Data Analysis for Eye Fundus Image Quality Assessment." Diagnostics 11, no. 8 (July 23, 2021): 1322. http://dx.doi.org/10.3390/diagnostics11081322.

Full text
Abstract:
The objective of this work is to perform image quality assessment (IQA) of eye fundus images in the context of digital fundoscopy with topological data analysis (TDA) and machine learning methods. Eye health remains inaccessible to a large part of the global population. Digital tools that automate the eye exam could be used to address this issue. IQA is a fundamental step in digital fundoscopy for clinical applications; it is one of the first steps in the preprocessing stages of computer-aided diagnosis (CAD) systems using eye fundus images. Images from the EyePACS dataset were used, and quality labels from previous works in the literature were selected. Cubical complexes were used to represent the images; the grayscale version was then used to calculate persistent homology on the complex, represented with persistence diagrams. Then, 30 vectorized topological descriptors were calculated from each image and used as input to a classification algorithm. Six different algorithms were tested for this study (SVM, decision tree, k-NN, random forest, logistic regression (LoGit), MLP). LoGit was selected and used for the classification of all images, given its low computational cost. Performance results on the validation subset showed a global accuracy of 0.932, precision of 0.912 for the label “quality” and 0.952 for the label “no quality”, recall of 0.932 for the label “quality” and 0.912 for the label “no quality”, an AUC of 0.980, an F1 score of 0.932, and a Matthews correlation coefficient of 0.864. This work offers evidence for the use of topological methods in the quality assessment of eye fundus images, where a relatively small vector of characteristics (30 in this case) can enclose enough information for an algorithm to yield classification results useful in the clinical settings of a digital fundoscopy pipeline for CAD.
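A 0-dimensional flavor of the filtration idea behind cubical persistent homology is to count connected components of the sublevel sets of the grayscale image at increasing thresholds. The sketch below is only in the spirit of the paper's method; it is not the paper's 30 persistence-diagram descriptors:

```python
import numpy as np
from scipy.ndimage import label

def betti0_curve(gray, thresholds):
    """Number of connected components of each sublevel set
    {pixels <= t}: a simple 0-dimensional topological summary
    of a grayscale image as the filtration threshold grows."""
    return np.array([label(gray <= t)[1] for t in thresholds])
```

As the threshold sweeps upward, dark structures appear as separate components and then merge; the births and deaths of these components are exactly what a 0-dimensional persistence diagram records.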
APA, Harvard, Vancouver, ISO, and other styles
40

Aziz, Tamoor, Ademola E. Ilesanmi, and Chalie Charoenlarpnopparut. "Efficient and Accurate Hemorrhages Detection in Retinal Fundus Images Using Smart Window Features." Applied Sciences 11, no. 14 (July 10, 2021): 6391. http://dx.doi.org/10.3390/app11146391.

Full text
Abstract:
Diabetic retinopathy (DR) is one of the diseases that cause blindness globally. Untreated accumulation of fat and cholesterol may trigger atherosclerosis in the diabetic patient, which may obstruct blood vessels. Retinal fundus images are used as diagnostic tools to screen abnormalities linked to diseases that affect the eye. Blurriness and low contrast are major problems when segmenting retinal fundus images. This article proposes an algorithm to segment and detect hemorrhages in retinal fundus images. The proposed method first performs preprocessing on retinal fundus images. Then a novel smart-windowing-based adaptive threshold is utilized to segment hemorrhages. Finally, conventional and hand-crafted features are extracted from each candidate and classified by a support vector machine. Two datasets are used to evaluate the algorithm. Precision rate (P), recall rate (R), and F1 score are used for quantitative evaluation of the segmentation method. Mean square error, peak signal-to-noise ratio, information entropy, and contrast are also used to evaluate the preprocessing method. The proposed method achieves a high F1 score of 83.85% on the DIARETDB1 image dataset and 72.25% on the DIARETDB0 image dataset. The proposed algorithm adapts well compared with conventional algorithms, and hence can act as a tool for segmentation.
APA, Harvard, Vancouver, ISO, and other styles
41

LIAPIS (Ι. Κ. ΛΙΑΠΗΣ), I. K. "Normal eye fundus in dog and cat." Journal of the Hellenic Veterinary Medical Society 52, no. 3 (January 31, 2018): 198. http://dx.doi.org/10.12681/jhvms.15445.

Full text
Abstract:
The term eye fundus is clinical and indicates the posterior part of the eye globe, which is visible during ophthalmoscopy. Mostly in dogs, but also in cats, the eye fundus presents important variability. All globe layers (the retinal, choroidal and scleral tunics) can be visualized during ophthalmoscopy. The main anatomic components of the fundus image are: a) the retinal vessels, b) the optic disc, c) the retinal pigment epithelium (invisible in albinoid animals) and d) the tapetum lucidum, which gives the metallic nuance of the fundus and can be hypoplastic or missing. The normal appearance of the fundus is completed beyond the 16th week after birth; until then the image is unclear. Careful assessment of the numerous variations of the normal eye fundus image is necessary, because many of them can be confused with pathologic conditions.
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Jianglan, Yong-Jie Li, and Kai-Fu Yang. "Retinal fundus image enhancement with image decomposition and visual adaptation." Computers in Biology and Medicine 128 (January 2021): 104116. http://dx.doi.org/10.1016/j.compbiomed.2020.104116.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Yerramsetti V Rao, Murthy V S S N, Eswari V, and Aruthra. "A significant progress of computer aided diagnosis system using automated glaucoma screening." International Journal of Research in Pharmaceutical Sciences 11, SPL4 (December 20, 2020): 243–47. http://dx.doi.org/10.26452/ijrps.v11ispl4.3778.

Full text
Abstract:
Automatic retinal image examination is a developing and significant screening tool for the initial recognition of eye diseases. This research proposes a computer-aided diagnosis framework for initial recognition of glaucoma through Cup to Disc Ratio (CDR) measurement utilizing 2D fundus images. The system uses computer-based analytical methods to process the patient data. Glaucoma is a chronic and progressive eye disease, caused by increased intraocular pressure (IOP), which damages the optic nerve (ON). Glaucoma mostly affects the optic disc (OD) by enlarging the cup size, and it might lead to blindness if not recognized early. Glaucoma detection through Heidelberg Retinal Tomography (HRT) or Optical Coherence Tomography (OCT) is more costly. This system proposes an efficient technique to recognize glaucoma utilizing 2D fundus images. Manual analysis of the OD is a standard process used for identifying glaucoma. In this manuscript, we propose an automatic OD parameterization method based on the segmented OD and optic cup (OC) regions obtained from fundus retinal images. To automatically extract the OD and optic cup, we used the K-means clustering technique, the SLIC (simple linear iterative clustering) method, Gabor filters and thresholding. Ellipse fitting (EF) is applied to the obtained image to reshape the extracted disc and cup boundaries. We also propose a novel method which automatically calculates the CDR from non-stereographic fundus camera images. The CDR is an initial clinical indicator for glaucoma assessment. We have also calculated the OD and cup areas. These features are validated by classifying the image of a given patient as either normal or glaucomatous.
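Given binary masks for the segmented disc and cup, the vertical CDR is simply the ratio of their vertical diameters; a minimal sketch (the 0.6 screening cut-off is a commonly quoted value, assumed here rather than taken from the paper):

```python
import numpy as np

def vertical_diameter(mask):
    """Vertical extent (in pixels) of a binary region mask."""
    rows = np.where(mask.any(axis=1))[0]
    return 0 if rows.size == 0 else int(rows[-1] - rows[0] + 1)

def cup_to_disc_ratio(cup_mask, disc_mask):
    # Vertical CDR: ratio of cup to disc vertical diameters.
    return vertical_diameter(cup_mask) / vertical_diameter(disc_mask)

def is_glaucoma_suspect(cdr, threshold=0.6):
    # An assumed screening cut-off, for illustration only.
    return cdr > threshold
```

The same masks also give the OD and cup areas mentioned in the abstract (simply `mask.sum()` times the pixel area).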
APA, Harvard, Vancouver, ISO, and other styles
44

Diwakaran and S. Sheeba Jeya Sophia. "Survey on Automatic Detection of Glaucoma through Deep Learning Using Retinal Fundus Images." Journal of Biomedical Engineering and Medical Imaging 7, no. 4 (August 1, 2020): 11–15. http://dx.doi.org/10.14738/jbemi.74.8055.

Full text
Abstract:
Glaucoma is a disease that damages the eye's optic nerve and eventually blinds the vision. It occurs when increased intraocular pressure (IOP) damages the optic nerve axons at the back of the eye, with eventual deterioration of vision. Many works have addressed automatic glaucoma detection from fundus images (FI) by extracting structural features. Structural features can be extracted from optic nerve head (ONH) analysis, the cup-to-disc ratio (CDR), and the Inferior, Superior, Nasal, Temporal (ISNT) rule in fundus images for glaucoma assessment. This survey presents various techniques for the early detection of glaucoma. Among these, retinal image-based detection plays a major role because it is a non-invasive method of detection. Here, a review and study were conducted on the different techniques for glaucoma detection using retinal fundus images. The objective of this survey is to identify a technique that automatically analyzes retinal images of the eye with high efficiency and accuracy.
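The ISNT rule the survey mentions is simple enough to state in code: in a healthy disc the neuroretinal rim is thickest Inferiorly, then Superiorly, Nasally, and Temporally. A minimal sketch, assuming the four rim widths have already been measured (units are irrelevant as long as they are consistent):

```python
def obeys_isnt(inferior, superior, nasal, temporal):
    """ISNT rule check on neuroretinal rim widths.

    Returns True when the rim widths follow the healthy ordering
    inferior >= superior >= nasal >= temporal; a violation is a
    structural cue for glaucoma assessment.
    """
    return inferior >= superior >= nasal >= temporal

healthy = obeys_isnt(2.0, 1.8, 1.5, 1.2)      # True
suspect = obeys_isnt(1.0, 1.8, 1.5, 1.2)      # False: inferior rim thinned
```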
APA, Harvard, Vancouver, ISO, and other styles
45

Russell, Greg, Silvia N. W. Hertzberg, Natalia Anisimova, Natalia Gavrilova, Beáta É. Petrovski, and Goran Petrovski. "Digital Image Analysis of the Angle and Optic Nerve: A Simple, Fast, and Low-Cost Method for Glaucoma Assessment." Journal of Ophthalmology 2020 (October 28, 2020): 1–8. http://dx.doi.org/10.1155/2020/3595610.

Full text
Abstract:
Purpose. To devise a simple, fast, and low-cost method for glaucoma assessment using digital image analysis of the angle and optic nerve in human subjects. Methods. Images from glaucoma and fundus assessment were used in this study, including color fundus photographs, standard optic nerve optical coherence tomography (OCT), and digital slit-lamp images of the angle/gonioscopy. Digital image conversion and analysis of the angle were implemented using ImageJ (NIH, USA) with contrast-limited adaptive histogram equalization (CLAHE) to prevent noise amplification. Angle and optic nerve images were analyzed separately in the red, green, and blue (RGB) channels, followed by 3D volumetric analysis of the degrees of angle depth and the cup volume of the optic nerve. Horizontal tomogram reconstitution and nerve fiber detection methods were developed and compared to standard OCT images. Results. Digital slit-lamp angle images showed accuracy similar to standard anterior OCT measurements. Comparative analysis of the RGB channels produced a volumetric cup and a horizontal tomogram, which closely resembled the 3D OCT appearance and the B-scan of the cup, respectively. RGB channel splitting and image subtraction produced a map closely resembling the retinal nerve fiber layer (RNFL) thickness map on OCT. Conclusions. While OCT imaging is rapidly progressing in the area of optic disc and chamber angle assessment, rising healthcare costs and lack of availability of the technology create demand for alternative, cost-minimizing forms of image analysis in glaucoma. Volumetric, geometric, and segmentational data obtained through digital image analysis correspond well to those obtained by OCT imaging.
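The channel-splitting-and-subtraction step can be sketched in a few lines of numpy. This is a hypothetical reconstruction, not the authors' ImageJ workflow: the abstract does not say which channels are subtracted, so the green-minus-red choice below is an assumption (the RNFL reflects strongly in green, while red carries mostly deeper choroidal signal):

```python
import numpy as np

def rnfl_like_map(rgb):
    """Rough RNFL-style contrast map via channel splitting and
    subtraction.

    rgb: HxWx3 float array in [0, 1]. Subtracts the red channel
    from the green one and stretches the result to [0, 1] for
    display.
    """
    r, g = rgb[..., 0], rgb[..., 1]
    diff = np.clip(g - r, 0.0, 1.0)
    span = diff.max() - diff.min()
    # Contrast-stretch unless the map is flat.
    return (diff - diff.min()) / span if span > 0 else diff

# Toy demo: green-dominant pixels light up.
demo = np.zeros((2, 2, 3))
demo[..., 1] = np.array([[0.9, 0.5], [0.3, 0.1]])
m = rnfl_like_map(demo)
```

In the paper's pipeline a CLAHE pass would precede this step to even out illumination before the subtraction.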
APA, Harvard, Vancouver, ISO, and other styles
46

Radfar, Edalat, Hyunseon Yu, Tien Son Ho, Seung Ho Choi, and Byungjo Jung. "Depth perception on fundus images using a single-channel stereomicroscopy." Journal of Innovative Optical Health Sciences 14, no. 03 (March 29, 2021): 2150012. http://dx.doi.org/10.1142/s1793545821500127.

Full text
Abstract:
Typical fundus photography produces a two-dimensional image, which makes it difficult to observe microvascular and neural abnormalities because depth information is missing. To provide depth perception, we develop a single-channel stereoscopic fundus video imaging system based on a rotating refractor. With respect to the pupil center, the rotating refractor laterally displaces the optical path and the illumination, allowing standard monocular fundus cameras to generate stereo parallax and image disparity through sequential image acquisition. We optimize our imaging system, characterize the stereo base, and image an eyeball model and a rabbit eye. When virtual reality is considered, our imaging system can be a simple yet efficient technique for providing depth perception in a virtual space that allows users to perceive abnormalities in the eye fundus.
APA, Harvard, Vancouver, ISO, and other styles
47

Vonghirandecha, P., P. Bhurayanontachai, S. Kansomkeat, and S. Intajag. "No-Reference Retinal Image Sharpness Metric Using Daubechies Wavelet Transform." International Journal of Circuits, Systems and Signal Processing 15 (August 26, 2021): 1064–71. http://dx.doi.org/10.46300/9106.2021.15.115.

Full text
Abstract:
Retinal fundus images are increasingly used by ophthalmologists, both manually and without human intervention, for detecting ocular diseases. Poor-quality retinal images can lead to misdiagnosis or delayed treatment. Hence, a picture quality index is a crucial measure to ensure that images obtained from the acquisition system are suitable for reliable medical diagnosis. In this paper, a no-reference retinal image quality assessment based on the wavelet transform is presented. A multiresolution Daubechies (db2) wavelet at level 4 was employed to decompose an original image into detail and approximation sub-bands for extracting sharpness information. The sharpness quality index was calculated from the entropy of the sub-bands. The proposed measure was validated using images from the High-Resolution Fundus (HRF) dataset. The experimental results show that the proposed index is more consistent with human visual perception and outperforms the method of Abdel-Hamid et al.
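The core idea, entropy of the wavelet detail sub-bands as a sharpness score, can be illustrated with a one-level Haar transform. This is a simplified stand-in, not the paper's metric: the authors use a 4-level db2 decomposition (e.g. via PyWavelets), while Haar keeps the sketch dependency-free; the 64-bin histogram over [0, 1] is also an assumption:

```python
import numpy as np

def haar_sharpness(gray):
    """Entropy of the Haar detail sub-bands as a sharpness score.

    gray: 2D float array in [0, 1] with even dimensions. Sharper
    images spread energy across more detail-coefficient magnitudes,
    which raises the Shannon entropy of their histogram.
    """
    a = gray[0::2, 0::2]; b = gray[0::2, 1::2]
    c = gray[1::2, 0::2]; d = gray[1::2, 1::2]
    lh = ((a + b) - (c + d)) / 2.0   # horizontal detail
    hl = ((a + c) - (b + d)) / 2.0   # vertical detail
    hh = ((a + d) - (b + c)) / 2.0   # diagonal detail
    details = np.abs(np.concatenate([lh.ravel(), hl.ravel(), hh.ravel()]))
    hist, _ = np.histogram(details, bins=64, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())   # Shannon entropy in bits

# A noisy (sharp) image should score higher than its blurred copy.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0)
           + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 5.0
score_sharp = haar_sharpness(sharp)
score_blurred = haar_sharpness(blurred)
```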
APA, Harvard, Vancouver, ISO, and other styles
48

Shen, Yaxin, Bin Sheng, Ruogu Fang, Huating Li, Ling Dai, Skylar Stolte, Jing Qin, Weiping Jia, and Dinggang Shen. "Domain-invariant interpretable fundus image quality assessment." Medical Image Analysis 61 (April 2020): 101654. http://dx.doi.org/10.1016/j.media.2020.101654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Fredj, Amira Hadj, Mariem Ben Abdallah, Jihene Malek, and Ahmad Taher Azar. "Fundus image denoising using FPGA hardware architecture." International Journal of Computer Applications in Technology 54, no. 1 (2016): 1. http://dx.doi.org/10.1504/ijcat.2016.077791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Ramli, Roziana, Mohd Yamani Idna Idris, Khairunnisa Hasikin, Noor Khairiah A. Karim, Ainuddin Wahid Abdul Wahab, Ismail Ahmedy, Fatimah Ahmedy, and Hamzah Arof. "Local descriptor for retinal fundus image registration." IET Computer Vision 14, no. 4 (April 29, 2020): 144–53. http://dx.doi.org/10.1049/iet-cvi.2019.0623.

Full text
APA, Harvard, Vancouver, ISO, and other styles