Academic literature on the topic 'Scale-Invariant-Feature-Transform (SIFT)'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Scale-Invariant-Feature-Transform (SIFT).'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Scale-Invariant-Feature-Transform (SIFT)"
B. Daneshvar, M. "SCALE INVARIANT FEATURE TRANSFORM PLUS HUE FEATURE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (August 23, 2017): 27–32. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-27-2017.
Cheung, W., and G. Hamarneh. "n-SIFT: n-Dimensional Scale Invariant Feature Transform." IEEE Transactions on Image Processing 18, no. 9 (September 2009): 2012–21. http://dx.doi.org/10.1109/tip.2009.2024578.
Taha, Mohammed A., Hanaa M. Ahmed, and Saif O. Husain. "Iris Features Extraction and Recognition based on the Scale Invariant Feature Transform (SIFT)." Webology 19, no. 1 (January 20, 2022): 171–84. http://dx.doi.org/10.14704/web/v19i1/web19013.
Wu, Shu Guang, Shu He, and Xia Yang. "The Application of SIFT Method towards Image Registration." Advanced Materials Research 1044-1045 (October 2014): 1392–96. http://dx.doi.org/10.4028/www.scientific.net/amr.1044-1045.1392.
Kalaiselvi, A., Sangeetha V., and Kasiselvanathan M. "Palm Pattern Recognition using Scale Invariant Feature Transform (SIFT)." International Journal of Intelligence and Sustainable Computing 1, no. 1 (2018): 1. http://dx.doi.org/10.1504/ijisc.2018.10023048.
Azeem, A., M. Sharif, J. H. Shah, and M. Raza. "Hexagonal scale invariant feature transform (H-SIFT) for facial feature extraction." Journal of Applied Research and Technology 13, no. 3 (June 2015): 402–8. http://dx.doi.org/10.1016/j.jart.2015.07.006.
Qu, Zhong, and Zheng Yong Wang. "The Improved Algorithm of Scale Invariant Feature Transform on Palmprint Recognition." Advanced Materials Research 186 (January 2011): 565–69. http://dx.doi.org/10.4028/www.scientific.net/amr.186.565.
Tao, Yuehua, Youming Xia, Tianwei Xu, and Xiaoxiao Chi. "Research Progress of the Scale Invariant Feature Transform (SIFT) Descriptors." Journal of Convergence Information Technology 5, no. 1 (February 28, 2010): 116–21. http://dx.doi.org/10.4156/jcit.vol5.issue1.13.
Wulandari, Irma. "FUSI CITRA DENGAN SCALE INVARIANT FEATURE TRANSFORM (SIFT) SEBAGAI REGISTRASI CITRA." Jurnal Ilmiah Informatika Komputer 25, no. 2 (2020): 137–46. http://dx.doi.org/10.35760/ik.2020.v25i2.2870.
Ariel, Muhammad Baresi, Ratri Dwi Atmaja, and Azizah Azizah. "Implementasi Metode Speed Up Robust Feature dan Scale Invariant Feature Transform untuk Identifikasi Telapak Kaki Individu." JURNAL Al-AZHAR INDONESIA SERI SAINS DAN TEKNOLOGI 3, no. 4 (December 28, 2017): 178. http://dx.doi.org/10.36722/sst.v3i4.232.
Dissertations / Theses on the topic "Scale-Invariant-Feature-Transform (SIFT)"
Decombas, Marc. "Compression vidéo très bas débit par analyse du contenu." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0067/document.
The objective of this thesis is to find new methods for semantic video compression compatible with a traditional encoder like H.264/AVC. The main objective is to maintain the semantics rather than the global quality. A target bitrate of 300 kb/s has been fixed for defense and security applications. To that end, a complete compression chain has been proposed. A study and new contributions on a spatio-temporal saliency model have been carried out to extract the important information in the scene. To reduce the bitrate, a resizing method named seam carving has been combined with the H.264/AVC encoder. In addition, a metric combining SIFT points and SSIM has been created to measure the quality of objects without being disturbed by less important areas containing mostly artifacts. A database that can be used for testing the saliency model as well as video compression has been proposed, containing sequences with their manually extracted binary masks. All the different approaches have been thoroughly validated by different tests. An extension of this work to a video summary application has also been proposed.
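To illustrate the kind of object-quality metric described in this abstract, the following is a minimal Python sketch combining a SIFT keypoint-survival ratio with SSIM. The equal 0.5/0.5 weighting, the Lowe ratio test, and the use of OpenCV and scikit-image are assumptions made for illustration; this is not the metric actually used in the thesis.

```python
# Hypothetical sketch: score how well salient content survives compression
# by combining a SIFT keypoint-survival ratio with SSIM. The weighting and
# the libraries used are illustrative assumptions, not the thesis's metric.
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

def sift_ssim_score(reference, compressed, ratio_thresh=0.75):
    """Return a score in [0, 1]; higher means objects are better preserved."""
    gray_ref = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    gray_cmp = cv2.cvtColor(compressed, cv2.COLOR_BGR2GRAY)

    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(gray_ref, None)
    kp_cmp, des_cmp = sift.detectAndCompute(gray_cmp, None)
    if des_ref is None or des_cmp is None:
        return 0.0

    # Lowe's ratio test: keep a match only if it is clearly better than
    # the second-best candidate.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_ref, des_cmp, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio_thresh * p[1].distance]
    keypoint_survival = len(good) / max(len(kp_ref), 1)

    # Global structural similarity between the two frames.
    structural = ssim(gray_ref, gray_cmp)

    return 0.5 * keypoint_survival + 0.5 * structural
```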
May, Michael. "Data analytics and methods for improved feature selection and matching." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/data-analytics-and-methods-for-improved-feature-selection-and-matching(965ded10-e3a0-4ed5-8145-2af7a8b5e35d).html.
Murtin, Chloé Isabelle. "Traitement d’images de microscopie confocale 3D haute résolution du cerveau de la mouche Drosophile." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI081/document.
Although laser scanning microscopy is a powerful tool for obtaining thin optical sections, the possible imaging depth is limited by the working distance of the microscope objective and by the image degradation caused by the attenuation of both the excitation laser beam and the light emitted from the fluorescence-labeled objects. Several workaround techniques have been employed to overcome this problem, such as recording images from both sides of the sample or progressively cutting off the sample surface. The different views must then be combined into a single volume. A straightforward concatenation is often not possible, however, because of the small shifts that occur during the acquisition procedure, not only translations along the x, y and z axes but also rotations around those axes, which make the fusion difficult. To address this problem we implemented a new algorithm called 2D-SIFT-in-3D-Space, which uses SIFT (Scale Invariant Feature Transform) to achieve a robust registration of large image stacks. Our method registers the images by correcting rotations and translations around the three axes separately, using the extraction and matching of stable features in 2D cross-sections. To evaluate the registration quality, we created a simulator that generates artificial images mimicking laser scanning image stacks: a mock pair of stacks derived from the same volume, one of which is rotated by known angles and degraded with a known noise. For a precise and natural-looking concatenation of the two images, we also developed a module that progressively corrects the sample brightness and contrast depending on the sample surface. We successfully used those tools to generate three-dimensional high-resolution images of the brain of the fly Drosophila melanogaster, in particular its octopaminergic and dopaminergic neurons and their synapses. These monoamine neurons appear to be crucial to the correct operation of the central nervous system, and a precise, systematic analysis of their evolution and interactions is necessary to understand its mechanisms. Although no evolution over time could be highlighted through the analysis of pre-synaptic sites, our study suggests that the inactivation of one of these neuron types triggers drastic changes in the neural network.
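As a rough illustration of the slice-wise idea described above (matching SIFT keypoints between corresponding 2D cross-sections and estimating an in-plane transform), here is a minimal Python/OpenCV sketch. The single-slice simplification, the similarity-transform model, and the parameter values are assumptions; the actual 2D-SIFT-in-3D-Space algorithm handles rotations and translations around all three axes.

```python
# Hypothetical sketch of slice-wise registration: match SIFT keypoints on a
# pair of corresponding 2D cross-sections (8-bit grayscale) and estimate an
# in-plane similarity transform (rotation, translation, uniform scale) with
# RANSAC. A simplified illustration, not the 2D-SIFT-in-3D-Space algorithm.
import cv2
import numpy as np

def register_cross_sections(slice_a, slice_b, ratio_thresh=0.7):
    """Return a 2x3 transform mapping slice_a onto slice_b, or None."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(slice_a, None)
    kp_b, des_b = sift.detectAndCompute(slice_b, None)
    if des_a is None or des_b is None:
        return None

    # Lowe's ratio test keeps only stable, unambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio_thresh * p[1].distance]
    if len(good) < 3:
        return None

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])

    # RANSAC rejects remaining mismatches while fitting the transform.
    transform, _ = cv2.estimateAffinePartial2D(
        pts_a, pts_b, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    return transform
```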
Dellinger, Flora. "Descripteurs locaux pour l'imagerie radar et applications." Thesis, Paris, ENST, 2014. http://www.theses.fr/2014ENST0037/document.
We study here the interest of local features for optical and SAR images. These features, because of their invariances and their dense representation, offer a real interest for the comparison of satellite images acquired under different conditions. While it is easy to apply them to optical images, they offer limited performance on SAR images because of their multiplicative noise. We propose here an original feature for the comparison of SAR images. This algorithm, called SAR-SIFT, relies on the same structure as the SIFT algorithm (detection of keypoints and extraction of features) and offers better performance on SAR images. To adapt these steps to multiplicative noise, we have developed a differential operator, the Gradient by Ratio, allowing the computation of a gradient magnitude and orientation that are robust to this type of noise. This operator allows us to modify the steps of the SIFT algorithm. We also present two remote sensing applications based on local features. First, we estimate a global transformation between two SAR images with the help of SAR-SIFT. The estimation is carried out with a RANSAC algorithm, using the matched keypoints as tie points. Finally, we have conducted a prospective study on the use of local features for change detection in remote sensing. The proposed method consists in comparing the densities of matched keypoints to the densities of detected keypoints, in order to point out changed areas.
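As a rough illustration of the Gradient by Ratio idea described above (a log-ratio of local means on opposite sides of each pixel, which is stable under multiplicative speckle), here is a minimal Python sketch. The box-filter averaging and the window radius are simplifying assumptions; the published SAR-SIFT operator uses exponentially weighted local means.

```python
# Hypothetical sketch of a ratio-based gradient for SAR images: compare the
# mean intensities on opposite sides of each pixel through a log-ratio, which
# is insensitive to multiplicative speckle noise. Box-filter averaging and the
# window radius are simplifications of the published Gradient by Ratio.
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_by_ratio(image, radius=4, eps=1e-8):
    """Return (magnitude, orientation) of a ratio-based gradient."""
    img = image.astype(np.float64) + eps  # avoid division by, or log of, zero

    # Mean intensity over a (2*radius + 1) square window around every pixel,
    # then shifted so the means lie on opposite sides of the central pixel.
    local_mean = uniform_filter(img, size=2 * radius + 1)
    right = np.roll(local_mean, -radius, axis=1)
    left = np.roll(local_mean, radius, axis=1)
    below = np.roll(local_mean, -radius, axis=0)
    above = np.roll(local_mean, radius, axis=0)

    # Log-ratios of opposite-side means play the role of derivatives.
    gx = np.log(right / left)
    gy = np.log(below / above)

    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)
    return magnitude, orientation
```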
Dardas, Nasser Hasan Abdel-Qader. "Real-time Hand Gesture Detection and Recognition for Human Computer Interaction." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23499.
Full textDecombas, Marc. "Compression vidéo très bas débit par analyse du contenu." Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0067.
Full textThe objective of this thesis is to find new methods for semantic video compatible with a traditional encoder like H.264/AVC. The main objective is to maintain the semantic and not the global quality. A target bitrate of 300 Kb/s has been fixed for defense and security applications. To do that, a complete chain of compression has been proposed. A study and new contributions on a spatio-temporal saliency model have been done to extract the important information in the scene. To reduce the bitrate, a resizing method named seam carving has been combined with the H.264/AVC encoder. Also, a metric combining SIFT points and SSIM has been created to measure the quality of objects without being disturbed by less important areas containing mostly artifacts. A database that can be used for testing the saliency model but also for video compression has been proposed, containing sequences with their manually extracted binary masks. All the different approaches have been thoroughly validated by different tests. An extension of this work on video summary application has also been proposed
Dellinger, Flora. "Descripteurs locaux pour l'imagerie radar et applications." Electronic Thesis or Diss., Paris, ENST, 2014. http://www.theses.fr/2014ENST0037.
Full textWe study here the interest of local features for optical and SAR images. These features, because of their invariances and their dense representation, offer a real interest for the comparison of satellite images acquired under different conditions. While it is easy to apply them to optical images, they offer limited performances on SAR images, because of their multiplicative noise. We propose here an original feature for the comparison of SAR images. This algorithm, called SAR-SIFT, relies on the same structure as the SIFT algorithm (detection of keypoints and extraction of features) and offers better performances for SAR images. To adapt these steps to multiplicative noise, we have developed a differential operator, the Gradient by Ratio, allowing to compute a magnitude and an orientation of the gradient robust to this type of noise. This operator allows us to modify the steps of the SIFT algorithm. We present also two applications for remote sensing based on local features. First, we estimate a global transformation between two SAR images with help of SAR-SIFT. The estimation is realized with help of a RANSAC algorithm and by using the matched keypoints as tie points. Finally, we have led a prospective study on the use of local features for change detection in remote sensing. The proposed method consists in comparing the densities of matched keypoints to the densities of detected keypoints, in order to point out changed areas
Leoputra, Wilson Suryajaya. "Video foreground extraction for mobile camera platforms." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/1384.
Full textHejl, Zdeněk. "Rekonstrukce 3D scény z obrazových dat." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236495.
Full textSaravi, Sara. "Use of Coherent Point Drift in computer vision applications." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/12548.
Book chapters on the topic "Scale-Invariant-Feature-Transform (SIFT)"
Burger, Wilhelm, and Mark J. Burge. "Scale-Invariant Feature Transform (SIFT)." In Texts in Computer Science, 609–64. London: Springer London, 2016. http://dx.doi.org/10.1007/978-1-4471-6684-9_25.
Burger, Wilhelm, and Mark J. Burge. "Scale-Invariant Feature Transform (SIFT)." In Texts in Computer Science, 709–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05744-1_25.
Yang, Donglei, Lili Liu, Feiwen Zhu, and Weihua Zhang. "A Parallel Analysis on Scale Invariant Feature Transform (SIFT) Algorithm." In Lecture Notes in Computer Science, 98–111. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24151-2_8.
Maestas, Dominic R., Ron Lumia, Gregory Starr, and John Wood. "Scale Invariant Feature Transform (SIFT) Parametric Optimization Using Taguchi Design of Experiments." In Intelligent Robotics and Applications, 630–41. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16584-9_61.
Shekar, B. H., M. Sharmila Kumari, Leonid M. Mestetskiy, and Natalia Dyshkant. "FLD-SIFT: Class Based Scale Invariant Feature Transform for Accurate Classification of Faces." In Computer Networks and Information Technologies, 15–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19542-6_3.
Chowdhary, Chiranji Lal. "Application of Object Recognition With Shape-Index Identification and 2D Scale Invariant Feature Transform for Key-Point Detection." In Feature Dimension Reduction for Content-Based Image Identification, 218–31. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5775-3.ch012.
Park, Jae-Han, Kyung-Wook Park, Seung-Ho Baeg, and Moon-Hong Baeg. "pi-SIFT: A Photometric and Scale Invariant Feature Transform." In Pattern Recognition Recent Advances. InTech, 2010. http://dx.doi.org/10.5772/9346.
Govindarajan, Satyavratan, and Ramakrishnan Swaminathan. "Performance of SURF and SIFT Keypoints for the Automated Differentiation of Abnormality in Chest Radiographs." In Studies in Health Technology and Informatics. IOS Press, 2021. http://dx.doi.org/10.3233/shti210219.
Das, Tapan Kumar. "Logo Matching and Recognition Based on Context." In Feature Dimension Reduction for Content-Based Image Identification, 164–76. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5775-3.ch009.
Salahat, Ehab Najeh, and Murad Qasaimeh. "Recent Advances in Feature Extraction and Description Algorithms." In Computer Vision, 27–57. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5204-8.ch002.
Conference papers on the topic "Scale-Invariant-Feature-Transform (SIFT)"
Park, Jae-Han, Kyung-Wook Park, Seung-Ho Baeg, and Moon-Hong Baeg. "π-SIFT: A photometric and Scale Invariant Feature Transform." In 2008 19th International Conference on Pattern Recognition (ICPR). IEEE, 2008. http://dx.doi.org/10.1109/icpr.2008.4761181.
Kaduhm, Haider S., and Hameed M. Abduljabbar. "Texture image classification using scale invariant feature transform (SIFT) method." In TECHNOLOGIES AND MATERIALS FOR RENEWABLE ENERGY, ENVIRONMENT AND SUSTAINABILITY: TMREES22Fr. AIP Publishing, 2023. http://dx.doi.org/10.1063/5.0129552.
Qasaimeh, Murad, Assim Sagahyroon, and Tamer Shanableh. "A parallel hardware architecture for Scale Invariant Feature Transform (SIFT)." In 2014 International Conference on Multimedia Computing and Systems (ICMCS). IEEE, 2014. http://dx.doi.org/10.1109/icmcs.2014.6911251.
Zhang, Guimei, Binbin Chen, and YangQuan Chen. "Research on Image Matching Combining on Fractional Differential With Scale Invariant Feature Transform." In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-47015.
Cheung, Warren, and Ghassan Hamarneh. "N-SIFT: N-DIMENSIONAL SCALE INVARIANT FEATURE TRANSFORM FOR MATCHING MEDICAL IMAGES." In 2007 4th IEEE International Symposium on Biomedical Imaging: From Nano to Macro. IEEE, 2007. http://dx.doi.org/10.1109/isbi.2007.356953.
Hermansyah, Adi, Arif Nugroho, Arief Kurniawan, Supeno Mardi Susiki Nugroho, and Eko Mulyanto Yuniarno. "Panoramic of Image Reconstruction Based on Geospatial Data using SIFT (Scale Invariant Feature Transform)." In 2019 International Seminar on Intelligent Technology and Its Applications (ISITIA). IEEE, 2019. http://dx.doi.org/10.1109/isitia.2019.8937152.
Rahman, Aviv Yuniar, Surya Sumpeno, and Mauridhi Hery Purnomo. "Arca Detection and Matching Using Scale Invariant Feature Transform (SIFT) Method of Stereo Camera." In 2017 International Conference on Soft Computing, Intelligent System and Information Technology (ICSIIT). IEEE, 2017. http://dx.doi.org/10.1109/icsiit.2017.45.
Widyastuti, Rifka, and Chuan-Kai Yang. "Cat’s Nose Recognition Using You Only Look Once (Yolo) and Scale-Invariant Feature Transform (SIFT)." In 2018 IEEE 7th Global Conference on Consumer Electronics (GCCE). IEEE, 2018. http://dx.doi.org/10.1109/gcce.2018.8574870.
Che Hussin, Nuril Aslina, Nursuriati Jamil, Sharifalillah Nordin, and Khalil Awang. "Plant species identification by using Scale Invariant Feature Transform (SIFT) and Grid Based Colour Moment (GBCM)." In 2013 IEEE Conference on Open Systems (ICOS). IEEE, 2013. http://dx.doi.org/10.1109/icos.2013.6735079.
Sumiharto, Raden, Ristya Ginanjar Putra, and Samuel Demetouw. "Methods for Determining Nitrogen, Phosphorus, and Potassium (NPK) Nutrient Content Using Scale-Invariant Feature Transform (SIFT)." In 2020 8th International Conference on Information and Communication Technology (ICoICT). IEEE, 2020. http://dx.doi.org/10.1109/icoict49345.2020.9166292.