Academic literature on the topic 'Rotation invariant feature'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Rotation invariant feature.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Rotation invariant feature"

1

Lai, Yi Qiang. "Rotation Moment Invariant Feature Extraction Techniques for Image Matching." Applied Mechanics and Materials 721 (December 2014): 775–78. http://dx.doi.org/10.4028/www.scientific.net/amm.721.775.

Abstract:
In recent years, the extraction of invariant image features has gained increasing attention in the image matching field. Various methods have been used to match images successfully in a number of applications, but in most of the literature the rotation-invariant properties of these moments have not been studied widely. In this paper, we present a novel method based on Polar Harmonic Transforms (PHTs), which consist of a set of orthogonal projection bases, to extract rotation-invariant moment features. The experimental results show that the kernel computation of PHTs is simple and that image features are extracted accurately for image matching. Hence, polar harmonic transforms provide a powerful tool for image matching.
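To make the idea concrete, the following is a minimal NumPy sketch of one member of the PHT family, the polar complex exponential transform, whose moment magnitudes are rotation invariant; it is an illustrative simplification under common conventions, not the implementation evaluated in the paper, and the function names are placeholders.

import numpy as np

def pcet_moments(img, max_order=3):
    """Polar Complex Exponential Transform moments (one member of the PHT family).

    Returns complex moments M[(n, l)] over the unit disk inscribed in the
    image; the magnitudes |M[(n, l)]| are rotation invariant, which is the
    property exploited for matching.
    """
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    # Map pixel coordinates into the unit disk centred on the image.
    xn = (2 * x - w + 1) / w
    yn = (2 * y - h + 1) / h
    r = np.hypot(xn, yn)
    theta = np.arctan2(yn, xn)
    f = img.astype(float) * (r <= 1.0)
    moments = {}
    for n in range(max_order + 1):
        radial = np.exp(-2j * np.pi * n * r ** 2)      # radial kernel
        for l in range(-max_order, max_order + 1):
            angular = np.exp(-1j * l * theta)          # angular kernel
            # Discrete approximation of (1/pi) * integral over the unit disk.
            moments[(n, l)] = (f * radial * angular).sum() * 4.0 / (np.pi * h * w)
    return moments

def rotation_invariant_features(img, max_order=3):
    m = pcet_moments(img, max_order)
    return np.array([abs(m[k]) for k in sorted(m)])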
2

Liu, Yilin, Xuqiang Shao, and Zhaohui Wu. "Rotation Invariant Predictor-Corrector for Smoothed Particle Hydrodynamics Data Visualization." Symmetry 13, no. 3 (February 26, 2021): 382. http://dx.doi.org/10.3390/sym13030382.

Abstract:
To extract vortex features more accurately, this study proposes a new method of vortex feature extraction for Smoothed Particle Hydrodynamics data that combines rotation invariance with a predictor-corrector method. The original rotation-invariance criterion is limited in that it can only extract vortex features that rotate at uniform speed. This limitation is relaxed so that rotation invariance can be applied whenever a specific axis exists in the fluid to replace the axis the criterion requires; as long as such an axis exists, the modified rotation-invariant method can be used. The vortex features are then extracted with a predictor-corrector method: by computing the cross product of the parallel vector field, seed candidates for vortex core lines are obtained, and the true seed points are selected using the rotation-invariant Jacobian. Finally, the seed point and a series of candidates produced by the predictor-corrector method are connected to draw the vortex core lines. Compared with the original method, the rotation-invariant predictor-corrector method not only broadens the scope of application but also preserves extraction accuracy. Because our method adds the step of computing the rotation-invariant Jacobian, its performance is slightly lower, but as the number of particles increases the performance gradually approaches that of the original method.
3

You, Yang, Yujing Lou, Qi Liu, Yu-Wing Tai, Lizhuang Ma, Cewu Lu, and Weiming Wang. "Pointwise Rotation-Invariant Network with Adaptive Sampling and 3D Spherical Voxel Convolution." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12717–24. http://dx.doi.org/10.1609/aaai.v34i07.6965.

Abstract:
Point cloud analysis without pose priors is very challenging in real applications, as the orientations of point clouds are often unknown. In this paper, we propose a new point-set learning framework, PRIN (Pointwise Rotation-Invariant Network), focusing on rotation-invariant feature extraction in point cloud analysis. We construct spherical signals by Density Aware Adaptive Sampling to deal with distorted point distributions in spherical space. In addition, we propose Spherical Voxel Convolution and Point Re-sampling to extract rotation-invariant features for each point. Our network can be applied to tasks ranging from object classification and part segmentation to 3D feature matching and label alignment. We show that, on datasets with randomly rotated point clouds, PRIN demonstrates better performance than state-of-the-art methods without any data augmentation. We also provide a theoretical analysis of the rotation invariance achieved by our methods.
4

Pietikäinen, M., T. Ojala, and Z. Xu. "Rotation-invariant texture classification using feature distributions." Pattern Recognition 33, no. 1 (January 2000): 43–52. http://dx.doi.org/10.1016/s0031-3203(99)00032-1.

5

Sheng, Bin, Hanqiu Sun, Shunbin Chen, Xuehui Liu, and Enhua Wu. "Colorization Using the Rotation-Invariant Feature Space." IEEE Computer Graphics and Applications 31, no. 2 (March 2011): 24–35. http://dx.doi.org/10.1109/mcg.2011.18.

6

Pun, Chi-Man. "Rotation-invariant texture feature for image retrieval." Computer Vision and Image Understanding 89, no. 1 (January 2003): 24–43. http://dx.doi.org/10.1016/s1077-3142(03)00012-2.

7

Yu, Gang, Ying Zi Lin, and Sagar Kamarthi. "Wavelets-Based Feature Extraction for Texture Classification." Advanced Materials Research 97-101 (March 2010): 1273–76. http://dx.doi.org/10.4028/www.scientific.net/amr.97-101.1273.

Abstract:
Texture classification is a necessary task in a wide variety of application areas such as manufacturing, textiles, and medicine. In this paper, we propose a novel wavelet-based feature extraction method for robust, scale-invariant and rotation-invariant texture classification. The method divides the 2-D wavelet coefficient matrices into 2-D clusters and then computes features from the energies inherent in these clusters. The features that carry the information effective for classifying texture images are computed from the energy content of the clusters, and these feature vectors are input to a neural network for texture classification. The results show that the discrimination performance obtained with the proposed cluster-based feature extraction method is superior to that of conventional feature extraction methods and remains robust under rotation and scale changes.
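For readers who want a concrete picture of such features, the snippet below computes per-subband wavelet energies with PyWavelets; the paper's method additionally partitions each coefficient matrix into clusters before taking energies, so this is only a simplified sketch of the general idea, and the function name is illustrative.

import numpy as np
import pywt  # PyWavelets

def wavelet_energy_features(img, wavelet="db2", levels=3):
    """Mean energy of each 2-D wavelet subband, a common texture signature."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    feats = [np.mean(coeffs[0] ** 2)]              # approximation energy
    for (cH, cV, cD) in coeffs[1:]:                # detail subbands per level
        for band in (cH, cV, cD):
            feats.append(np.mean(band ** 2))
    return np.array(feats)

# The resulting vectors would then be normalised and fed to a classifier
# (the paper uses a neural network).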
8

Ajayi, O. G. "Performance Analysis of Selected Feature Descriptors Used for Automatic Image Registration." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (August 21, 2020): 559–66. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-559-2020.

Abstract:
Automatic detection and extraction of corresponding features is crucial in the development of an automatic image registration algorithm. Different feature descriptors have been developed and implemented in image registration and other disciplines. These descriptors affect the speed of feature extraction and the number of extracted conjugate features, which in turn affect the processing speed and overall accuracy of the registration scheme. This article reviews the performance of the most widely implemented feature descriptors in an automatic image registration scheme. Ten (10) descriptors were selected and analysed under seven (7) conditions, namely invariance to rotation, scale and zoom, as well as robustness, repeatability, localization and efficiency, using UAV-acquired images. The analysis shows that although four (4) descriptors performed better than the other six (6), no single feature descriptor can be affirmed to be the best, as different descriptors perform differently under different conditions. The Modified Harris and Stephen Corner Detector (MHCD) proved to be invariant to scale and zoom and excellent in robustness, repeatability, localization and efficiency, but it is variant to rotation. The Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF) and Maximally Stable Extremal Regions (MSER) algorithms proved to be invariant to scale, zoom and rotation, and very good in terms of repeatability, localization and efficiency, although MSER proved not to be as robust as SIFT and SURF. The implication of these findings is that the choice of feature descriptor must be informed by the imaging conditions the image registration analyst is working with.
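The rotation part of such an evaluation can be reproduced for a single descriptor in a few lines of OpenCV; the sketch below does it for SIFT (cv2.SIFT_create is available in OpenCV 4.4 and later) and is only a rough proxy for the repeatability criterion used in the paper; the ratio threshold, rotation angle, and file name are illustrative choices.

import cv2
import numpy as np

def rotation_repeatability(img, angle=30.0, ratio=0.75):
    """Fraction of SIFT keypoints that still find a good match after the
    image is rotated in-plane, a rough repeatability proxy."""
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(img, M, (w, h))

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img, None)
    kp2, des2 = sift.detectAndCompute(rotated, None)
    if des1 is None or des2 is None:
        return 0.0

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in matches if len(p) == 2)
            if m.distance < ratio * n.distance]
    return len(good) / max(len(kp1), 1)

# img = cv2.imread("uav_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
# print(rotation_repeatability(img, angle=45))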
9

Wang, Guoli, Bin Fan, Zhili Zhou, and Chunhong Pan. "Ordinal pyramid coding for rotation invariant feature extraction." Neurocomputing 242 (June 2017): 150–60. http://dx.doi.org/10.1016/j.neucom.2017.02.071.

10

Ye, Zhang, and Qu Hongsong. "Rotation invariant feature lines transform for image matching." Journal of Electronic Imaging 23, no. 5 (September 5, 2014): 053002. http://dx.doi.org/10.1117/1.jei.23.5.053002.


Dissertations / Theses on the topic "Rotation invariant feature"

1

Mathew, Alex. "Rotation Invariant Histogram Features for Object Detection and Tracking in Aerial Imagery." University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1397662849.

2

Accordino, Andrea. "Studio e sviluppo di descrittori locali per nuvole di punti basati su proprietà geometriche." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17919/.

Abstract:
In this work, two new descriptors for point clouds are proposed: ReSHOT and KPL-Descriptor. In addition, several ideas for improving the performance of the entire feature matching pipeline were tested. The work includes a comparison with existing descriptors.
3

Hamsici, Onur C. "Bayes Optimality in Classification, Feature Extraction and Shape Analysis." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218513562.

4

Galante, Annamaria. "Studio di CNNs sferiche per l'apprendimento di descrittori locali su Point Cloud." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18680/.

Abstract:
Within computer vision, 3D computer vision is becoming increasingly important. There are many tasks and applications in 3D CV, as well as many possible data representations. Many of these tasks require finding correspondences between two or more 3D scenes/objects. These correspondences are identified through the feature matching paradigm, which consists of three steps: detection, description, and matching. The performance of the feature matching pipeline is closely tied to the techniques used in the description stage. The creation of compact, informative, rotation-invariant descriptors is a problem that is far from solved in the literature. Recently, architectures based on spherical convolutional networks have been proposed for computing global descriptors to be used in tasks such as shape classification. Thanks to their mathematical formulation, these approaches are equivariant to rotation. The aim of this thesis is to provide an overview of state-of-the-art methods and to propose an architecture based on spherical CNNs for learning a local descriptor to be used on point clouds.
5

Lin, Chun Hui, and 林俊輝. "Iris recognition based on 2D rotation invariant feature." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/85694463277256198439.

Abstract:
Master's thesis, Hsuan Chuang University, Master's Program in Information Management, academic year 103 (ROC calendar).
For iris recognition, recognition errors or a low recognition rate will result if the eye image is captured with rotation or displacement in the image plane. To deal with the problem of image rotation, this thesis combines rotation-invariant features to address the low recognition rate, using the local binary pattern (LBP), which preserves the regional characteristics of iris images. LBP is usually used to describe changes in the texture patterns of images. Its main advantages are its simple computation and its robustness to shadow effects, which make it suitable for real-time systems. Rotation invariance is obtained through a unified mapping that reduces the dimensionality of rotated iris features, and the rotation-invariant coding also reduces the degree of difference between iris features. Finally, the extracted iris features are combined with weights derived from an iris mask in order to improve the overall recognition rate.
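A rotation-invariant LBP histogram of the kind described here can be computed with scikit-image's local_binary_pattern; the sketch below shows the basic rotation-invariant uniform coding and a simple histogram distance, leaving out the iris mask weighting that the thesis adds; the function names are illustrative.

import numpy as np
from skimage.feature import local_binary_pattern

def ri_lbp_histogram(patch, P=8, R=1):
    """Rotation-invariant uniform LBP histogram of an image patch.

    The 'uniform' method maps all rotations of a neighbourhood pattern to
    the same code (P + 2 codes in total), which is what makes the
    descriptor insensitive to in-plane rotation of the texture.
    """
    codes = local_binary_pattern(patch, P, R, method="uniform")
    n_bins = P + 2
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)

def chi_square(h1, h2, eps=1e-10):
    """Simple distance for comparing two LBP histograms (iris codes)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))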
6

Chao, Y. J., and 趙永正. "Limited rotation-invariant character recognition via feature extraction." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/27387804960292874507.

Abstract:
Master's thesis, National Taiwan University of Science and Technology, Graduate Institute of Engineering Technology, academic year 82 (ROC calendar).
A method of constructing a modified circular harmonic filter using a feature extraction approach to recognize characters under limited rotation is proposed. The proposed method not only improves the recognition ability of the filter for rotated patterns, but also reduces the number of filters required. Because the construction procedure is straightforward and the required experimental equipment is limited, we expect great potential for this approach in applications. To check the validity of the method, a simulation result is also presented.
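The rotation property that circular harmonic filters exploit can be stated briefly in standard notation (not quoted from the thesis): expanding an image patch about the filter centre as

f(r, \theta) = \sum_{m=-\infty}^{\infty} f_m(r)\, e^{i m \theta}, \qquad f_m(r) = \frac{1}{2\pi} \int_0^{2\pi} f(r, \theta)\, e^{-i m \theta}\, d\theta,

a rotation of the input by \alpha gives f_\alpha(r, \theta) = f(r, \theta - \alpha) and hence f_{\alpha, m}(r) = e^{-i m \alpha} f_m(r); the correlation with a filter built from a single harmonic order m therefore changes only by a phase factor, so its magnitude is rotation invariant. This is presumably the property on which the modified filter in the thesis builds.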
7

Chen, Shih-Min, and 陳士民. "Rotation, Translation, and Scale Invariant Bag of Feature based on Feature Density." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/14962400969645161689.

Abstract:
Master's thesis, National Chung Cheng University, Graduate Institute of Computer Science and Information Engineering, academic year 103 (ROC calendar).
In human vision, people can easily recognize an object in an image regardless of its size, position, or orientation, and even against a complicated background. In computer vision, however, it is hard to achieve image recognition with such invariance. Spatial Pyramid Matching (SPM) performs very well in computer vision applications, but it still struggles when the position of the object changes within the image. In recent years researchers have sought more robust representations, for example translation-invariant, rotation-invariant, and scale-invariant features, but existing works tend to address only one of the three invariances at a time; a robust representation that handles all three simultaneously has been lacking. In this work, we aim to develop a robust feature that achieves translation, rotation, and scale invariance simultaneously. To this end we propose a novel method named Block Based Integral Image, which searches for the densest region of features, constrains the region size to a predefined size, and thereby finds the approximate center of the object in the image. We then apply SPR with the image center replaced by the approximate object center to handle translation and rotation invariance, and use histogram equalization to adjust the captured representation for scale invariance. After this adjustment, a robust representation is obtained that handles translation, rotation, and scale invariance simultaneously. Finally, we verify our system on several image classification datasets. Experimental results show that our system can indeed deal with translation, rotation, and scale invariance simultaneously and achieves higher accuracy than previous methods.
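The densest-region search at the heart of this pipeline can be sketched with an ordinary integral image over a keypoint-count map; the code below is a simplified reading of the Block Based Integral Image step (fixed block size, exhaustive slide), not the thesis implementation, and all names are illustrative.

import numpy as np

def densest_block_center(points, img_shape, block=(64, 64)):
    """Centre of the fixed-size block containing the most feature points.

    points    : iterable of (x, y) keypoint coordinates
    img_shape : (height, width) of the image
    block     : (height, width) of the search window

    Builds a keypoint-count map, turns it into an integral image, and
    slides a block over it; each block sum needs only four lookups, so
    the search cost is independent of the block size.
    """
    h, w = img_shape
    bh, bw = block
    density = np.zeros((h, w))
    for x, y in points:
        density[int(y), int(x)] += 1

    # Integral image with a zero border: ii[y, x] = sum of density[:y, :x].
    ii = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = density.cumsum(axis=0).cumsum(axis=1)

    best, best_yx = -1.0, (0, 0)
    for y in range(h - bh + 1):
        for x in range(w - bw + 1):
            s = ii[y + bh, x + bw] - ii[y, x + bw] - ii[y + bh, x] + ii[y, x]
            if s > best:
                best, best_yx = s, (y, x)
    y, x = best_yx
    return (x + bw // 2, y + bh // 2)   # approximate object centre (x, y)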
8

Lee, Hsiao-Chung, and 李孝忠. "Extraction Method of Wavelet Texture Feature with Rotation-Invariant." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/04126070244997867903.

Abstract:
Master's thesis, I-Shou University, Department of Information Engineering, academic year 92 (ROC calendar).
In the real world, every object has a specific texture. Texture, however, is an apparently paradoxical notion, and because judging texture similarity is highly subjective, textures are typically characterized by perceptual attributes such as uniformity, coarseness, roughness, regularity, linearity, directionality, direction, frequency, and phase. This thesis investigates the major classes of texture processing problems, in particular methods of extracting textural features from images and rotation-, translation- and luminance-invariant features for texture image retrieval. The features are derived from the 2-D Haar wavelet transform and applied within the MPEG-7 frequency layout for texture feature extraction. Using the proposed invariant features, the similarity measure between query and database images provides reliable retrieval results even when the lighting, three-dimensional rotation, and orientation of the images change.
9

Lin, You-Tsai, and 林猷財. "Image Rotation Angle Estimation Using Rotation Invariant Features of Zernike Moments." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/10622265211779462331.

Abstract:
Master's thesis, National Chiao Tung University, Department of Electrical and Control Engineering, academic year 89 (ROC calendar).
An approach for estimating image rotation angles is proposed. Under the basic assumption that the center of rotation is known, precise rotation angles can be estimated by comparing rotated images with the reference image. The Zernike moment algorithm, combined with subsampling and interpolation techniques, increases the effective image resolution and computes the Zernike moments more accurately while keeping the image acquisition hardware unchanged. In addition, a K-means clustering algorithm is used to select the useful entries from all the rotation angle candidates extracted via Zernike moments, and these are summed with weighting factors based on image reconstruction. From this, a high-resolution estimate of the image rotation angle is derived. Two experiments are reported, on rotated images generated by computer and on images captured by a CCD camera, respectively. The experimental results show that the proposed method performs well for image rotation angle estimation.
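The property underlying these angle candidates is the standard rotation behaviour of Zernike moments, written here in common notation rather than quoted from the thesis: if an image is rotated by \phi about the moment origin, the moment of order n with repetition m becomes

Z'_{nm} = Z_{nm}\, e^{-j m \phi},

so |Z'_{nm}| = |Z_{nm}| (the rotation-invariant feature), while each moment with m \neq 0 yields an angle candidate

\hat{\phi} = \frac{\arg Z_{nm} - \arg Z'_{nm}}{m} \pmod{\tfrac{2\pi}{|m|}},

which is why candidates from several (n, m) pairs must be clustered and weighted before a single estimate is produced.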
10

"Rotation, shift and scale invariant wavelet features for content-based image retrieval and classification." 2002. http://library.cuhk.edu.hk/record=b6073477.

Thesis (Ph.D.), Chinese University of Hong Kong, July 2002. Author: Pun Chi Man. Includes bibliographical references (p. 119-127); abstracts in English and Chinese. Electronic reproductions available via the World Wide Web (Chinese University of Hong Kong, [2012]; ProQuest Information and Learning Company, [200-]); Adobe Acrobat Reader required.

Books on the topic "Rotation invariant feature"

1

Zeitlin, Vladimir. Vortex Dynamics on the f and beta Plane and Wave Radiation by Vortices. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198804338.003.0006.

Abstract:
Since quasi-geostrophic dynamics is essentially vortex dynamics, this chapter introduces the main notions of vortex dynamics in the plane. The dynamics of vorticity is treated in both Eulerian and Lagrangian descriptions. The dynamics of point vortices and vortex patches (contour dynamics) are recalled, as are discretisations of the vorticity equation that preserve Casimir invariants, reflecting the Lagrangian conservation of vorticity. The influence of the beta effect upon vortices is illustrated, and exact modon solutions of the QG equations on the f and beta planes are constructed. Basic notions of turbulence and the specific features of two-dimensional turbulence are reviewed for future use. Lighthill radiation of gravity waves by vortices is illustrated with the example of a pair of point vortices, and the back-reaction of the radiation on the vortex system is demonstrated and analysed. The influence of rotation upon Lighthill radiation is explained. The construction of the Kirchhoff vortex solution is proposed as a problem.

Book chapters on the topic "Rotation invariant feature"

1

Ibrahim, Muhammad Talal, Yongjin Wang, Ling Guan, and A. N. Venetsanopoulos. "Fingerprint Verification Using Rotation Invariant Feature Codes." In Lecture Notes in Computer Science, 111–19. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21596-4_12.

2

Jundang, Nattapong, and Sanun Srisuk. "Rotation Invariant Texture Recognition Using Discriminant Feature Transform." In Advances in Visual Computing, 440–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33191-6_43.

3

Liu, Naidi, Yongfei Ye, Xinghua Sun, Junhua Liang, and Peng Sun. "Rotation Invariant Feature Extracting of Seal Images Based on PCNN." In Lecture Notes in Electrical Engineering, 531–40. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-0539-8_53.

4

Ersi, Ehsan Fazl, and John S. Zelek. "Rotation-Invariant Facial Feature Detection Using Gabor Wavelet and Entropy." In Lecture Notes in Computer Science, 1040–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11559573_126.

5

Al-Zubaidi, Arkan, Lei Chen, Johann Hagenah, and Alfred Mertins. "Robust Feature for Transcranial Sonography Image Classification Using Rotation-Invariant Gabor Filter." In Bildverarbeitung für die Medizin 2013, 271–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36480-8_48.

6

Yu, Ruixuan, Xin Wei, Federico Tombari, and Jian Sun. "Deep Positional and Relational Feature Learning for Rotation-Invariant Point Cloud Analysis." In Computer Vision – ECCV 2020, 217–33. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58607-2_13.

7

Chen, Yu, Fei Ren, Xiaohua Wan, Xuan Wang, and Fa Zhang. "An Improved Correlation Method Based on Rotation Invariant Feature for Automatic Particle Selection." In Bioinformatics Research and Applications, 114–25. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-08171-7_11.

8

Sarkar, Soumyajit, Jizhong Liu, and Guanghui Wang. "Biometric Analysis of Human Ear Matching Using Scale and Rotation Invariant Feature Detectors." In Lecture Notes in Computer Science, 186–93. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20801-5_20.

9

Malagi, Vindhya P., and D. R. Ramesh Babu. "Rotation-Invariant Fast Feature Based Image Registration for Motion Compensation in Aerial Image Sequences." In Proceedings of International Conference on Cognition and Recognition, 211–21. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-5146-3_20.

10

Skibbe, Henrik, Marco Reisert, Olaf Ronneberger, and Hans Burkhardt. "Increasing the Dimension of Creativity in Rotation Invariant Feature Design Using 3D Tensorial Harmonics." In Lecture Notes in Computer Science, 141–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03798-6_15.


Conference papers on the topic "Rotation invariant feature"

1

"ROTATION INVARIANT FEATURE EXTRACTION FOR WATERMARKING." In International Conference on Security and Cryptography. SciTePress - Science and and Technology Publications, 2008. http://dx.doi.org/10.5220/0001931502290235.

2

Li, Yiliang, Zhichun Mu, and Hui Zeng. "A rotation invariant feature extraction for 3D ear recognition." In 2013 25th Chinese Control and Decision Conference (CCDC). IEEE, 2013. http://dx.doi.org/10.1109/ccdc.2013.6561586.

3

Ye, Zhang, Wang Yanjie, and Qu Hongsong. "Rotation and scaling invariant feature lines for image matching." In 2011 International Conference on Mechatronic Science, Electric Engineering and Computer (MEC). IEEE, 2011. http://dx.doi.org/10.1109/mec.2011.6025667.

4

Kobayashi, Takumi, Koiti Hasida, and Nobuyuki Otsu. "Rotation invariant feature extraction from 3-D acceleration signals." In ICASSP 2011 - 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011. http://dx.doi.org/10.1109/icassp.2011.5947150.

5

Khumalo, P. P., J. R. Tapamo, and F. van den Bergh. "Rotation invariant texture feature algorithms for urban settlement classification." In IGARSS 2011 - 2011 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2011. http://dx.doi.org/10.1109/igarss.2011.6049177.

6

Chen, Shih-Min, and Chen-Kuo Chiang. "Rotation, Translation, and Scale Invariant Bag of Feature Based on Feature Density." In 2016 7th International Conference on Intelligent Systems, Modelling and Simulation (ISMS). IEEE, 2016. http://dx.doi.org/10.1109/isms.2016.12.

7

Follmann, Patrick, and Tobias Bottger. "A Rotationally-Invariant Convolution Module by Feature Map Back-Rotation." In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2018. http://dx.doi.org/10.1109/wacv.2018.00091.

8

Monika and Maroti Deshmukh. "Rotation Invariant Feature Extraction and Matching Methodology for IRIS Recognition." In 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS). IEEE, 2018. http://dx.doi.org/10.1109/iccons.2018.8663229.

9

Chen, G. Y., and W. F. Xie. "Rotation invariant feature extraction by combining denoising with Zernike moments." In 2010 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR). IEEE, 2010. http://dx.doi.org/10.1109/icwapr.2010.5576326.

10

Ramesh, B. E., B. Shadaksharappa, and Suryakanth V. Gangashetty. "Classification of Texture Rotation-Invariant in Images Using Feature Distributions." In International Conference on Computational Intelligence and Multimedia Applications (ICCIMA 2007). IEEE, 2007. http://dx.doi.org/10.1109/iccima.2007.130.
