Journal articles on the topic 'Rotation invariant feature'

Consult the top 50 journal articles for your research on the topic 'Rotation invariant feature.'


1

Lai, Yi Qiang. "Rotation Moment Invariant Feature Extraction Techniques for Image Matching." Applied Mechanics and Materials 721 (December 2014): 775–78. http://dx.doi.org/10.4028/www.scientific.net/amm.721.775.

Abstract:
In recent years, the extraction of invariant image features has gained increasing attention in the field of image matching. Various methods have been used to match images successfully in a number of applications, but in most of the literature the rotation-invariance properties of these moment invariants have not been studied widely. In this paper, we present a novel method based on Polar Harmonic Transforms (PHTs), which consist of a set of orthogonal projection bases, to extract rotation-invariant moment features. The experimental results show that the kernel computation of PHTs is simple and that image features are extracted accurately for image matching. Hence, polar harmonic transforms provide a powerful tool for image matching.
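The invariance this abstract relies on can be seen directly: a complex moment taken against an angular harmonic exp(-i·l·θ) picks up only a phase factor under rotation, so its magnitude is a rotation-invariant feature. A minimal numpy sketch of that idea (not the authors' code; the radial kernel below is a simple placeholder, whereas true PHTs use orthogonal radial bases):

```python
import numpy as np

def angular_moment(img, l):
    """Complex moment of `img` against an angular harmonic exp(-i*l*theta).

    The radial part here is a placeholder kernel r**2; Polar Harmonic
    Transforms use orthogonal radial bases instead, but the
    rotation-invariance argument is identical.
    """
    n = img.shape[0]
    ys, xs = np.mgrid[0:n, 0:n]
    # map pixel centres onto [-1, 1] x [-1, 1]
    x = (2 * xs + 1) / n - 1
    y = (2 * ys + 1) / n - 1
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = r <= 1.0                       # restrict to the unit disk
    kernel = (r ** 2) * np.exp(-1j * l * theta)
    return np.sum(img * kernel * mask)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
m = angular_moment(img, l=3)
m_rot = angular_moment(np.rot90(img), l=3)    # rotate the image by 90 degrees
# rotation multiplies the moment by a phase; the magnitude is unchanged
print(np.isclose(abs(m), abs(m_rot)))         # True (up to float error)
```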
2

Liu, Yilin, Xuqiang Shao, and Zhaohui Wu. "Rotation Invariant Predictor-Corrector for Smoothed Particle Hydrodynamics Data Visualization." Symmetry 13, no. 3 (February 26, 2021): 382. http://dx.doi.org/10.3390/sym13030382.

Abstract:
To extract vortex features more accurately, this study proposes a new vortex-feature-extraction method for Smoothed Particle Hydrodynamics data that combines rotation invariance with a predictor-corrector method. The original rotation-invariance criterion is limited: it can only extract vortex features that rotate at constant speed. We weaken this limitation so that rotation invariance can be applied whenever a specific axis exists in the fluid to replace the axis the criterion requires; as long as such an axis exists, the modified rotation-invariant method can be used. Meanwhile, vortex features are extracted with a predictor-corrector method: by computing the cross product of the parallel vector field, seed candidates for vortex core lines are obtained, and the true seed points are selected using the rotation-invariant Jacobian. Finally, each seed point and the series of candidates produced by the predictor-corrector step are connected to draw the vortex core lines. Compared with the original method, the rotation-invariant predictor-corrector method not only broadens the scope of application but also preserves extraction accuracy. Because our method adds the step of computing the rotation-invariant Jacobian, its performance is slightly lower, but as the number of particles increases it gradually approaches that of the original method.
3

You, Yang, Yujing Lou, Qi Liu, Yu-Wing Tai, Lizhuang Ma, Cewu Lu, and Weiming Wang. "Pointwise Rotation-Invariant Network with Adaptive Sampling and 3D Spherical Voxel Convolution." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12717–24. http://dx.doi.org/10.1609/aaai.v34i07.6965.

Abstract:
Point cloud analysis without pose priors is very challenging in real applications, as the orientations of point clouds are often unknown. In this paper, we propose PRIN, a Pointwise Rotation-Invariant Network: a new point-set learning framework focusing on rotation-invariant feature extraction in point cloud analysis. We construct spherical signals by Density-Aware Adaptive Sampling to deal with distorted point distributions in spherical space. In addition, we propose Spherical Voxel Convolution and Point Re-sampling to extract rotation-invariant features for each point. Our network can be applied to tasks ranging from object classification and part segmentation to 3D feature matching and label alignment. We show that, on datasets with randomly rotated point clouds, PRIN demonstrates better performance than state-of-the-art methods without any data augmentation. We also provide theoretical analysis of the rotation invariance achieved by our methods.
4

Pietikäinen, M., T. Ojala, and Z. Xu. "Rotation-invariant texture classification using feature distributions." Pattern Recognition 33, no. 1 (January 2000): 43–52. http://dx.doi.org/10.1016/s0031-3203(99)00032-1.

5

Bin Sheng, Hanqiu Sun, Shunbin Chen, Xuehui Liu, and Enhua Wu. "Colorization Using the Rotation-Invariant Feature Space." IEEE Computer Graphics and Applications 31, no. 2 (March 2011): 24–35. http://dx.doi.org/10.1109/mcg.2011.18.

6

Pun, Chi-Man. "Rotation-invariant texture feature for image retrieval." Computer Vision and Image Understanding 89, no. 1 (January 2003): 24–43. http://dx.doi.org/10.1016/s1077-3142(03)00012-2.

7

Yu, Gang, Ying Zi Lin, and Sagar Kamarthi. "Wavelets-Based Feature Extraction for Texture Classification." Advanced Materials Research 97-101 (March 2010): 1273–76. http://dx.doi.org/10.4028/www.scientific.net/amr.97-101.1273.

Abstract:
Texture classification is a necessary task in a wide variety of application areas such as manufacturing, textiles, and medicine. In this paper, we propose a novel wavelet-based feature extraction method for robust, scale- and rotation-invariant texture classification. The method divides the 2-D wavelet coefficient matrices into 2-D clusters and then computes features from the energies inherent in these clusters. The features, which carry the information needed to classify texture images, are computed from the energy content of the clusters, and the resulting feature vectors are input to a neural network for texture classification. The results show that the discrimination performance obtained with the proposed cluster-based feature extraction method is superior to that of conventional feature extraction methods, and that the method is robust for rotation- and scale-invariant texture classification.
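As a rough illustration of energy-type wavelet features, the sketch below uses one level of the orthonormal 2-D Haar transform in plain numpy; it is a simplified stand-in, since the paper further splits the coefficient matrices into 2-D clusters before computing the energies:

```python
import numpy as np

def haar2d_energies(img):
    """One level of the orthonormal 2-D Haar transform, returning the
    energy of each subband (LL, LH, HL, HH) as a simple texture feature."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # approximation
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return {k: float(np.sum(v ** 2)) for k, v in
            {"LL": ll, "LH": lh, "HL": hl, "HH": hh}.items()}

rng = np.random.default_rng(1)
img = rng.random((8, 8))
feats = haar2d_energies(img)
# the transform is orthonormal, so subband energies sum to the image energy
print(np.isclose(sum(feats.values()), np.sum(img ** 2)))   # True
```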
8

Ajayi, O. G. "PERFORMANCE ANALYSIS OF SELECTED FEATURE DESCRIPTORS USED FOR AUTOMATIC IMAGE REGISTRATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (August 21, 2020): 559–66. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-559-2020.

Abstract:
Automatic detection and extraction of corresponding features is crucial in the development of an automatic image registration algorithm. Different feature descriptors have been developed and implemented in image registration and other disciplines. These descriptors affect the speed of feature extraction and the number of extracted conjugate features, which in turn affect the processing speed and overall accuracy of the registration scheme. This article reviews the performance of the most widely implemented feature descriptors in an automatic image registration scheme. Ten (10) descriptors were selected and analysed under seven (7) conditions: invariance to rotation, scale and zoom, robustness, repeatability, localization and efficiency, using UAV-acquired images. The analysis shows that although four (4) descriptors performed better than the other six (6), no single feature descriptor can be affirmed to be the best, as different descriptors perform differently under different conditions. The Modified Harris and Stephen Corner Detector (MHCD) proved to be invariant to scale and zoom and excellent in robustness, repeatability, localization and efficiency, but variant to rotation. The Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF) and Maximally Stable Extremal Region (MSER) algorithms proved invariant to scale, zoom and rotation, and very good in terms of repeatability, localization and efficiency, though MSER proved not as robust as SIFT and SURF. The implication of these findings is that the choice of feature descriptor must be informed by the imaging conditions faced by the image registration analyst.
9

Wang, Guoli, Bin Fan, Zhili Zhou, and Chunhong Pan. "Ordinal pyramid coding for rotation invariant feature extraction." Neurocomputing 242 (June 2017): 150–60. http://dx.doi.org/10.1016/j.neucom.2017.02.071.

10

Ye, Zhang, and Qu Hongsong. "Rotation invariant feature lines transform for image matching." Journal of Electronic Imaging 23, no. 5 (September 5, 2014): 053002. http://dx.doi.org/10.1117/1.jei.23.5.053002.

11

Wu, Ronald, and Henry Stark. "Rotation-invariant pattern recognition using optimum feature extraction." Applied Optics 24, no. 2 (January 15, 1985): 179. http://dx.doi.org/10.1364/ao.24.000179.

12

Yu, Ying, Cai Lin Dong, Bo Wen Sheng, Wei Dan Zhong, and Xiang Lin Zou. "The New Approach to the Invariant Feature Extraction Using Ridgelet Transform." Applied Mechanics and Materials 651-653 (September 2014): 2241–44. http://dx.doi.org/10.4028/www.scientific.net/amm.651-653.2241.

Abstract:
To meet the requirement of multi-directional selectivity, this paper proposes a new approach to invariant feature extraction for handwritten Chinese characters based on the ridgelet transform. First, the Radon transform converts a rotation of the original image into a circular shift in the Radon domain. Then, because the Fourier transform is invariant to row-wise shifts, a one-dimensional Fourier transform is applied in the Radon domain, yielding magnitude matrices whose rotation invariance is beneficial for rotation-invariant feature extraction. Next, a one-dimensional wavelet transform is carried out along the rows, giving good frequency selectivity and making it possible to extract sub-line features at the appropriate frequencies. Finally, the mean values, standard deviations, and energy values extracted from the ridgelet sub-bands form the feature vector. The approach described in the paper can satisfy the requirements of automatic form processing for the recognition of handwritten Chinese characters.
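The key step in this pipeline, namely that the Fourier magnitude cancels the circular shift a rotation induces in the Radon domain, can be shown in a few lines of numpy (a toy demonstration on a single angular profile, not the paper's implementation):

```python
import numpy as np

# In the Radon domain, rotating the input image circularly shifts the
# projection profile along the angle axis.  The magnitude of a 1-D Fourier
# transform is unchanged by such a shift, which is the invariance exploited.
rng = np.random.default_rng(2)
profile = rng.random(180)                 # e.g. one Radon row over 180 angles
shifted = np.roll(profile, 37)            # a 37-degree rotation of the image
mag = np.abs(np.fft.fft(profile))
mag_shifted = np.abs(np.fft.fft(shifted))
print(np.allclose(mag, mag_shifted))      # True: the shift only changes phase
```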
13

LI, QIAOLIANG, HUISHENG ZHANG, and TIANFU WANG. "SCALE INVARIANT FEATURE MATCHING USING ROTATION-INVARIANT DISTANCE FOR REMOTE SENSING IMAGE REGISTRATION." International Journal of Pattern Recognition and Artificial Intelligence 27, no. 02 (March 2013): 1354004. http://dx.doi.org/10.1142/s0218001413540049.

Abstract:
The scale invariant feature transform (SIFT) has been widely used in image matching. But when SIFT is introduced into the registration of remote sensing images, keypoint pairs that should match are often assigned two different values of main orientation, owing to the significant difference in image intensity between remote sensing image pairs, so many incorrect keypoint matches appear. This paper presents a method that uses a rotation-invariant distance instead of the Euclidean distance to match the scale invariant feature vectors associated with the keypoints. In the proposed method, the feature vectors are reorganized into feature matrices, and the fast Fourier transform (FFT) is introduced to compute the rotation-invariant distance between the matrices. Many more correct matches are obtained by the proposed method, since the rotation-invariant distance is independent of the main orientation of the keypoints. Experimental results indicate that the proposed method improves match performance compared with other state-of-the-art methods in terms of correct match rate and alignment accuracy.
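A shift-invariant distance of this kind can be sketched as follows: since ||a − roll(b, s)||² = ||a||² + ||b||² − 2·corr(s), the minimum over all circular shifts s is found with one FFT-based cross-correlation. A small numpy sketch under that reading of the abstract (the paper applies the idea to reorganized SIFT feature matrices rather than raw vectors):

```python
import numpy as np

def rotation_invariant_distance(a, b):
    """Minimum Euclidean distance between `a` and all circular shifts of `b`.

    The circular cross-correlation corr(s) for every shift s is obtained at
    once via the FFT, so the minimum is found in O(n log n) time.
    """
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    d2 = np.sum(a ** 2) + np.sum(b ** 2) - 2.0 * corr
    return float(np.sqrt(np.maximum(d2, 0.0).min()))

rng = np.random.default_rng(3)
v = rng.random(32)
# a circularly shifted copy sits at distance ~0, whatever the shift
print(rotation_invariant_distance(v, np.roll(v, 11)) < 1e-6)   # True
```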
14

SASTRY, CHALLA S., ARUN K. PUJARI, and B. L. DEEKSHATULU. "A FOURIER-RADIAL DESCRIPTOR ALGORITHM FOR INVARIANT FEATURE EXTRACTION." International Journal of Wavelets, Multiresolution and Information Processing 04, no. 01 (March 2006): 197–212. http://dx.doi.org/10.1142/s0219691306001166.

Abstract:
By integrating the Fourier techniques and the edge information obtained using the radial symmetric functions, we propose in this paper an invariant feature extraction algorithm. Unlike the Gabor feature extraction method, the present method does not use direction dependent filters, nor does it use the images in polar form, for rotation invariance. Besides, the present Fourier-Radial invariant feature extraction algorithm, suitable for both the texture and non-texture images, has functional analogy with the Gabor feature extraction method, and hence, is easily implementable. It is mathematically proved, and justified through computations, that the method can generate the invariant and discriminative feature vectors. Our simulation results demonstrate that the method can be used for such applications as content-based image retrieval.
15

B.Daneshvar, M. "SCALE INVARIANT FEATURE TRANSFORM PLUS HUE FEATURE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (August 23, 2017): 27–32. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-27-2017.

Abstract:
This paper presents an enhanced method for extracting invariant features from images based on the Scale Invariant Feature Transform (SIFT). Although SIFT features are invariant to image scale and rotation, additive noise, and changes in illumination, the algorithm suffers from excess keypoints. Moreover, by adding a hue feature, extracted from the combination of hue and illumination values in the HSI colour-space version of the target image, the proposed algorithm can speed up the matching phase. We therefore propose the Scale Invariant Feature Transform plus Hue (SIFTH), which removes excess keypoints based on their Euclidean distances and adds hue to the feature vector to speed up the matching process, which is the aim of feature extraction. In this paper we use the difference of hue features and the mean square error (MSE) of orientation histograms to find the keypoint most similar to the keypoint under processing. The keypoint matching method identifies correct keypoints robustly amid clutter and occlusion while achieving real-time performance, and yields a similarity factor for two keypoints. Moreover, removing excess keypoints with the SIFTH algorithm helps the matching algorithm achieve this goal.
16

Franzius, Mathias, Niko Wilbert, and Laurenz Wiskott. "Invariant Object Recognition and Pose Estimation with Slow Feature Analysis." Neural Computation 23, no. 9 (September 2011): 2289–323. http://dx.doi.org/10.1162/neco_a_00171.

Abstract:
Primates are very good at recognizing objects independent of viewing angle or retinal position, and they outperform existing computer vision systems by far. But invariant object recognition is only one prerequisite for successful interaction with the environment. An animal also needs to assess an object's position and relative rotational angle. We propose here a model that is able to extract object identity, position, and rotation angles. We demonstrate the model behavior on complex three-dimensional objects under translation and rotation in depth on a homogeneous background. A similar model has previously been shown to extract hippocampal spatial codes from quasi-natural videos. The framework for mathematical analysis of this earlier application carries over to the scenario of invariant object recognition. Thus, the simulation results can be explained analytically even for the complex high-dimensional data we employed.
17

CHEN, G. Y., and W. F. XIE. "CONTOUR-BASED FEATURE EXTRACTION USING DUAL-TREE COMPLEX WAVELETS." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 07 (November 2007): 1233–45. http://dx.doi.org/10.1142/s0218001407005867.

Abstract:
A contour-based feature extraction method is proposed using the dual-tree complex wavelet transform and the Fourier transform. Features are extracted from the 1D signals r and θ, which reduces processing memory and time. The approximate shift-invariance of the dual-tree complex wavelet transform and the Fourier transform guarantees that the method is invariant to translation, rotation and scaling. The method is used to recognize aircraft across different rotation angles and scaling factors. Experimental results show that it achieves better recognition rates than a method using only Fourier features and than Granlund's method. Its success is due to the desirable shift-invariance of the dual-tree complex wavelet transform, the translation invariance of the Fourier spectrum, and our new complete representation of the pattern's outer contour.
18

Ahmed, Farid, and Mohammad A. Karim. "Filter-feature-based rotation-invariant joint Fourier transform correlator." Applied Optics 34, no. 32 (November 10, 1995): 7556. http://dx.doi.org/10.1364/ao.34.007556.

19

Barajas-García, Carolina, Selene Solorza-Calderón, and Everardo Gutiérrez-López. "Scale, translation and rotation invariant Wavelet Local Feature Descriptor." Applied Mathematics and Computation 363 (December 2019): 124594. http://dx.doi.org/10.1016/j.amc.2019.124594.

20

Chen, G. Y., T. D. Bui, and A. Krzyżak. "Rotation invariant feature extraction using Ridgelet and Fourier transforms." Pattern Analysis and Applications 9, no. 1 (May 2006): 83–93. http://dx.doi.org/10.1007/s10044-006-0028-8.

21

Khan, Sajid, Dong-Ho Lee, Asif Khan, Ahmad Waqas, Abdul Rehman Gilal, and Zahid Hussain Khand. "A Digital Camera-Based Rotation-Invariant Fingerprint Verification Method." Scientific Programming 2020 (May 15, 2020): 1–10. http://dx.doi.org/10.1155/2020/9758049.

Abstract:
Fingerprint registration and verification is an active area of research in the field of image processing. Usually, fingerprints are obtained from sensors; however, there is recent interest in using images of fingers obtained from digital cameras instead of scanners. An unaddressed issue in the processing of fingerprints extracted from digital images is the angle of the finger during image capture. To match a fingerprint with 100% accuracy, the angles of the matching features should be similar. This paper proposes a rotation and scale-invariant decision-making method for the intelligent registration and recognition of fingerprints. A digital image of a finger is taken as the input and compared with a reference image for derotation. Derotation is performed by applying binary segmentation on both images, followed by the application of speeded up robust feature (SURF) extraction and then feature matching. Potential inliers are extracted from matched features by applying the M-estimator. Matched inlier points are used to form a homography matrix, the difference in the rotation angles of the finger in both the input and reference images is calculated, and finally, derotation is performed. Input fingerprint features are extracted and compared or stored based on the decision support system required for the situation.
22

Qu, Zhong, and Zheng Yong Wang. "The Improved Algorithm of Scale Invariant Feature Transform on Palmprint Recognition." Advanced Materials Research 186 (January 2011): 565–69. http://dx.doi.org/10.4028/www.scientific.net/amr.186.565.

Abstract:
This paper presents a new method for palmprint recognition based on an improved scale invariant feature transform (SIFT) algorithm that combines the Euclidean distance with weighted sub-regions. The method is invariant to scale, rotation, affine and perspective transformations and to illumination, and is also robust to target motion, occlusion, noise and other factors. Simulation results show that the recognition rate of the improved SIFT algorithm is higher than that of the standard SIFT algorithm.
23

Yang, Fengping, Bodi Ma, Jinrong Wang, Honggang Gao, and Zhenbao Liu. "Target Detection of UAV Aerial Image Based on Rotational Invariant Depth Denoising Automatic Encoder." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 38, no. 6 (December 2020): 1345–51. http://dx.doi.org/10.1051/jnwpu/20203861345.

Abstract:
Obtaining aerial image information of a target scene with an unmanned aerial vehicle (UAV) offers wide coverage, strong mobility and high efficiency, and is widely used in urban traffic monitoring, vehicle detection, oil pipeline inspection, regional surveys and other applications. To address the difficulties of aerial image object detection, such as objects appearing at multiple orientations, small pixel sizes and interference from UAV body vibration, a novel aerial image object detection model based on a rotation-invariant deep denoising autoencoder is proposed in this paper. First, regions of interest in the aerial image are extracted by selective search, and the radial gradient of each region is calculated. Then, rotation-invariant feature descriptors are obtained from the radial gradient features, and the deep denoising autoencoder filters out the noise in the original data and extracts deep features from the descriptors. Finally, experimental results show that this method achieves high accuracy for aerial image target detection and has good rotation invariance.
24

Krishnamoorthi, R., and S. Sathiya Devi. "Rotation Invariant Texture Image Retrieval with Orthogonal Polynomials Model." International Journal of Computer Vision and Image Processing 1, no. 4 (October 2011): 27–49. http://dx.doi.org/10.4018/ijcvip.2011100103.

Abstract:
The exponential growth of digital image data has created great demand for effective and efficient schemes and tools for browsing, indexing and retrieving images from large image databases. To address this demand, this paper proposes a new content-based image retrieval technique built on an orthogonal polynomials model. The proposed model extracts texture features that represent the dominant directions, gray-level variations and frequency spectrum of the image under analysis, and the resulting texture feature vector is rotation- and scale-invariant. A new distance measure in the frequency domain, called Deansat, is proposed as a similarity measure that uses the proposed feature vector for efficient image retrieval. The efficiency of the proposed retrieval technique is evaluated on the standard Brodatz, USC-SIPI and VisTex databases and compared with Discrete Cosine Transform (DCT), Tree-Structured Wavelet Transform (TWT) and Gabor-filter-based retrieval schemes. The experimental results reveal that the proposed method outperforms the others at lower computational cost.
25

Wu, Shu Guang, Shu He, and Xia Yang. "The Application of SIFT Method towards Image Registration." Advanced Materials Research 1044-1045 (October 2014): 1392–96. http://dx.doi.org/10.4028/www.scientific.net/amr.1044-1045.1392.

Abstract:
The Scale Invariant Feature Transform (SIFT) is commonly used in object recognition, but the standard algorithm suffers from large memory consumption and low computation speed. Among image registration methods based on point features, the SIFT feature is invariant to image scale and rotation and provides robust matching across a substantial range of affine distortion. Experiments show that, while keeping registration accuracy stable, the proposed algorithm solves the problem of high memory requirements and greatly improves efficiency, making it applicable to registering remote sensing images of large areas.
26

Javeed.S, Imran, Aanandha Saravanan, and Rajendra Kumar. "Efficient Biometric Recognition Methodology using Guided Filtering and SIFT Feature Matching." International Journal of Engineering & Technology 7, no. 3.1 (August 4, 2018): 23. http://dx.doi.org/10.14419/ijet.v7i3.1.16789.

Abstract:
A novel infrared finger-vein biometric identification method is proposed using a linear Gabor filter with a guidance image and SIFT feature matching. The guided linear Gabor filter extracts the finger-vein pattern without segmentation and performs well even on poor-quality images affected by low contrast, illumination imbalance or noise. First, we use the guided linear Gabor filter for ridge detection, as with a simple linear Gabor filter, while also enhancing the image through edge-preserving smoothing. Second, we use SIFT feature matching for verification: SIFT (Scale Invariant Feature Transform) extracts features possessing rotation and shift invariance, providing a better matching rate. The simulation analysis shows that the proposed system provides an effective feature for finger-vein biometric identification.
27

Dong, En Zeng, Yan Hong Fu, and Ji Gang Tong. "Face Recognition by PCA and Improved LBP Fusion Algorithm." Applied Mechanics and Materials 734 (February 2015): 562–67. http://dx.doi.org/10.4028/www.scientific.net/amm.734.562.

Abstract:
This paper proposes a theoretically efficient approach to face recognition based on principal component analysis (PCA) and rotation-invariant uniform local binary pattern (LBP) texture features, in order to weaken the effects of varying illumination conditions and facial expressions. First, the rotation-invariant uniform LBP operator is applied to extract the local texture features of the face images. PCA is then used to reduce the dimensionality of the extracted features and obtain the eigenfaces. Finally, nearest-distance classification is used to distinguish each face. The method has been evaluated on the Yale and ATR-Jaffe face databases. The results demonstrate that the proposed method is superior to standard PCA, with a higher recognition rate, and that the algorithm is strongly robust to changes in illumination, pose, rotation and expression.
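The rotation-invariant LBP operator mentioned here builds on the classic 'ri' mapping, in which each 8-bit pattern is replaced by the minimum over its circular bit rotations; rotating the image rotates each pixel's neighbourhood, so the code histogram is unchanged. A minimal numpy sketch of the basic rotation-invariant LBP (without the 'uniform' refinement the paper also uses):

```python
import numpy as np

def lbp_ri(code, bits=8):
    """Rotation-invariant LBP mapping: the minimum value over all circular
    rotations of the 8-bit pattern (the 'ri' mapping of Ojala et al.)."""
    mask = (1 << bits) - 1
    return min(((code >> i) | (code << (bits - i))) & mask for i in range(bits))

def lbp_codes(img):
    """Basic 8-neighbour LBP codes for the interior pixels of a 2-D array."""
    c = img[1:-1, 1:-1]
    # the 8 neighbours, listed in circular order around the centre pixel
    neigh = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
             img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, n in enumerate(neigh):
        codes |= (n >= c).astype(np.int32) << bit
    return codes

rng = np.random.default_rng(4)
img = rng.random((16, 16))
h = sorted(lbp_ri(v) for v in lbp_codes(img).ravel())
h_rot = sorted(lbp_ri(v) for v in lbp_codes(np.rot90(img)).ravel())
# rotating the image only rotates each bit pattern, so the histograms match
print(h == h_rot)    # True
```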
28

Zhao, Z. M., X. M. Gao, D. N. Jiang, and Y. Q. Zhang. "RESEARCH ON AIRCRAFT TARGET DETECTION ALGORITHM BASED ON IMPROVED RADIAL GRADIENT TRANSFORMATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 2449–51. http://dx.doi.org/10.5194/isprs-archives-xlii-3-2449-2018.

Abstract:
To address the problem that targets may appear at different orientations in unmanned aerial vehicle (UAV) images, target detection algorithms based on rotation-invariant features are studied, and this paper proposes a Rotation-Invariant Fast Features (RIFF) method accelerated by a look-up table and polar coordinates for aircraft target detection. Experiments show that the detection performance of this method is essentially equal to that of RIFF while the computational efficiency is greatly improved.
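The radial-gradient idea underlying this family of descriptors can be stated compactly: projecting each gradient onto the radial and tangential directions through its pixel yields two coordinates that are unchanged when positions and gradients rotate together. A small numpy sketch of this property (an illustration of the general radial-gradient construction, not the paper's RIFF implementation):

```python
import numpy as np

def radial_gradient_coords(p, g, center=np.zeros(2)):
    """Project gradient `g` at position `p` onto the radial and tangential
    directions relative to `center`.  These two coordinates are unchanged
    when the whole patch (positions and gradients together) is rotated,
    which is the basis of radial-gradient descriptors such as RIFF."""
    r = p - center
    r = r / np.linalg.norm(r)             # radial unit vector
    t = np.array([-r[1], r[0]])           # tangential unit vector
    return np.array([g @ r, g @ t])

def rot(v, angle):
    ca, sa = np.cos(angle), np.sin(angle)
    return np.array([ca * v[0] - sa * v[1], sa * v[0] + ca * v[1]])

p = np.array([2.0, 1.0])                  # pixel position
g = np.array([0.3, -0.7])                 # image gradient at that pixel
a = 1.234                                 # arbitrary rotation angle
before = radial_gradient_coords(p, g)
after = radial_gradient_coords(rot(p, a), rot(g, a))
print(np.allclose(before, after))         # True: the coordinates are invariant
```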
29

Zhengli Zhu, Chunxia Zhao, Yingkun Hou, and Hua Gao. "Rotation-Invariant Texture Image Retrieval Based on Combined Feature Sets." International Journal of Digital Content Technology and its Applications 5, no. 3 (March 31, 2011): 287–92. http://dx.doi.org/10.4156/jdcta.vol5.issue3.28.

30

Bezerra Ramalho, Geraldo L., Daniel S. Ferreira, Pedro P. Rebouças Filho, and Fátima N. Sombra de Medeiros. "Rotation-invariant feature extraction using a structural co-occurrence matrix." Measurement 94 (December 2016): 406–15. http://dx.doi.org/10.1016/j.measurement.2016.08.012.

31

Hayati, Saeed, and Mohammad Reza Ahmadzadeh. "WIRIF: Wave interference-based rotation invariant feature for texture description." Signal Processing 151 (October 2018): 160–71. http://dx.doi.org/10.1016/j.sigpro.2018.05.001.

32

Sastry, Ch S., Arun K. Pujari, B. L. Deekshatulu, and C. Bhagvati. "A wavelet based multiresolution algorithm for rotation invariant feature extraction." Pattern Recognition Letters 25, no. 16 (December 2004): 1845–55. http://dx.doi.org/10.1016/j.patrec.2004.07.011.

33

LI, Lei-da, Bao-long GUO, and Lei GUO. "Rotation, scaling and translation invariant image watermarking using feature points." Journal of China Universities of Posts and Telecommunications 15, no. 2 (June 2008): 82–87. http://dx.doi.org/10.1016/s1005-8885(08)60089-8.

34

Torney, Colin J., Andrew P. Dobson, Felix Borner, David J. Lloyd-Jones, David Moyer, Honori T. Maliti, Machoke Mwita, Howard Fredrick, Markus Borner, and J. Grant C. Hopcraft. "Assessing Rotation-Invariant Feature Classification for Automated Wildebeest Population Counts." PLOS ONE 11, no. 5 (May 26, 2016): e0156342. http://dx.doi.org/10.1371/journal.pone.0156342.

35

Ezhilmaran, D., and Rose Bindu Joseph. "Fuzzy based finger vein recognition with rotation invariant feature matching." IOP Conference Series: Materials Science and Engineering 263 (November 2017): 042155. http://dx.doi.org/10.1088/1757-899x/263/4/042155.

36

Pan, Zhibin, Zhengyi Li, Hongcheng Fan, and Xiuquan Wu. "Feature based local binary pattern for rotation invariant texture classification." Expert Systems with Applications 88 (December 2017): 238–48. http://dx.doi.org/10.1016/j.eswa.2017.07.007.

37

Wang, Shiqi, Mingfang Jiang, Jiaohua Qin, Hengfu Yang, and Zhichen Gao. "A Secure Rotation Invariant LBP Feature Computation in Cloud Environment." Computers, Materials & Continua 68, no. 3 (2021): 2979–93. http://dx.doi.org/10.32604/cmc.2021.017094.

38

HYDER, MASHUD, MD MONIRUL ISLAM, M. A. H. AKHAND, and KAZUYUKI MURASE. "SYMMETRY AXIS BASED OBJECT RECOGNITION UNDER TRANSLATION, ROTATION AND SCALING." International Journal of Neural Systems 19, no. 01 (February 2009): 25–42. http://dx.doi.org/10.1142/s0129065709001811.

Full text
Abstract:
This paper presents a new approach, known as symmetry axis based feature extraction and recognition (SAFER), for recognizing objects under translation, rotation and scaling. Unlike most previous invariant object recognition (IOR) systems, SAFER puts emphasis on both the simplicity and the accuracy of the recognition system. To achieve simplicity, it uses simple formulae for extracting invariant features from an object. The feature extraction scheme is based on the axis of symmetry and on angles of concentric circles drawn around the object. SAFER divides the extracted features into a number of groups based on their similarity. To improve recognition performance, SAFER uses a number of neural networks (NNs) instead of a single NN for training and recognition of the extracted features. SAFER has been tested on two real-world problems: English characters in two different fonts, and images of different shapes. The experimental results show that SAFER produces good recognition performance in comparison with other algorithms.
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Baifan, Hong Chen, Baojun Song, and Grace Gong. "TIF-Reg: Point Cloud Registration with Transform-Invariant Features in SE(3)." Sensors 21, no. 17 (August 27, 2021): 5778. http://dx.doi.org/10.3390/s21175778.

Full text
Abstract:
Three-dimensional point cloud registration (PCReg) has a wide range of applications in computer vision, 3D reconstruction and medical fields. Although numerous advances have been achieved in the field of point cloud registration in recent years, large-scale rigid transformation is a problem that most algorithms still cannot effectively handle. To solve this problem, we propose a point cloud registration method based on learning and transform-invariant features (TIF-Reg). Our algorithm includes four modules, which are the transform-invariant feature extraction module, deep feature embedding module, corresponding point generation module and decoupled singular value decomposition (SVD) module. In the transform-invariant feature extraction module, we design TIF in SE(3) (which means the 3D rigid transformation space) which contains a triangular feature and local density feature for points. It fully exploits the transformation invariance of point clouds, making the algorithm highly robust to rigid transformation. The deep feature embedding module embeds TIF into a high-dimension space using a deep neural network, further improving the expression ability of features. The corresponding point cloud is generated using an attention mechanism in the corresponding point generation module, and the final transformation for registration is calculated in the decoupled SVD module. In an experiment, we first train and evaluate the TIF-Reg method on the ModelNet40 dataset. The results show that our method keeps the root mean squared error (RMSE) of rotation within 0.5° and the RMSE of translation error close to 0 m, even when the rotation is up to [−180°, 180°] or the translation is up to [−20 m, 20 m]. We also test the generalization of our method on the TUM3D dataset using the model trained on ModelNet40. The results show that our method's errors are close to the experimental results on ModelNet40, which verifies the good generalization ability of our method. All experiments prove that the proposed method is superior to state-of-the-art PCReg algorithms in terms of accuracy and complexity.
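The transform invariance this abstract relies on can be illustrated with a toy example: pairwise distances between points are preserved by any rigid transform, so features built from them are SE(3)-invariant. Below is a minimal NumPy sketch in the spirit of the paper's TIF, not its exact triangular/density feature; the function name and the `k` parameter are illustrative:

```python
import numpy as np

def knn_distance_feature(points, k=3):
    """Sketch of a transform-invariant point feature: for every point,
    the sorted distances to its k nearest neighbours. Distances are
    preserved by any rigid transform, so the feature is SE(3)-invariant."""
    # pairwise distance matrix, shape (n, n)
    diff = points[:, None, :] - points[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    d.sort(axis=1)          # sort each row ascending
    return d[:, 1:k + 1]    # skip the zero self-distance
```

Applying any rotation and translation to the point set leaves the feature matrix unchanged, which is the property the paper's extraction module builds on.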
APA, Harvard, Vancouver, ISO, and other styles
40

Wiskott, Laurenz, and Terrence J. Sejnowski. "Slow Feature Analysis: Unsupervised Learning of Invariances." Neural Computation 14, no. 4 (April 1, 2002): 715–70. http://dx.doi.org/10.1162/089976602317318938.

Full text
Abstract:
Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
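The core of linear SFA can be sketched in a few lines of NumPy: whiten the signal, differentiate it in time, and keep the directions in which the derivative has the least variance. This is an illustrative reduction of the algorithm (no nonlinear expansion, finite differences in place of the time derivative), not the authors' implementation:

```python
import numpy as np

def linear_sfa(x, n_components=1):
    """Minimal linear Slow Feature Analysis.

    x: array of shape (T, D), a multivariate time series.
    Returns the n_components slowest features, shape (T, n_components)."""
    x = x - x.mean(axis=0)                      # center the signal
    evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
    W = evecs / np.sqrt(evals)                  # whitening matrix
    z = x @ W                                   # whitened signal
    dz = np.diff(z, axis=0)                     # finite-difference derivative
    # slowest directions = eigenvectors of the derivative covariance
    # with the smallest eigenvalues (eigh returns them in ascending order)
    _, devecs = np.linalg.eigh(np.cov(dz, rowvar=False))
    return z @ devecs[:, :n_components]
```

On a linear mixture of a slow sinusoid and a fast one, the first output recovers the slow source up to sign and scale.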
APA, Harvard, Vancouver, ISO, and other styles
41

Lejeune, Claude, and Yunlong Sheng. "Optoneural system for invariant pattern recognition." Canadian Journal of Physics 71, no. 9-10 (September 1, 1993): 405–9. http://dx.doi.org/10.1139/p93-063.

Full text
Abstract:
An optoneural system is developed for invariant pattern recognition. The system consists of an optical correlator and a neural network. The correlator uses Fourier–Mellin spatial filters (FMFs) for feature extraction. The FMF yields a unique output pattern for an input object. The present method works only with one object present in the input scene. The optical features extracted from the output pattern are shift, scale, and rotation invariant and are used as input to the neural network. The neural network is a multilayer feedforward net with a back-propagation learning rule. Because of the substantial reduction of the dimension of the feature vectors provided by the optical FMF, the small neural network is easily simulated on a personal computer. Optical experimental results are shown.
APA, Harvard, Vancouver, ISO, and other styles
42

Li, X. G., C. Ren, T. X. Zhang, Z. L. Zhu, and Z. G. Zhang. "UNMANNED AERIAL VEHICLE IMAGE MATCHING BASED ON IMPROVED RANSAC ALGORITHM AND SURF ALGORITHM." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W10 (February 7, 2020): 67–70. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w10-67-2020.

Full text
Abstract:
Abstract. A UAV image matching method based on the RANSAC (Random Sample Consensus) algorithm and the SURF (Speeded-Up Robust Features) algorithm is proposed. The SURF algorithm combines fast computation with good rotation, scale, and illumination (brightness) invariance, and is robust. The RANSAC algorithm can effectively eliminate mismatched point pairs. A pre-verification experiment and a basic verification experiment are added to the RANSAC algorithm, which improves its rejection of mismatches and its running speed. The experimental results show that, compared with the SURF, SIFT (Scale-Invariant Feature Transform) and ORB (Oriented FAST and Rotated BRIEF) algorithms, the proposed algorithm is superior in terms of matching accuracy and matching speed, and its robustness is higher.
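The hypothesize-and-verify loop at the heart of RANSAC can be sketched on a toy problem: estimating a 2D translation between matched point sets containing outliers. This is a generic illustration of the consensus idea, not the paper's improved pre-verification variant:

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=1.0, rng=None):
    """Toy RANSAC: estimate a 2D translation between matched point sets
    with outliers. Each iteration hypothesises a model from a minimal
    sample (one match), counts inliers, and keeps the best hypothesis."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))                 # minimal sample: one match
        t = dst[i] - src[i]                        # hypothesised translation
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # re-estimate the model from the full consensus set
    best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_t, best_inliers
```

With 30% gross outliers the loop still recovers the true translation, because a single clean match suffices to hypothesise the correct model and the consensus count identifies it.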
APA, Harvard, Vancouver, ISO, and other styles
43

KITAGAWA, Masamichi, and Ikuko SHIMIZU. "Memory Saving Feature Descriptor Using Scale and Rotation Invariant Patches around the Feature Points." IEICE Transactions on Information and Systems E102.D, no. 5 (May 1, 2019): 1106–10. http://dx.doi.org/10.1587/transinf.2018edl8176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Cui, Song, Miaozhong Xu, Ailong Ma, and Yanfei Zhong. "Modality-Free Feature Detector and Descriptor for Multimodal Remote Sensing Image Registration." Remote Sensing 12, no. 18 (September 10, 2020): 2937. http://dx.doi.org/10.3390/rs12182937.

Full text
Abstract:
The nonlinear radiation distortions (NRD) among multimodal remote sensing images bring enormous challenges to image registration. Traditional feature-based registration methods commonly use the image intensity or gradient information to detect and describe features, which is sensitive to NRD. However, the nonlinear mapping of the corresponding features of the multimodal images often results in failure of the feature matching, as well as the image registration. In this paper, a modality-free multimodal remote sensing image registration method (SRIFT) is proposed for the registration of multimodal remote sensing images, which is invariant to scale, radiation, and rotation. In SRIFT, the nonlinear diffusion scale (NDS) space is first established to construct a multi-scale space. A local orientation and scale phase congruency (LOSPC) algorithm is then used to map the features of images with NRD into one-to-one correspondence, yielding sufficiently stable key points. In the feature description stage, a rotation-invariant coordinate (RIC) system is adopted to build a descriptor, without requiring estimation of the main direction. The experiments undertaken in this study included one set of simulated data experiments and nine groups of experiments with different types of real multimodal remote sensing images with rotation and scale differences (including synthetic aperture radar (SAR)/optical, digital surface model (DSM)/optical, light detection and ranging (LiDAR) intensity/optical, near-infrared (NIR)/optical, short-wave infrared (SWIR)/optical, classification/optical, and map/optical image pairs), to test the proposed algorithm from both quantitative and qualitative aspects. The experimental results showed that the proposed method has strong robustness to NRD, being invariant to scale, radiation, and rotation, and the achieved registration precision was better than that of the state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
45

Liu, Dan, Dawei Du, Libo Zhang, Tiejian Luo, Yanjun Wu, Feiyue Huang, and Siwei Lyu. "Scale Invariant Fully Convolutional Network: Detecting Hands Efficiently." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4344–51. http://dx.doi.org/10.1609/aaai.v33i01.33014344.

Full text
Abstract:
Existing hand detection methods usually follow the pipeline of multiple stages with high computation cost, i.e., feature extraction, region proposal, bounding box regression, and additional layers for rotated region detection. In this paper, we propose a new Scale Invariant Fully Convolutional Network (SIFCN) trained in an end-to-end fashion to detect hands efficiently. Specifically, we merge the feature maps from high to low layers in an iterative way, which handles different scales of hands better with less time overhead than simply concatenating them. Moreover, we develop the Complementary Weighted Fusion (CWF) block to make full use of the distinctive features among multiple layers to achieve scale invariance. To deal with rotated hand detection, we present the rotation map to get rid of complex rotation and derotation layers. Besides, we design the multi-scale loss scheme to accelerate the training process significantly by adding supervision to the intermediate layers of the network. Compared with the state-of-the-art methods, our algorithm shows comparable accuracy and runs 4.23 times faster on the VIVA dataset, and achieves better average precision on the Oxford hand detection dataset at a speed of 62.5 fps.
APA, Harvard, Vancouver, ISO, and other styles
46

Liu, Bang Chao, Shan Juan Xie, and Dong Sun Park. "Finger Vein Recognition Using Optimal Partitioning Uniform Rotation Invariant LBP Descriptor." Journal of Electrical and Computer Engineering 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/7965936.

Full text
Abstract:
As a promising biometric system, finger vein identification has been studied widely and many relevant approaches have been proposed. However, it is hard to extract a satisfactory finger vein pattern because of varying vein thickness, illumination, low-contrast regions, and noise. Moreover, most feature extraction algorithms rely on a high-quality finger vein database and take a long time to process a high-dimensional feature vector. In this paper, we propose two block selection methods, based on an estimate of the amount of information in each block and on the contribution of each block location (measured by the recognition rate of each block position), to reduce feature extraction time and matching time. The specific approach is to identify local finger vein areas that are of low quality or noisy and therefore useless for feature description. Local binary pattern (LBP) descriptors are used to extract the finger vein pattern features. Two finger vein databases are used to test the performance of our algorithm. Experimental results show that the proposed block selection algorithms can reduce the feature vector dimensionality to a large extent.
APA, Harvard, Vancouver, ISO, and other styles
47

Shakoor, Mohammad Hossein, and Reza Boostani. "Extended Mapping Local Binary Pattern Operator for Texture Classification." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 06 (March 30, 2017): 1750019. http://dx.doi.org/10.1142/s0218001417500197.

Full text
Abstract:
In this paper, an Extended Mapping Local Binary Pattern (EMLBP) method is proposed for texture feature extraction. In this method, a new mapping technique that extends nonuniform patterns is suggested, which extracts more discriminative features from textures. This new mapping is tested with several LBP operators, such as CLBP, LBP, and LTP, to improve their classification rates. The proposed approach codes nonuniform patterns into more than one feature. The proposed method is rotation invariant and retains all the strengths of previous approaches. By concatenating and joining two or more histograms, a significant improvement can be made in rotation-invariant texture classification. Implementation of the proposed mapping on the Outex, UIUC and CUReT datasets shows that it can improve classification rates. Furthermore, the introduced mapping can increase the performance of any rotation-invariant LBP, especially for large neighborhoods. The most accurate results of the proposed technique were obtained for CLBP, and are higher than those of some state-of-the-art LBP variants such as multiresolution CLBP and CLBC, DLBP, VZ_MR8, VZ_Joint, LTP, and LBPV.
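The rotation-invariant mapping that these LBP variants build on can be illustrated with a minimal 8-neighbour operator: each binary pattern is replaced by the minimum over its circular bit rotations, so the code histogram is unchanged when the texture rotates in 45° steps. This is the classic ri-LBP baseline, not the EMLBP operator itself:

```python
import numpy as np

def rotation_invariant_lbp(image):
    """Minimal 8-neighbour rotation-invariant LBP: compare each pixel
    with its 8 neighbours in circular order, form an 8-bit code, then
    map the code to the minimum of its 8 circular bit rotations."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    # 8 neighbours in circular (clockwise) order as (dy, dx) offsets
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:h - 1, 1:w - 1]
    bits = [(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] >= center).astype(int)
            for dy, dx in offsets]
    code = sum(b << i for i, b in enumerate(bits))
    # rotation-invariant mapping: minimum over circular bit rotations
    def ri_map(c):
        return min(((c >> r) | (c << (8 - r))) & 0xFF for r in range(8))
    return np.vectorize(ri_map)(code)
```

Rotating a texture by 90° shifts every neighbour pattern by two positions in the circular order, so the histogram of mapped codes is exactly preserved.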
APA, Harvard, Vancouver, ISO, and other styles
48

VYAS, VIBHA S., and PRITI P. REGE. "GEOMETRIC TRANSFORM INVARIANT TEXTURE ANALYSIS WITH MODIFIED CHEBYSHEV MOMENTS BASED ALGORITHM." International Journal of Image and Graphics 09, no. 04 (October 2009): 559–74. http://dx.doi.org/10.1142/s0219467809003587.

Full text
Abstract:
Texture-based geometric invariance, which comprises rotation, scale, and translation (RST) invariance, finds application in various areas, including industrial inspection, estimation of object range and orientation, shape analysis, satellite imaging, and medical diagnosis. Moment-based techniques, apart from being computationally simple compared with other RST-invariant texture operators, are also robust in the presence of noise. Zernike moment (ZM) based techniques are among the well-established methods used for texture identification. As ZMs are continuous moments, errors are introduced when they are discretized for implementation. This error, calculated as the difference between theoretically computed and simulated values, proves to be prominent for fine textures. Therefore, a novel approach to detect the RST-invariant textures present in an image is presented in this paper. The approach calculates discrete Chebyshev moments (CMs) of log-polar transformed images to achieve rotation and scale invariance; the image is made translation invariant by shifting it to its centroid. Samples are collected from the Brodatz and VisTex data sets. Zernike moments and their modifications, along with the proposed scheme, are applied to the same data, and performance is evaluated in terms of RST invariance, noise sensitivity, and redundancy. The performance is also compared with circular Mellin feature extractors.
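The log-polar trick used here is worth a sketch: resampling an image on a log-polar grid turns rotation into a circular shift along the angle axis and uniform scaling into a shift along the log-radius axis, after which shift-invariant descriptors (such as moments) give RST invariance. A minimal nearest-neighbour version follows; it is illustrative, not the paper's implementation, and the grid sizes are arbitrary:

```python
import numpy as np

def log_polar(img, n_r=32, n_theta=64):
    """Sample an image on a log-polar grid around its centre.
    Rotation of the input becomes a circular shift along the theta
    axis; uniform scaling becomes a shift along the log-radius axis."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # log-spaced radii and uniformly spaced angles
    rs = np.exp(np.linspace(0, np.log(r_max), n_r))
    ts = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ys = cy + rs[:, None] * np.sin(ts[None, :])
    xs = cx + rs[:, None] * np.cos(ts[None, :])
    # nearest-neighbour sampling, clipped to the image bounds
    yi = np.clip(np.round(ys).astype(int), 0, h - 1)
    xi = np.clip(np.round(xs).astype(int), 0, w - 1)
    return img[yi, xi]
```

Rotating the input by 90° about its centre shifts the log-polar image by a quarter of the theta axis, which is what makes shift-invariant moments of this representation rotation invariant.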
APA, Harvard, Vancouver, ISO, and other styles
49

Xia, Zhihua, Rui Lv, and Xingming Sun. "Rotation-invariant Weber pattern and Gabor feature for fingerprint liveness detection." Multimedia Tools and Applications 77, no. 14 (December 18, 2017): 18187–200. http://dx.doi.org/10.1007/s11042-017-5517-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Tao, Chao, Yuqi Tang, Chong Fan, and Zhengron Zou. "Hyperspectral Imagery Classification Based on Rotation-Invariant Spectral–Spatial Feature." IEEE Geoscience and Remote Sensing Letters 11, no. 5 (May 2014): 980–84. http://dx.doi.org/10.1109/lgrs.2013.2284007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
