Academic literature on the topic 'Image Processing and Feature Extraction'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Image Processing and Feature Extraction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Image Processing and Feature Extraction"

1. Hema, A., and R. Saravanakumar. "A Survey on Feature Extraction Technique in Image Processing." International Journal of Trend in Scientific Research and Development 2, no. 4 (2018): 448–51. http://dx.doi.org/10.31142/ijtsrd12937.
2. Zou, Ji, Chao Zhang, Zhongjing Ma, Lei Yu, Kaiwen Sun, and Tengfei Liu. "Image Feature Analysis and Dynamic Measurement of Plantar Pressure Based on Fusion Feature Extraction." Traitement du Signal 38, no. 6 (2021): 1829–35. http://dx.doi.org/10.18280/ts.380627.

Abstract:
Footprint recognition and parameter measurement are widely used in fields like medicine, sports, and criminal investigation. Some results have been achieved in the analysis of plantar pressure image features based on image processing. But the common algorithms of image feature extraction often depend on computer processing power and massive datasets. Focusing on the auxiliary diagnosis and treatment of foot rehabilitation of foot laceration patients, this paper explores the image feature analysis and dynamic measurement of plantar pressure based on fusion feature extraction. Firstly, the authors detailed the idea of extracting image features with a fusion algorithm, which integrates wavelet transform and histogram of oriented gradients (HOG) descriptor. Next, the plantar parameters were calculated based on plantar pressure images, and the measurement steps of plantar parameters were given. Finally, the feature extraction effect of the proposed algorithm was verified, and the measured results on plantar parameters were obtained through experiments.
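The fusion idea summarized above, wavelet-transform statistics combined with a HOG descriptor, is easy to prototype. The sketch below is a minimal illustration using PyWavelets and scikit-image; the sub-band statistics, cell sizes, and plain concatenation are assumptions for demonstration, not the published algorithm.

```python
# Minimal sketch of a fused wavelet + HOG feature vector (illustrative only).
import numpy as np
import pywt
from skimage import io, color, transform
from skimage.feature import hog

def fused_features(path, size=(128, 128)):
    gray = color.rgb2gray(io.imread(path))          # load and convert to grayscale
    gray = transform.resize(gray, size)             # normalize spatial resolution

    # Wavelet part: single-level 2D DWT, keep coarse statistics of each sub-band
    cA, (cH, cV, cD) = pywt.dwt2(gray, "haar")
    wavelet_part = np.array([band.mean() for band in (cA, cH, cV, cD)] +
                            [band.std() for band in (cA, cH, cV, cD)])

    # HOG part: histogram of oriented gradients over the whole image
    hog_part = hog(gray, orientations=9, pixels_per_cell=(16, 16),
                   cells_per_block=(2, 2), feature_vector=True)

    # Fusion by simple concatenation of the two descriptors
    return np.concatenate([wavelet_part, hog_part])
```

The resulting vector could feed any downstream classifier or regressor used for pressure-parameter estimation.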
3. Wei, Zhenfeng, and Xiaohua Zhang. "Feature Extraction and Retrieval of Ecommerce Product Images Based on Image Processing." Traitement du Signal 38, no. 1 (2021): 181–90. http://dx.doi.org/10.18280/ts.380119.

Abstract:
The new retail is an industry featured by online ecommerce. One of the key techniques of the industry is the product identification based on image processing. This technique has an important business application value, because it is capable of improving the retrieval efficiency of products and the level of information supervision. To acquire high-level semantics of images and enhance the retrieval effect of products, this paper explores the feature extraction and retrieval of ecommerce product images based on image processing. The improved Fourier descriptor was innovatively integrated into a metric learning-based product image feature extraction network, and the attention mechanism was introduced to realize accurate retrieval of product images. Firstly, the authors detailed how to acquire the product contour and the axis with minimum moment of inertia, and then extracted the shape feature of products. Next, a feature extraction network was established based on the metric learning supervision, which is capable of obtaining distinctive features, and thus realized the extraction of distinctive and classification features of products. Finally, the authors expounded on the product image retrieval method based on cluster attention neural network. The effectiveness of our method was confirmed through experiments. The research results provide a reference for feature extraction and retrieval in other fields of image processing.
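Fourier descriptors of the product contour are the classical shape feature this abstract builds on. The sketch below is a hedged illustration with NumPy and scikit-image; the mean-threshold segmentation, the number of retained coefficients, and the normalization are assumed choices rather than the authors' network front end.

```python
# Hedged sketch: classical Fourier descriptors of a closed object contour.
import numpy as np
from skimage import io, color, measure

def fourier_descriptors(path, n_coeffs=16):
    gray = color.rgb2gray(io.imread(path))
    # Crude foreground/background split; a real pipeline would segment properly
    mask = gray < gray.mean()
    contours = measure.find_contours(mask.astype(float), 0.5)
    boundary = max(contours, key=len)               # longest contour = object outline

    # Encode boundary points as complex numbers and take the FFT
    z = boundary[:, 1] + 1j * boundary[:, 0]
    coeffs = np.fft.fft(z - z.mean())               # subtracting the mean drops translation

    # Keep low-frequency magnitudes, normalized for scale invariance
    mags = np.abs(coeffs[1:n_coeffs + 1])
    return mags / (mags[0] + 1e-12)
```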
4. Zhang, Tao, Zhipeng Li, Myungsoo Shin, Chunxia Wang, Wenli Song, and Lei Lui. "Feature Extraction Method of Snowboard Starting Action Using Vision Sensor Image Processing." Mobile Information Systems 2022 (January 19, 2022): 1–9. http://dx.doi.org/10.1155/2022/2829547.

Abstract:
There is a lot of noise in the snowboard starting action image, which leads to the low accuracy of snowboard starting action feature extraction. We propose a snowboard starting action feature extraction using visual sensor image processing. Firstly, the overlapping images are separated by laser fringe technology. After separation, the middle point of the image is taken as the feature point, and the interference factors are filtered by laser. Secondly, the three-dimensional model is established by using visual sensing image technology, the action feature images are input in the order of recognition, and all actions are reconstructed and assembled to complete the action feature extraction of snowboard. The interference factors are filtered by laser, the middle part of the action image is extracted according to the common features of multiple images, and its definition is described. The movement change and moving distance are used to count the most features and clarity. Finally, the edge recognition effect of snowboard starting action image and the action recognition effect under multiple complex images are taken as experimental indexes. The results show that the method has a good effect on image edge extraction, the extraction effect is as high as 95%, and the accuracy is as high as 2.1%. In addition, under multiple complex images, the action feature recognition rate is also high, which can prove that the method studied has better accuracy in snowboard starting action feature extraction.
5. Ismaila, Folasade M., O. Adeolu Afolabi, W. Oladimeji Ismaila, and Oluwaseun O. Alo. "Performance Evaluation of Selected Feature Extraction Techniques in Digital Face Image Processing." Performance Evaluation of Selected Feature Extraction Techniques in Digital Face Image Processing 9, no. 1 (2024): 9. https://doi.org/10.5281/zenodo.10670429.

Abstract:
Digital image processing is the use of computer algorithms to analyze digital images. Digital image processing involves many processing stages, of which the feature extraction stage is important. Feature extraction involves reducing the number of resources required to describe a large set of data. However, choosing a feature extraction technique is a problem because of their deficiencies. Thus, this paper presents a comparative performance analysis of selected feature extraction techniques in human face images. 90 face images were acquired with three different poses viz: normal, angry and laughing. The face images were first pre-processed and then subjected to selected feature extraction techniques (Local binary pattern, Principal component analysis, Gabor filter and Linear discriminant analysis). The extracted features were then classified using a Backpropagation neural network. The results of recognition accuracy produced by Gabor filter, PCA, LDA and LBP at 0.76 threshold are 76.7%, 72.2%, 78.9% and 85.6%. Hence, it can be deduced that LBP performed the best among the four selected feature extraction techniques.
Keywords: Digital Image Processing, Feature Extraction, Local Binary Pattern, Principal Component Analysis, Gabor Filter, Linear Discriminant Analysis.
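Two of the compared descriptors, uniform LBP histograms and PCA projections, can be reproduced with scikit-image and scikit-learn. The following is a minimal sketch; the LBP radius, the histogram binning, and the 20 retained components are illustrative values, and the backpropagation classifier is omitted.

```python
# Minimal sketch of LBP histogram and PCA face features (illustrative settings).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

def lbp_histogram(gray, P=8, R=1.0):
    # gray: 2-D uint8 face image; uniform LBP yields codes in [0, P + 1]
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def pca_features(face_stack, n_components=20):
    # face_stack: array of shape (n_images, height, width), already aligned
    X = face_stack.reshape(len(face_stack), -1)
    return PCA(n_components=n_components).fit_transform(X)
```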
6. Apostolescu, Nicolae, and Dragos-Daniel Ion-Guta. "Image processing for feature detection and extraction." INCAS BULLETIN 16, no. 3 (2024): 3–18. http://dx.doi.org/10.13111/2066-8201.2024.16.3.1.

Abstract:
The present paper aims to conduct an experiment that compares different methods of detecting objects in images. Programs were developed to evaluate the efficiency of SURF, BRISK, MSER, and ORB object detection methods. Four static gray images with sufficiently different histograms were used. The experiment also highlighted the need for image preprocessing to improve feature extraction and detection. Thus, a programmed method for adjusting pixel groups was developed. This method proved useful when one of the listed algorithms failed to detect the object in the original image, but succeeded after adjustment. The effectiveness of detection methods and the evaluation of their performance depend on the application, image preparation, algorithms used, and their implementation. Results of the detection methods were presented numerically (similarities, gradients, distances, etc.) and graphically.
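Keypoint detection of the kind compared in this paper can be exercised with OpenCV. The sketch below covers ORB and BRISK only (SURF lives in opencv-contrib and is patented), and the brute-force Hamming matching with a fixed match cap is an assumed setup, not the experiment's protocol.

```python
# Sketch of keypoint detection and matching with ORB or BRISK (illustrative only).
import cv2

def detect_and_match(query_path, scene_path, detector="orb"):
    det = cv2.ORB_create() if detector == "orb" else cv2.BRISK_create()
    img1 = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)

    kp1, des1 = det.detectAndCompute(img1, None)
    kp2, des2 = det.detectAndCompute(img2, None)

    # Binary descriptors -> Hamming-distance brute-force matcher
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches[:50]                   # keep the strongest matches
```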
7. Vega-Rodriguez, M. A. "Review: Feature Extraction and Image Processing." Computer Journal 47, no. 2 (2004): 271–72. http://dx.doi.org/10.1093/comjnl/47.2.271-a.
8. Wang, Jucui, Mingzhi Li, Anton Dziatkovskii, Uladzimir Hryneuski, and Aleksandra Krylova. "Research on contour feature extraction method of multiple sports images based on nonlinear mechanics." Nonlinear Engineering 11, no. 1 (2022): 347–54. http://dx.doi.org/10.1515/nleng-2022-0037.

Abstract:
This article solves the issue of long extraction time and low extraction accuracy in traditional moving image contour feature extraction methods. Here authors have explored deformable active contour model to research the image processing technology in scientific research and the application of multiple sports and the method. A B-spline active contour model based on dynamic programming method is proposed in this article. This article proposes a method of using it to face image processing and extracting computed tomography (CT) image data to establish a three-dimensional model. The Lyapunov exponent, correlation dimension and approximate entropy of the nonlinear dynamics algorithm were used to extract the features of eight types of motor imagination electroencephalogram (EEG) signals. The results show that the success rate of pose reconstruction is more than 97% when the contour extraction quality is relatively ideal. The method is also robust to image noise, and the success rate of pose reconstruction can reach 94% when the video image has large noise. The execution efficiency is sub-linear, which can basically meet the requirements of real-time processing in video-based human posture reconstruction. The proposed method has a low error rate in the calculation of curvature features, effectively reduces the time for extracting contour features of moving images, and improves the accuracy of feature information extraction.
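As a simplified stand-in for the B-spline active contour model described above, the sketch below evolves scikit-image's classic snake from a circular initialization; the initial circle and the elasticity/rigidity weights are assumptions chosen only to illustrate the workflow.

```python
# Hedged sketch: classic active contour (snake) for contour extraction.
import numpy as np
from skimage import io, color, filters
from skimage.segmentation import active_contour

def extract_contour(path, center=(150, 200), radius=120):
    gray = color.rgb2gray(io.imread(path))
    smooth = filters.gaussian(gray, sigma=2)        # denoise before evolving the snake

    theta = np.linspace(0, 2 * np.pi, 400)
    init = np.column_stack([center[0] + radius * np.sin(theta),
                            center[1] + radius * np.cos(theta)])

    # alpha: elasticity, beta: rigidity, gamma: step size (all illustrative)
    return active_contour(smooth, init, alpha=0.015, beta=10, gamma=0.001)
```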
9. Li, Zhe, Xiao Han, Liya Wang, Tongyi Zhu, and Futian Yuan. "Feature Extraction and Image Retrieval of Landscape Images Based on Image Processing." Traitement du Signal 37, no. 6 (2020): 1009–18. http://dx.doi.org/10.18280/ts.370613.

Abstract:
Facing the existing digital image libraries on landscape, researchers need to urgently solve a challenging problem: how to realize rational management and accurate retrieval of landscape images that contain feature information like hierarchy, layout, color system, and color matching. For accurate organization and labeling of landscape Images, this paper presents a novel method for feature extraction and image retrieval of landscape images based on image processing. Firstly, a color quantization process was designed for landscape images, and used to analyze the color composition and color space pattern (CSP) of such images. Next, the existing methods, which are suitable for the extraction of color features from landscape Images, were briefly reviewed, and the basic flows of our improved algorithm and division method of landscape color blocks (LCBs) were explained. Finally, the retrieval performance of landscape images was improved by matching of weighted color blocks of regional landscape, based on the multi-dimensional color eigenvectors of landscape image. The experimental results demonstrate the effectiveness of our algorithm. The research results shed light on the feature extraction from other types of color images.
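The color-quantization step described above can be approximated with k-means clustering in RGB space. The sketch below, using scikit-learn, returns a palette plus per-color occupancy as a compact color signature; the cluster count, the pixel subsampling, and the concatenated output format are illustrative assumptions.

```python
# Hedged sketch: k-means color quantization as a compact color feature vector.
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

def color_signature(path, n_colors=8, sample=20000):
    pixels = io.imread(path).reshape(-1, 3).astype(float)   # assumes an RGB image
    rng = np.random.default_rng(0)
    subset = pixels[rng.choice(len(pixels), size=min(sample, len(pixels)), replace=False)]

    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(subset)
    labels = km.predict(pixels)
    weights = np.bincount(labels, minlength=n_colors) / len(labels)

    # Feature vector: dominant colors and the fraction of pixels assigned to each
    return np.concatenate([km.cluster_centers_.ravel(), weights])
```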
10. Rogayah, Rogayah, Waliya Rahmawanti, and Nur Azizah. "Colour-Based Extraction Methods for the Classification of Breast Milk (ASI)." CCIT Journal 14, no. 1 (2021): 21–27. http://dx.doi.org/10.33050/ccit.v14i1.966.

Abstract:
The development of cellular devices makes accessing information in the form of text or images easier. In line with the growing field of computer vision, various processes in image processing continue to increase. Image processing can be done by increasing image quality (image enhancement) and image recovery (image restoration). Feature extraction is divided into three types, namely form feature extraction, texture feature extraction, and color feature extraction. The application of color-based feature extraction methods has been widely used by researchers in the process of classification of various objects. This paper aims to review the technology that can be applied to image processing in a CBIR system with the object of breast milk so that it can measure the quality of breast milk based on its color.

Dissertations / Theses on the topic "Image Processing and Feature Extraction"

1. Ljumić, Elvis. "Image feature extraction using fuzzy morphology." Diss., State University of New York at Binghamton, 2007. Online access via UMI.

Abstract:
Thesis (Ph. D.)--State University of New York at Binghamton, Department of Systems Science and Industrial Engineering, Thomas J. Watson School of Engineering and Applied Science, 2007. Includes bibliographical references.
2. Lim, Suryani. "Feature extraction, browsing and retrieval of images." Monash University, School of Computing and Information Technology, 2005. http://arrow.monash.edu.au/hdl/1959.1/9677.
3. Lorentzon, Matilda. "Feature Extraction for Image Selection Using Machine Learning." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-142095.

Abstract:
During flights with manned or unmanned aircraft, continuous recording can result in a very high number of images to analyze and evaluate. To simplify image analysis and to minimize data link usage, appropriate images should be suggested for transfer and further analysis. This thesis investigates features used for selection of images worthy of further analysis using machine learning. The selection is done based on the criteria of having good quality, salient content and being unique compared to the other selected images. The investigation is approached by implementing two binary classifications, one regarding content and one regarding quality. The classifications are made using support vector machines. For each of the classifications three feature extraction methods are performed and the results are compared against each other. The feature extraction methods used are histograms of oriented gradients, features from the discrete cosine transform domain and features extracted from a pre-trained convolutional neural network. The images classified as both good and salient are then clustered based on similarity measures retrieved using color coherence vectors. One image from each cluster is retrieved and those are the resulting images from the image selection. The performance of the selection is evaluated using the measures precision, recall and accuracy. The investigation showed that using features extracted from the discrete cosine transform provided the best results for the quality classification. For the content classification, features extracted from a convolutional neural network provided the best results. The similarity retrieval showed to be the weakest part and the entire system together provides an average accuracy of 83.99%.
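One branch of the pipeline described above, discrete cosine transform features evaluated with a support vector machine, can be sketched as follows with SciPy and scikit-learn. The 8x8 low-frequency crop, the RBF kernel, and five-fold cross-validation are assumed settings, not the thesis configuration.

```python
# Minimal sketch: DCT-domain features plus an SVM classifier (illustrative settings).
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dct_features(gray, keep=8):
    # 2-D DCT of the whole image; keep the top-left (low-frequency) block
    coeffs = dctn(gray, norm="ortho")
    return coeffs[:keep, :keep].ravel()

def evaluate(images, labels):
    # images: iterable of 2-D grayscale arrays, labels: 0 = reject, 1 = keep
    X = np.array([dct_features(img) for img in images])
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    return cross_val_score(clf, X, labels, cv=5).mean()
```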
4. Nilsson, Mikael. "On feature extraction and classification in speech and image processing." Karlskrona: Department of Signal Processing, School of Engineering, Blekinge Institute of Technology, 2007. http://www.bth.se/fou/forskinfo.nsf/allfirst2/fcbe16e84a9ba028c12573920048bce9?OpenDocument.
5. Lee, Kai-wah. "Mesh denoising and feature extraction from point cloud data." E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B42664330.
6. Lee, Kai-wah, and 李啟華. "Mesh denoising and feature extraction from point cloud data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42664330.
7. Rees, Stephen John. "Feature extraction and object recognition using conditional morphological operators." Thesis, University of South Wales, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265731.
8. Shi, Qiquan. "Low rank tensor decomposition for feature extraction and tensor recovery." HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/549.

Abstract:
Feature extraction and tensor recovery problems are important yet challenging, particularly for multi-dimensional data with missing values and/or noise. Low-rank tensor decomposition approaches are widely used for solving these problems. This thesis focuses on three common tensor decompositions (CP, Tucker and t-SVD) and develops a set of decomposition-based approaches. The proposed methods aim to extract low-dimensional features from complete/incomplete data and recover tensors given partial and/or grossly corrupted observations.

Based on CP decomposition, semi-orthogonal multilinear principal component analysis (SO-MPCA) seeks a tensor-to-vector projection that maximizes the captured variance with the orthogonality constraint imposed in only one mode, and it further integrates the relaxed start strategy (SO-MPCA-RS) to achieve better feature extraction performance. To directly obtain the features from incomplete data, low-rank CP and Tucker decomposition with feature variance maximization (TDVM-CP and TDVM-Tucker) are proposed. TDVM methods explore the relationship among tensor samples via feature variance maximization, while estimating the missing entries via low-rank CP and Tucker approximation, leading to informative features extracted directly from partial observations. TDVM-CP extracts low-dimensional vector features viewing the weight vectors as features and TDVM-Tucker learns low-dimensional tensor features viewing the core tensors as features. TDVM methods can be generalized to other variants based on other tensor decompositions. On the other hand, this thesis solves the missing data problem by introducing low-rank matrix/tensor completion methods, and also contributes to automatic rank estimation. Rank-one matrix decomposition coupled with L1-norm regularization (L1MC) addresses the matrix rank estimation problem. With the correct estimated rank, L1MC refines its model without L1-norm regularization (L1MC-RF) and achieve optimal recovery results given enough observations. In addition, CP-based nuclear norm regularized orthogonal CP decomposition (TREL1) solves the challenging CP- and Tucker-rank estimation problems. The estimated rank can improve the tensor completion accuracy of existing decomposition-based methods. Furthermore, tensor singular value decomposition (t-SVD) combined with tensor nuclear norm (TNN) regularization (ARE_TNN) provides automatic tubal-rank estimation. With the accurate tubal-rank determination, ARE_TNN relaxes its model without the TNN constraint (TC-ARE) and results in optimal tensor completion under mild conditions. In addition, ARE_TNN refines its model by explicitly utilizing its determined tubal-rank a priori and then successfully recovers low-rank tensors based on incomplete and/or grossly corrupted observations (RTC-ARE: robust tensor completion/RTPCA-ARE: robust tensor principal component analysis).

Experiments and evaluations are presented and analyzed using synthetic data and real-world images/videos in machine learning, computer vision, and data mining applications. For feature extraction, the experimental results of face and gait recognition show that SO-MPCA-RS achieves the best overall performance compared with competing algorithms, and its relaxed start strategy is also effective for other CP-based PCA methods.
In the applications of face recognition, object/action classification, and face/gait clustering, TDVM methods not only stably yield similar good results under various multi-block missing settings and different parameters in general, but also outperform the competing methods with significant improvements. For matrix/tensor rank estimation and recovery, L1MC-RF efficiently estimates the true rank and exactly recovers the incomplete images/videos under mild conditions, and outperforms the state-of-the-art algorithms on the whole. Furthermore, the empirical evaluations show that TREL1 correctly determines the CP-/Tucker- ranks well, given sufficient observed entries, which consistently improves the recovery performance of existing decomposition-based tensor completion. The t-SVD recovery methods TC-ARE, RTPCA-ARE, and RTC-ARE not only inherit the ability of ARE_TNN to achieve accurate rank estimation, but also achieve good performance in the tasks of (robust) image/video completion, video denoising, and background modeling. This outperforms the state-of-the-art methods in all cases we have tried so far with significant improvements.
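The thesis works with CP, Tucker, and t-SVD models; as a much simpler, hedged stand-in, the sketch below extracts low-dimensional sample features by truncating the SVD of a mode-1 unfolding of the data tensor. The unfolding mode and the rank are assumed, and missing-value handling is omitted entirely.

```python
# Simplified stand-in for low-rank tensor feature extraction via truncated SVD.
import numpy as np

def unfold(tensor, mode):
    # Matricize the tensor along the given mode
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def low_rank_features(tensor, rank=10):
    # tensor: e.g. (n_samples, height, width); rows of the unfolding are samples
    X = unfold(tensor, 0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank] * s[:rank]       # per-sample coordinates in the rank-r subspace
```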
9. Marrugo Hernández, Andrés G. (Andrés Guillermo). "Comprehensive retinal image analysis: image processing and feature extraction techniques oriented to the clinical task." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/134698.

Abstract:
Medical digital imaging has become a key element of modern health care procedures. It provides a visual documentation, a permanent record for the patients, and most importantly the ability to extract information about many diseases. Ophthalmology is a field that is heavily dependent on the analysis of digital images because they can aid in establishing an early diagnosis even before the first symptoms appear. This dissertation contributes to the digital analysis of such images and the problems that arise along the imaging pipeline, a field that is commonly referred to as retinal image analysis. We have dealt with and proposed solutions to problems that arise in retinal image acquisition and longitudinal monitoring of retinal disease evolution. Specifically, non-uniform illumination, poor image quality, automated focusing, and multichannel analysis. However, there are many unavoidable situations in which images of poor quality, like blurred retinal images because of aberrations in the eye, are acquired. To address this problem we have proposed two approaches for blind deconvolution of blurred retinal images. In the first approach, we consider the blur to be space-invariant and later in the second approach we extend the work and propose a more general space-variant scheme. For the development of the algorithms we have built preprocessing solutions that have enabled the extraction of retinal features of medical relevancy, like the segmentation of the optic disc and the detection and visualization of longitudinal structural changes in the retina. Encouraging experimental results carried out on real retinal images coming from the clinical setting demonstrate the applicability of our proposed solutions.
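The thesis addresses blind deconvolution, where the blur kernel is unknown. As a simpler illustration of the deblurring step alone, the sketch below runs non-blind Richardson-Lucy from scikit-image with an assumed Gaussian point-spread function; the PSF size, sigma, and iteration count are guesses for demonstration.

```python
# Hedged sketch: non-blind Richardson-Lucy deblurring with an assumed Gaussian PSF.
import numpy as np
from skimage import io, color, img_as_float
from skimage.restoration import richardson_lucy

def gaussian_psf(size=9, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def deblur_retina(path, iterations=30):
    gray = img_as_float(color.rgb2gray(io.imread(path)))
    # Iteration count passed positionally for compatibility across skimage versions
    return richardson_lucy(gray, gaussian_psf(), iterations)
```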
10. Wang, Yuanxun. "Radar signature prediction and feature extraction using advanced signal processing techniques." 1999. Digital version accessible at http://wwwlib.umi.com/cr/utexas/main.

Books on the topic "Image Processing and Feature Extraction"

1. Aguado, Alberto S., ed. Feature extraction and image processing. Newnes, 2002.
2. Nixon, Mark. Feature Extraction & Image Processing for Computer Vision. 3rd ed. Elsevier Science, 2012.

3. Rand, Robert S. Texture analysis and cartographic feature extraction. U.S. Army Corps of Engineers, Engineer Topographic Laboratories, 1985.

4. United States. National Aeronautics and Space Administration, ed. 3D feature extraction for unstructured grids. National Aeronautics and Space Administration, 1996.
5. Landgrebe, D. A., and United States. National Aeronautics and Space Administration, eds. On-line object feature extraction for multispectral scene representation. School of Electrical Engineering, Purdue University; Washington, DC, 1988.
6. Puig, Luis. Omnidirectional Vision Systems: Calibration, Feature Extraction and 3D Information. Springer London, 2013.

7. Chaki, Jyotismita, and Nilanjan Dey. Image Color Feature Extraction Techniques. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-5761-3.

8. Kayargadde, Vishwakumara. Feature extraction for image quality prediction. Eindhoven University of Technology, 1995.

9. Samantaray, Aswini Kumar, and Amol D. Rahulkar. Feature Extraction in Medical Image Retrieval. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57279-1.

Book chapters on the topic "Image Processing and Feature Extraction"

1. Awcock, G. J., and R. Thomas. "Feature Extraction." In Applied Image Processing. Macmillan Education UK, 1995. http://dx.doi.org/10.1007/978-1-349-13049-8_6.

2. Tschumperlé, David, Christophe Tilmant, and Vincent Barra. "Feature Extraction." In Digital Image Processing with C++. CRC Press, 2023. http://dx.doi.org/10.1201/9781003323693-6.

3. Suganya, R., S. Rajaram, and A. Sheik Abdullah. "Texture Feature Extraction." In Big Data in Medical Image Processing. CRC Press, 2018. http://dx.doi.org/10.1201/b22456-4.

4. Umbaugh, Scott E. "Feature Extraction and Analysis." In Digital Image Processing and Analysis, 4th ed. CRC Press, 2022. http://dx.doi.org/10.1201/9781003221135-6.

5. Traina, Agma J. M., Caetano Traina, André G. R. Balan, et al. "Feature Extraction and Selection for Decision Making." In Biomedical Image Processing. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15816-2_8.

6. Coleman, Sonya, Bryan Scotney, and Bryan Gardiner. "Biologically Motivated Feature Extraction." In Image Analysis and Processing – ICIAP 2011. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24085-0_62.

7. Camps-Valls, Gustavo, Devis Tuia, Luis Gómez-Chova, Sandra Jiménez, and Jesús Malo. "Remote Sensing Feature Selection and Extraction." In Remote Sensing Image Processing. Springer International Publishing, 2012. http://dx.doi.org/10.1007/978-3-031-02247-0_3.

8. Gong, Shengrong, Chunping Liu, Yi Ji, Baojiang Zhong, Yonggang Li, and Husheng Dong. "Feature Extraction and Representation." In Advanced Image and Video Processing Using MATLAB. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77223-3_4.

9. Tamililakkiya, V., and K. Vani. "Feature Extraction from Lunar Images." In Advances in Digital Image Processing and Information Technology. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24055-3_4.

10. Chessa, Manuela, and Fabio Solari. "Local Feature Extraction in Log-Polar Images." In Image Analysis and Processing — ICIAP 2015. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23231-7_37.

Conference papers on the topic "Image Processing and Feature Extraction"

1. Arslan, Janan, Henri Chhoa, Ines Khemir, et al. "PyCellMech: a shape-based feature extraction pipeline for use in medical and biological studies." In Image Processing, edited by Olivier Colliot and Jhimli Mitra. SPIE, 2025. https://doi.org/10.1117/12.3047205.

2. Manohar, Ashish, Dulam Sneha, Kunal Sakhuja, Tanya R. Dwivedii, and C. Gururaj. "Drone based image processing through feature extraction." In 2017 2nd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT). IEEE, 2017. http://dx.doi.org/10.1109/rteict.2017.8256577.

3. Wang, Zuyuan, Ruedi Boesch, and Christian Ginzler. "Feature Extraction for Forest Inventory." In 2008 Congress on Image and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/cisp.2008.736.

4. Li, Tiejun, Yanli Wang, Zhe Chen, and Renxiang Wang. "Linear feature extraction for infrared image." In Multispectral Image Processing and Pattern Recognition, edited by Tianxu Zhang, Bir Bhanu, and Ning Shu. SPIE, 2001. http://dx.doi.org/10.1117/12.441474.

5. Jaiswal, Rachana, and Srikant Satarkar. "Biometric Foetal Contour Extraction using Hybrid Level Set." In 6th International Conference on Signal and Image Processing (SIGI 2020). AIRCC Publishing Corporation, 2020. http://dx.doi.org/10.5121/csit.2020.102002.

Abstract:
In medical imaging, accurate anatomical structure extraction is important for diagnosis and therapeutic interventional planning. So, for easier, quicker and accurate diagnosis of medical images, image processing technologies may be employed in analysis and feature extraction of medical images. In this paper, some modifications to the level set algorithm are made and the modified algorithm is used for extracting the contour of foetal objects in an image. The proposed approach is applied on foetal ultrasound images. In the traditional approach, foetal parameters are extracted manually from ultrasound images. Due to lack of consistency and accuracy of manual measurements, an automatic technique is highly desirable to obtain foetal biometric measurements. This proposed approach is based on global and local region information for foetal contour extraction from ultrasonic images. The primary goal of this research is to provide a new methodology to aid the analysis and feature extraction from foetal images.
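A region-based level set such as scikit-image's morphological Chan-Vese can serve as a simplified stand-in for the hybrid level set proposed above. In the sketch below the iteration count, smoothing factor, and checkerboard initialization are assumptions; the paper's global-plus-local energy terms are not reproduced.

```python
# Hedged sketch: region-based level-set segmentation with morphological Chan-Vese.
from skimage import io, color, img_as_float
from skimage.segmentation import morphological_chan_vese

def foetal_contour_mask(path, iterations=60):
    gray = img_as_float(color.rgb2gray(io.imread(path)))
    # Iteration count passed positionally to stay compatible across skimage versions
    return morphological_chan_vese(gray, iterations,
                                   init_level_set="checkerboard", smoothing=3)
```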
6. Tang, Dejun, Weishi Zhang, Xiaolu Qu, and Dujuan Wang. "A feature fusion method for feature extraction." In Fourth International Conference on Digital Image Processing (ICDIP 2012), edited by Mohamed Othman, Sukumar Senthilkumar, and Xie Yi. SPIE, 2012. http://dx.doi.org/10.1117/12.946076.

7. Xu, Jun, and Dapeng Wu. "Ripplet-II transform for feature extraction." In Visual Communications and Image Processing 2010, edited by Pascal Frossard, Houqiang Li, Feng Wu, Bernd Girod, Shipeng Li, and Guo Wei. SPIE, 2010. http://dx.doi.org/10.1117/12.863013.

8. Kokiopoulou, Effrosyni, and Pascal Frossard. "Pattern Detection by Distributed Feature Extraction." In 2006 International Conference on Image Processing. IEEE, 2006. http://dx.doi.org/10.1109/icip.2006.313119.

9. Wijethilake, Navodini, Steve Connor, Anna Oviedova, et al. "A clinical guideline driven automated linear feature extraction for vestibular schwannoma." In Image Processing, edited by Olivier Colliot and Jhimli Mitra. SPIE, 2024. http://dx.doi.org/10.1117/12.3006526.

10. Ankışhan, Haydar, and Derya Yılmaz. "Feature Extraction and Classification of Snore Related Sounds." In Signal and Image Processing. ACTAPRESS, 2011. http://dx.doi.org/10.2316/p.2011.759-095.

Reports on the topic "Image Processing and Feature Extraction"

1. Asari, Vijayan, Paheding Sidike, Binu Nair, Saibabu Arigela, Varun Santhaseelan, and Chen Cui. PR-433-133700-R01 Pipeline Right-of-Way Automated Threat Detection by Advanced Image Analysis. Pipeline Research Council International, Inc. (PRCI), 2015. http://dx.doi.org/10.55274/r0010891.

Abstract:
A novel algorithmic framework for the robust detection and classification of machinery threats and other potentially harmful objects intruding onto a pipeline right-of-way (ROW) is designed from three perspectives: visibility improvement, context-based segmentation, and object recognition/classification. In the first part of the framework, an adaptive image enhancement algorithm is utilized to improve the visibility of aerial imagery to aid in threat detection. In this technique, a nonlinear transfer function is developed to enhance the processing of aerial imagery with extremely non-uniform lighting conditions. In the second part of the framework, the context-based segmentation is developed to eliminate regions from imagery that are not considered to be a threat to the pipeline. Context based segmentation makes use of a cascade of pre-trained classifiers to search for regions that are not threats. The context based segmentation algorithm accelerates threat identification and improves object detection rates. The last phase of the framework is an efficient object detection model. Efficient object detection follows a three-stage approach which includes extraction of the local phase in the image and the use of local phase characteristics to locate machinery threats. The local phase is an image feature extraction technique which partially removes the lighting variance and preserves the edge information of the object. Multiple orientations of the same object are matched and the correct orientation is selected using feature matching by histogram of local phase in a multi-scale framework. The classifier outputs locations of threats to the pipeline. The advanced automatic image analysis system is intended to be capable of detecting construction equipment along the ROW of pipelines with a very high degree of accuracy in comparison with manual threat identification by a human analyst.
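The first stage of the framework improves visibility under non-uniform lighting. As a hedged stand-in for the report's nonlinear transfer function, the sketch below applies contrast-limited adaptive histogram equalization from scikit-image; the clip limit is an assumed value.

```python
# Hedged sketch: CLAHE as a generic visibility-improvement step for aerial imagery.
from skimage import io, exposure, img_as_float

def enhance_aerial_image(path, clip_limit=0.02):
    img = img_as_float(io.imread(path))
    # CLAHE equalizes contrast locally, lifting detail in dark, unevenly lit regions
    return exposure.equalize_adapthist(img, clip_limit=clip_limit)
```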
2. Blundell, S. Tutorial: the DEM Breakline and Differencing Analysis Tool—step-by-step workflows and procedures for effective gridded DEM analysis. Engineer Research and Development Center (U.S.), 2022. http://dx.doi.org/10.21079/11681/46085.

Abstract:
The DEM Breakline and Differencing Analysis Tool is the result of a multi-year research effort in the analysis of digital elevation models (DEMs) and the extraction of features associated with breaklines identified on the DEM by numerical analysis. Developed in the ENVI/IDL image processing application, the tool is designed to serve as an aid to research in the investigation of DEMs by taking advantage of local variation in the height. A set of specific workflow exercises is described as applied to a diverse set of four sample DEMs. These workflows instruct the user in applying the tool to extract and analyze features associated with terrain, vegetative canopy, and built structures. Optimal processing parameter choices, subject to user modification, are provided along with sufficient explanation to train the user in elevation model analysis through the creation of customized output overlays.
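The kind of breakline analysis the tool automates can be approximated with finite-difference slope computations on a gridded DEM. The sketch below flags cells where slope changes abruptly; the cell size and the threshold are assumed example values, not the tool's parameters.

```python
# Hedged sketch: flag breakline candidates on a gridded DEM from slope discontinuities.
import numpy as np

def breakline_mask(dem, cell_size=1.0, threshold_deg=30.0):
    # dem: 2-D array of elevations on a regular grid
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    # A breakline candidate is a cell where the slope itself changes sharply
    ds_dy, ds_dx = np.gradient(slope)
    return np.hypot(ds_dx, ds_dy) > threshold_deg
```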
3. Jagler, Karl B. Wavelet Signal Processing for Transient Feature Extraction. Defense Technical Information Center, 1992. http://dx.doi.org/10.21236/ada250519.
4. Ling, Hao. Radar Image Enhancement, Feature Extraction and Motion Compensation Using Joint Time-Frequency Techniques. Defense Technical Information Center, 2001. http://dx.doi.org/10.21236/ada390630.

5. Ling, Hao. Annual Report on Radar Image Enhancement, Feature Extraction and Motion Compensation Using Joint Time-Frequency Techniques. Defense Technical Information Center, 2000. http://dx.doi.org/10.21236/ada377783.

6. Trujillo, Sharon, Zachary Parks, and Lakshman Prasad. Advanced Study - Change, Anomaly and Feature Extraction with RADIUS for Image-based ISR: CRADA Final Report. Office of Scientific and Technical Information (OSTI), 2016. http://dx.doi.org/10.2172/1330824.

7. Trujillo, Sharon, Zachary Parks, and Lakshman Prasad. Change, Anomaly and Feature Extraction with RADIUS for Image-based ISR (Intelligence, Surveillance, and Reconnaissance): CRADA Final Report. Office of Scientific and Technical Information (OSTI), 2016. http://dx.doi.org/10.2172/1330811.

8. Blundell, S. Micro-terrain and canopy feature extraction by breakline and differencing analysis of gridded elevation models: identifying terrain model discontinuities with application to off-road mobility modeling. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/40185.

Abstract:
Elevation models derived from high-resolution airborne lidar scanners provide an added dimension for identification and extraction of micro-terrain features characterized by topographic discontinuities or breaklines. Gridded digital surface models created from first-return lidar pulses are often combined with lidar-derived bare-earth models to extract vegetation features by model differencing. However, vegetative canopy can also be extracted from the digital surface model alone through breakline analysis by taking advantage of the fine-scale changes in slope that are detectable in high-resolution elevation models of canopy. The identification and mapping of canopy cover and micro-terrain features in areas of sparse vegetation is demonstrated with an elevation model for a region of western Montana, using algorithms for breaklines, elevation differencing, slope, terrain ruggedness, and breakline gradient direction. These algorithms were created at the U.S. Army Engineer Research Center – Geospatial Research Laboratory (ERDC-GRL) and can be accessed through an in-house tool constructed in the ENVI/IDL environment. After breakline processing, products from these algorithms are brought into a Geographic Information System as analytical layers and applied to a mobility routing model, demonstrating the effect of breaklines as obstacles in the calculation of optimal, off-road routes. Elevation model breakline analysis can serve as significant added value to micro-terrain feature and canopy mapping, obstacle identification, and route planning.
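The differencing step described above, subtracting a bare-earth model from a first-return surface model to isolate canopy, reduces to a few NumPy operations. The 2 m vegetation threshold in the sketch is an assumed example, not the report's setting.

```python
# Minimal sketch: canopy height model and vegetation mask from DSM - DTM differencing.
import numpy as np

def canopy_mask(dsm, dtm, min_height=2.0):
    chm = dsm - dtm                     # canopy height model
    chm = np.clip(chm, 0, None)         # ignore negative artifacts
    return chm, chm > min_height        # heights plus a boolean vegetation mask
```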
9. Ferdaus, Md Meftahul, Mahdi Abdelguerfi, Elias Ioup, et al. KANICE: Kolmogorov-Arnold networks with interactive convolutional elements. Engineer Research and Development Center (U.S.), 2025. https://doi.org/10.21079/11681/49791.

Abstract:
We introduce KANICE, a novel neural architecture that combines Convolutional Neural Networks (CNNs) with Kolmogorov-Arnold Network (KAN) principles. KANICE integrates Interactive Convolutional Blocks (ICBs) and KAN linear layers into a CNN framework. This leverages KANs’ universal approximation capabilities and ICBs’ adaptive feature learning. KANICE captures complex, non-linear data relationships while enabling dynamic, context-dependent feature extraction based on the Kolmogorov-Arnold representation theorem. We evaluated KANICE on four datasets: MNIST, Fashion-MNIST, EMNIST, and SVHN, comparing it against standard CNNs, CNN-KAN hybrids, and ICB variants. KANICE consistently outperformed baseline models, achieving 99.35% accuracy on MNIST and 90.05% on the SVHN dataset. Furthermore, we introduce KANICE-mini, a compact variant designed for efficiency. A comprehensive ablation study demonstrates that KANICE-mini achieves comparable performance to KANICE with significantly fewer parameters. KANICE-mini reached 90.00% accuracy on SVHN with 2,337,828 parameters, compared to KAN-ICE’s 25,432,000. This study highlights the potential of KAN-based architectures in balancing performance and computational efficiency in image classification tasks. Our work contributes to research in adaptive neural networks, integrates mathematical theorems into deep learning architectures, and explores the trade-offs between model complexity and performance, advancing computer vision and pattern recognition. The source code for this paper is publicly accessible through our GitHub repository (https://github.com/mferdaus/kanice).
10. Mbani, Benson, Timm Schoening, and Jens Greinert. Automated and Integrated Seafloor Classification Workflow (AI-SCW). GEOMAR, 2023. http://dx.doi.org/10.3289/sw_2_2023.

Abstract:
The Automated and Integrated Seafloor Classification Workflow (AI-SCW) is a semi-automated underwater image processing pipeline that has been customized for use in classifying the seafloor into semantic habitat categories. The current implementation has been tested against a sequence of underwater images collected by the Ocean Floor Observation System (OFOS), in the Clarion-Clipperton Zone of the Pacific Ocean. Despite this, the workflow could also be applied to images acquired by other platforms such as an Autonomous Underwater Vehicle (AUV), or Remotely Operated Vehicle (ROV). The modules in AI-SCW have been implemented using the python programming language, specifically using libraries such as scikit-image for image processing, scikit-learn for machine learning and dimensionality reduction, keras for computer vision with deep learning, and matplotlib for generating visualizations. Therefore, AI-SCW modularized implementation allows users to accomplish a variety of underwater computer vision tasks, which include: detecting laser points from the underwater images for use in scale determination; performing contrast enhancement and color normalization to improve the visual quality of the images; semi-automated generation of annotations to be used downstream during supervised classification; training a convolutional neural network (Inception v3) using the generated annotations to semantically classify each image into one of pre-defined seafloor habitat categories; evaluating sampling strategies for generation of balanced training images to be used for fitting an unsupervised k-means classifier; and visualization of classification results in both feature space view and in map view geospatial co-ordinates. Thus, the workflow is useful for a quick but objective generation of image-based seafloor habitat maps to support monitoring of remote benthic ecosystems.