To see the other types of publications on this topic, follow the link: Binary vector quantization (BVQ).

Journal articles on the topic 'Binary vector quantization (BVQ)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Binary vector quantization (BVQ).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Liquan, Zhaofa Chen, Tianyu Lu, and Aiqun Hu. "A Physical Layer Key Generation Scheme Based on Deep Learning Compensation and Balanced Vector Quantization." Security and Communication Networks 2023 (April 8, 2023): 1–14. http://dx.doi.org/10.1155/2023/4911338.

Full text
Abstract:
Channel reciprocity is the foundation for physical layer key generation, and it is degraded by noise, hardware impairments, and synchronization offsets. Weak channel reciprocity results in a high key disagreement rate (KDR). Existing solutions for improving channel reciprocity cannot achieve satisfactory performance improvements. Furthermore, existing quantization algorithms generally quantize one-dimensional channel features to generate secret keys, which cannot fully utilize the channel information, while multidimensional vector quantization techniques still need improvement in terms of randomness and time complexity. This paper proposes a physical layer key generation scheme based on deep learning and balanced vector quantization. Specifically, we build a channel reciprocity compensation network (CRCNet) to learn the mapping relationship between Alice's and Bob's channel measurements. Alice compensates for channel measurements via the trained CRCNet to reduce channel measurement errors between legitimate users and enhance channel reciprocity. We also propose a balanced vector quantization algorithm based on integer linear programming (ILP-BVQ). ILP-BVQ reduces the time complexity of quantization while ensuring key randomness and a low KDR. Simulation results show that the proposed CRCNet performs better in terms of channel reciprocity and KDR, while the proposed ILP-BVQ algorithm reduces time consumption and improves key randomness.
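As a rough illustration of the balanced-quantization idea behind schemes like ILP-BVQ (equal-occupancy quantization cells so that key bits come out unbiased, with Gray coding to limit bit disagreements), here is a minimal NumPy sketch under those assumptions; it uses simple quantile thresholds rather than the paper's integer-linear-programming formulation, and all names are hypothetical.

```python
import numpy as np

def balanced_binary_quantize(measurements, bits_per_sample=2):
    """Quantize channel measurements into key bits using quantile
    (equal-occupancy) thresholds, so every cell is hit about equally
    often and the resulting bit stream is roughly unbiased.

    Illustrative stand-in only: the paper's ILP-BVQ solves an integer
    linear program over multidimensional vectors instead.
    """
    levels = 2 ** bits_per_sample
    # Equal-probability cell boundaries estimated from the data themselves.
    edges = np.quantile(measurements, np.linspace(0, 1, levels + 1)[1:-1])
    cells = np.digitize(measurements, edges)            # cell index per sample
    # Gray-code the cell index so neighbouring cells differ in one bit,
    # which helps keep the key disagreement rate low under small mismatches.
    gray = cells ^ (cells >> 1)
    bit_positions = np.arange(bits_per_sample - 1, -1, -1)
    return ((gray[:, None] >> bit_positions) & 1).ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h = rng.normal(size=1000)                           # toy channel measurements
    key = balanced_binary_quantize(h, bits_per_sample=2)
    print("bit bias:", key.mean())                      # close to 0.5 -> balanced
```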
APA, Harvard, Vancouver, ISO, and other styles
2

Ku, Ning-Yun, Shun-Chieh Chang, and Sha-Hwa Hwang. "Binary Search Vector Quantization." AASRI Procedia 8 (2014): 112–17. http://dx.doi.org/10.1016/j.aasri.2014.08.019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kwak, Nae Joung, Soung Pil Ryu, Heak Bong Kwon, and Jae Hyeong Ahn. "The Improved Binary Tree Vector Quantization Using Spatial Sensitivity of HVS." Key Engineering Materials 277-279 (January 2005): 254–58. http://dx.doi.org/10.4028/www.scientific.net/kem.277-279.254.

Full text
Abstract:
In this paper, we propose an improved binary tree vector quantization that gives special consideration to spatial sensitivity, an important characteristic of the human visual system (HVS). We treat spatial sensitivity as a function of the HVS computed from variations of the three primary colors in blocks of the input image. In addition, we apply a weight derived from HVS spatial sensitivity to the eigenvector-based node-splitting step of binary tree vector quantization. Test results show that the proposed method provides better visual quality and a higher PSNR than conventional methods.
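For readers unfamiliar with binary tree vector quantization, the sketch below shows the standard eigenvector-based node split that such methods build on: a node's training vectors are divided by the hyperplane through their centroid orthogonal to the principal eigenvector of their covariance. It is a generic illustration only; the HVS-derived perceptual weighting proposed in the paper is not modeled.

```python
import numpy as np

def split_node(vectors):
    """Split one node's training vectors into two children along the
    principal eigenvector of their covariance matrix, as in classical
    binary tree-structured VQ. (Sketch only; the paper additionally
    weights this step with an HVS spatial-sensitivity term.)"""
    centroid = vectors.mean(axis=0)
    centered = vectors - centroid
    cov = centered.T @ centered / len(vectors)
    eigvals, eigvecs = np.linalg.eigh(cov)             # symmetric -> eigh
    principal = eigvecs[:, -1]                         # largest-eigenvalue direction
    mask = centered @ principal >= 0                   # which side of the hyperplane
    return vectors[mask], vectors[~mask]

def build_tree_codebook(vectors, depth):
    """Recursively split nodes to `depth`; leaf centroids form the codebook."""
    if depth == 0 or len(vectors) < 2:
        return [vectors.mean(axis=0)]
    left, right = split_node(vectors)
    if len(left) == 0 or len(right) == 0:              # degenerate split, stop here
        return [vectors.mean(axis=0)]
    return build_tree_codebook(left, depth - 1) + build_tree_codebook(right, depth - 1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    blocks = rng.random((500, 12))                     # toy 4-pixel RGB block vectors
    codebook = np.array(build_tree_codebook(blocks, depth=3))
    print(codebook.shape)                              # up to 2**3 = 8 codewords
```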
APA, Harvard, Vancouver, ISO, and other styles
4

Wu, Xiaolin. "Optimal binary vector quantization via enumeration of covering codes." IEEE Transactions on Information Theory 43, no. 2 (1997): 638–45. http://dx.doi.org/10.1109/18.556119.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hameed, Maha A. "An algorithm for binary codebook design based on the average bitmap replacement error (ABPRE)." Baghdad Science Journal 8, no. 2 (2011): 684–88. http://dx.doi.org/10.21123/bsj.8.2.684-688.

Full text
Abstract:
In this paper, an algorithm for binary codebook design is used within a vector quantization technique to improve the acceptability of the absolute moment block truncation coding (AMBTC) method. Vector quantization (VQ) is used to compress the bitmap output of the first stage (AMBTC). The binary codebook can be generated for many images by randomly choosing code vectors from a set of binary image vectors, and this codebook is then used to compress all bitmaps of these images. Whether the bitmap of an image should be compressed with this codebook is decided using the criterion of the average bitmap replacement error (ABPRE). The approach reduces bit rates (increases compression ratios) with little loss of performance (PSNR).
APA, Harvard, Vancouver, ISO, and other styles
6

Hameed, Maha A. "An algorithm for binary codebook design based on the average bitmap replacement error (ABPRE)." Baghdad Science Journal 8, no. 2 (2011): 684–88. http://dx.doi.org/10.21123/bsj.2011.8.2.684-688.

Full text
Abstract:
In this paper, an algorithm for binary codebook design is used within a vector quantization technique to improve the acceptability of the absolute moment block truncation coding (AMBTC) method. Vector quantization (VQ) is used to compress the bitmap output of the first stage (AMBTC). The binary codebook can be generated for many images by randomly choosing code vectors from a set of binary image vectors, and this codebook is then used to compress all bitmaps of these images. Whether the bitmap of an image should be compressed with this codebook is decided using the criterion of the average bitmap replacement error (ABPRE). The approach reduces bit rates (increases compression ratios) with little loss of performance (PSNR).
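To make the bitmap-replacement idea concrete, here is a small, hypothetical sketch: each AMBTC bitmap block is replaced by the nearest binary codeword in Hamming distance, and an ABPRE-style figure of merit is the average fraction of flipped bits. The codebook is drawn at random from the training bitmaps, as the abstract describes, but the rest of the paper's procedure is not reproduced.

```python
import numpy as np

def nearest_codeword(bitmap_block, codebook):
    """Return the binary codeword closest to the block in Hamming distance."""
    hamming = np.count_nonzero(codebook != bitmap_block, axis=1)
    return codebook[np.argmin(hamming)], hamming.min()

def average_bitmap_replacement_error(bitmaps, codebook):
    """Mean fraction of bits that change when every bitmap block is
    replaced by its nearest codeword (an ABPRE-style figure of merit)."""
    total_flips = sum(nearest_codeword(b, codebook)[1] for b in bitmaps)
    return total_flips / (len(bitmaps) * bitmaps.shape[1])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    bitmaps = rng.integers(0, 2, size=(1000, 16))      # 4x4 AMBTC bitmaps, flattened
    # Codebook drawn at random from the training bitmaps, as in the abstract.
    codebook = bitmaps[rng.choice(len(bitmaps), size=32, replace=False)]
    print("ABPRE-style error:", average_bitmap_replacement_error(bitmaps, codebook))
```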
APA, Harvard, Vancouver, ISO, and other styles
7

Lei, Shi. "Deburr Algorithm of Binary Image Based on Outline Trace." Advanced Materials Research 811 (September 2013): 422–25. http://dx.doi.org/10.4028/www.scientific.net/amr.811.422.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sari, Jayanti Yusmah, and Rizal Adi Saputra. "Pengenalan Finger Vein Menggunakan Local Line Binary Pattern dan Learning Vector Quantization." Jurnal ULTIMA Computing 9, no. 2 (2018): 52–57. http://dx.doi.org/10.31937/sk.v9i2.790.

Full text
Abstract:
This research proposes a finger vein recognition system using the Local Line Binary Pattern (LLBP) method and Learning Vector Quantization (LVQ). LLBP is an advanced feature extraction method derived from the Local Binary Pattern (LBP) method that uses a combination of binary values from neighborhood pixels to form image features. Because the straight-line shape of LLBP can extract robust features from images with unclear veins, it is well suited to capturing the vein pattern in finger vein images. At the recognition stage, LVQ is used as the classification method to improve recognition accuracy, as earlier studies have shown it to give better results than other classifiers. The three main stages in this research are preprocessing, feature extraction using the LLBP method, and recognition using LVQ. The proposed methodology has been tested on the SDUMLA-HMT finger vein image database from Shandong University. The experiments show that the proposed methodology can achieve an accuracy of up to 90%.
 Index Terms—finger vein recognition, Learning Vector Quantization, LLBP, Local Line Binary Pattern, LVQ.
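Several entries in this list, including this one, use Learning Vector Quantization as the classifier. As a quick generic reference, here is a minimal LVQ1 training and prediction sketch in plain NumPy; the prototype counts, learning rate, and toy data are illustrative assumptions, not the configuration of any cited paper.

```python
import numpy as np

def train_lvq1(X, y, prototypes_per_class=2, lr=0.05, epochs=30, seed=0):
    """Classic LVQ1: pull the winning prototype toward same-class samples,
    push it away from other-class samples."""
    rng = np.random.default_rng(seed)
    protos, proto_labels = [], []
    for c in np.unique(y):                              # init prototypes from class samples
        idx = rng.choice(np.flatnonzero(y == c), prototypes_per_class, replace=False)
        protos.append(X[idx])
        proto_labels.extend([c] * prototypes_per_class)
    P, pl = np.vstack(protos), np.array(proto_labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w = np.argmin(np.linalg.norm(P - X[i], axis=1))   # winning prototype
            sign = 1.0 if pl[w] == y[i] else -1.0
            P[w] += sign * lr * (X[i] - P[w])
    return P, pl

def predict_lvq(P, pl, X):
    """Assign each sample the label of its nearest prototype (Euclidean)."""
    d = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2)
    return pl[np.argmin(d, axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0, 1, (100, 8)), rng.normal(3, 1, (100, 8))])
    y = np.array([0] * 100 + [1] * 100)
    P, pl = train_lvq1(X, y)
    print("training accuracy:", (predict_lvq(P, pl, X) == y).mean())
```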
APA, Harvard, Vancouver, ISO, and other styles
9

Davignon, André. "Block classification scheme using binary vector quantization for image coding." International Journal of Electronics 68, no. 5 (1990): 667–73. http://dx.doi.org/10.1080/00207219008921210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hidayat, Erwin Yudi, and Muhammad Farhan Radiffananda. "Pengenalan Tanda Tangan Menggunakan Learning Vector Quantization dan Ekstraksi Fitur Local Binary Pattern." CogITo Smart Journal 5, no. 2 (2019): 123. http://dx.doi.org/10.31154/cogito.v5i2.180.123-136.

Full text
Abstract:
A signature is a behavioral biometric characteristic used to recognize a person in an identification system. Although unique, signatures are often misused through forgery, and it is not easy to distinguish a forged signature from a genuine one. This research applies the Learning Vector Quantization algorithm, Sobel edge detection, and Local Binary Pattern feature extraction to identify signatures. The results show that the number of images, the number of iterations, and the learning rate affect the accuracy and processing time of identification. From experiments with different parameter settings, the accuracy obtained was 68% on the training data and 54.6% on the test data. Keywords—identification, Learning Vector Quantization, signature, pattern recognition
APA, Harvard, Vancouver, ISO, and other styles
11

Lebrun, Gilles, Christophe Charrier, Olivier Lezoray, and Hubert Cardot. "Tabu Search Model Selection for SVM." International Journal of Neural Systems 18, no. 01 (2008): 19–31. http://dx.doi.org/10.1142/s0129065708001348.

Full text
Abstract:
A model selection method based on tabu search is proposed to build support vector machines (binary decision functions) of reduced complexity and efficient generalization. The aim is to build a fast and efficient support vector machine classifier. A criterion is defined to evaluate decision function quality, blending the recognition rate and the complexity of the binary decision function. The simplification level applied through vector quantization, the feature subset, and the support vector machine hyperparameters are selected by the tabu search method to optimize the defined decision function quality criterion and find a good sub-optimal model in tractable time.
APA, Harvard, Vancouver, ISO, and other styles
12

Esther Ratna, T., and N. Subash Chandra. "Binary Plane Technique Based Color Quantization for Content Based Image Retrieval." International Journal of Engineering & Technology 7, no. 3.1 (2018): 124. http://dx.doi.org/10.14419/ijet.v7i3.1.16814.

Full text
Abstract:
Extracting accurate and informative files from a high volume of graphic files is a challenging task. This paper presents a new color indexing approach using histogram features. Two histogram features, the maximum color histogram and the minimum color histogram, are computed and vector quantized to constitute a feature vector. A bit plane technique is used to map these features based on their values at the respective positions. The ultimate goal of any retrieval method is to attain higher precision within a short span of time, which can be achieved if the data are compressed; to accomplish this, the image is compressed using the binary plane technique. The result analysis shows the performance of the proposed approach under lossy and lossless modes and finds that, when operated in lossy mode, it attains an effective precision rate in the expected amount of time.
APA, Harvard, Vancouver, ISO, and other styles
13

Mehes, A., and K. Zeger. "Binary lattice vector quantization with linear block codes and affine index assignments." IEEE Transactions on Information Theory 44, no. 1 (1998): 79–94. http://dx.doi.org/10.1109/18.650990.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Sultan, M., D. A. Wigle, C. A. Cumbaa, et al. "Binary tree-structured vector quantization approach to clustering and visualizing microarray data." Bioinformatics 18, Suppl 1 (2002): S111—S119. http://dx.doi.org/10.1093/bioinformatics/18.suppl_1.s111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Siantar, Nickolas Cornelius, Janson Hendryli, and Dyah Erny Herwindiati. "CONTENT-BASED IMAGE RETRIEVAL UNTUK PENCARIAN PRODUK PONSEL." Computatio : Journal of Computer Science and Information Systems 3, no. 1 (2019): 31. http://dx.doi.org/10.24912/computatio.v3i1.4271.

Full text
Abstract:
Nowadays, phones (smartphones) and online shops have become inseparable from everyday life. So many types of smartphones appear on the market every year that people are confused about which one they are looking at in online stores. Smartphone recognition is performed using the Histogram of Oriented Gradients to capture the shape of the phone, Color Quantization to capture its color, and the Local Binary Pattern to capture its texture. The output of the feature extractor is a feature vector that is passed to Learning Vector Quantization, which performs recognition by finding the smallest Euclidean distance between the feature vector and the trained weight vectors. The result of this work is an application that can recognize 16 phone types from images with an accuracy of 9.6%.
APA, Harvard, Vancouver, ISO, and other styles
16

Fanggidae, Adriana, Dony M. Sihotang, and Adnan Putra Rihi Pati. "PENGENALAN POLA SIDIK JARI DENGAN METODE LOCAL BINARY PATTERN DAN LEARNING VECTOR QUANTIZATION." Jurnal Komputer dan Informatika 7, no. 2 (2019): 148–56. http://dx.doi.org/10.35508/jicon.v7i2.1635.

Full text
Abstract:
A fingerprint is a genetic structure in the form of a highly detailed pattern and a mark inherent to every person. Many biometric systems use fingerprints as input data, because each individual's fingerprint is different even for identical twins and does not change unless damaged by injury. The methods used in this research are segmentation with the Otsu thresholding algorithm, feature extraction with the Local Binary Pattern (LBP) algorithm, and learning with the Learning Vector Quantization (LVQ) algorithm. The data are thumb fingerprint images of 200 x 300 pixels, grayscale, in *.jpg format. The fingerprint images come from 25 people, each with 6 training images and 2 test images. Training and test data were evaluated on four systems with LBP feature counts of 8, 64, 128, and 256, each using two data sets, where data set 1 contains 15 people and data set 2 contains 25 people. The results for the four systems show that the system with 128 LBP features is the best, combining high accuracy with fast training time.
APA, Harvard, Vancouver, ISO, and other styles
17

Yeh, Cheng-Yu, and Hung-Hsun Huang. "An Upgraded Version of the Binary Search Space-Structured VQ Search Algorithm for AMR-WB Codec." Symmetry 11, no. 2 (2019): 283. http://dx.doi.org/10.3390/sym11020283.

Full text
Abstract:
Adaptive multi-rate wideband (AMR-WB) speech codecs have been widely used for high speech quality in modern mobile communication systems, e.g., handheld mobile devices. Nevertheless, a major handicap is that a remarkable computational load is required in the vector quantization (VQ) of immittance spectral frequency (ISF) coefficients of an AMR-WB coding. In view of this, a two-stage search algorithm is presented in this paper as an efficient way to reduce the computational complexity of ISF quantization in AMR-WB coding. At stage 1, an input vector is assigned to a search subspace in an efficient manner using the binary search space-structured VQ (BSS-VQ) algorithm, and a codebook search is performed over the subspace at stage 2 using the iterative triangular inequality elimination (ITIE) approach. Through the use of the codeword rejection mechanisms equipped in both stages, the computational load can be remarkably reduced. As compared with the original version of the BSS-VQ algorithm, the upgraded version provides a computational load reduction of up to 51%. Furthermore, this work is expected to satisfy the energy saving requirement when implemented on an AMR-WB codec of mobile devices.
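The codeword-rejection mechanism mentioned above rests on the triangle inequality. The sketch below shows a generic triangle-inequality-elimination nearest-codeword search in which precomputed codeword-to-reference distances let many full distance computations be skipped; it is only a schematic stand-in for the BSS-VQ/ITIE procedure of the paper, with a single arbitrary reference codeword.

```python
import numpy as np

def tie_nearest_codeword(x, codebook, ref_index=0):
    """Nearest-codeword search that uses the triangle inequality to reject
    codewords without computing their full distance to x:
        d(x, c_j) >= |d(x, c_ref) - d(c_j, c_ref)|
    so if the right-hand side already exceeds the current best distance,
    codeword c_j cannot win and is skipped."""
    c_ref = codebook[ref_index]
    ref_dists = np.linalg.norm(codebook - c_ref, axis=1)   # precomputable offline
    d_x_ref = np.linalg.norm(x - c_ref)
    best_j, best_d, skipped = ref_index, d_x_ref, 0
    for j, c in enumerate(codebook):
        if j == ref_index:
            continue
        if abs(d_x_ref - ref_dists[j]) >= best_d:           # rejection test
            skipped += 1
            continue
        d = np.linalg.norm(x - c)
        if d < best_d:
            best_j, best_d = j, d
    return best_j, best_d, skipped

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    codebook = rng.normal(size=(256, 16))                   # toy ISF-like codebook
    x = rng.normal(size=16)
    j, d, skipped = tie_nearest_codeword(x, codebook)
    print(f"best codeword {j}, distance {d:.3f}, {skipped} codewords rejected")
```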
APA, Harvard, Vancouver, ISO, and other styles
18

Rahayu, Ni Made Yeni Dwi, Made Windu Antara Kesiman, and I. Gede Aris Gunadi. "Identifikasi Jenis Kayu Berdasarkan Fitur Tekstur Local Binary Pattern Menggunakan Metode Learning Vector Quantization." Jurnal Nasional Pendidikan Teknik Informatika (JANAPATI) 10, no. 3 (2021): 157. http://dx.doi.org/10.23887/janapati.v10i3.40804.

Full text
Abstract:
In general, wood species are still recognized using sight and smell. This affects the buying and selling process, because the time needed to recognize the wood becomes longer, making the business process less effective. This research aims to build a machine learning model for identifying wood species based on image texture features of the wood. The Local Binary Pattern (LBP) method is used for feature extraction to produce feature vectors, which are used as input to image classification with the Learning Vector Quantization (LVQ) method. The parameters used in the LBP method are the number of points and the radius, with values from 1 to 10. The highest accuracy obtained with this method is 68.33%, at 2 points and radius 1. The rather low test results may be influenced by several factors, namely the number of training images and the fact that some wood images have nearly identical patterns.
APA, Harvard, Vancouver, ISO, and other styles
19

Dimitrov, Vassil, Richard Ford, Laurent Imbert, Arjuna Madanayake, Nilan Udayanga, and Will Wray. "Multiple-base Logarithmic Quantization and Application in Reduced Precision AI Computations." Digital Presentation and Preservation of Cultural and Scientific Heritage 14 (September 5, 2024): 63–70. http://dx.doi.org/10.55630/dipp.2024.14.5.

Full text
Abstract:
The power of logarithmic quantizations and computations has been recognized as a useful tool in optimizing the performance of large ML models. There are plenty of applications of ML techniques in digital preservation. The accuracy of computations may play a crucial role in the corresponding algorithms. In this article, we provide results that demonstrate significantly better quantization signal-to-noise ratio performance thanks to multiple-base logarithmic number systems (MDLNS) in comparison with the floating point quantizations that use the same number of bits. On a hardware level, we present details about our Xilinx VCU-128 FPGA design for dot product and matrix vector computations. The MDLNS matrix-vector design significantly outperforms equivalent fixed-point binary designs in terms of area (A) and time (T) complexity and power consumption as evidenced by a 4× scaling of the AT² metric for VLSI performance, and 57% increase in computational throughput per watt compared to fixed-point arithmetic.
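As background for the comparison made in this abstract, the sketch below contrasts a simple single-base (power-of-two) logarithmic quantizer with a uniform fixed-point quantizer at the same bit width and reports the quantization SNR. It is a hypothetical illustration of logarithmic quantization in general, not of the multiple-base MDLNS scheme or the FPGA design described in the paper.

```python
import numpy as np

def log_quantize(x, bits, base=2.0):
    """Quantize |x| to the nearest integer power of `base` (sign kept),
    using a signed exponent grid of 2**bits levels."""
    exp_levels = 2 ** (bits - 1)
    e = np.clip(np.round(np.log(np.abs(x) + 1e-30) / np.log(base)),
                -exp_levels, exp_levels - 1)
    return np.sign(x) * base ** e

def uniform_quantize(x, bits):
    """Uniform fixed-point quantizer over the data range, 2**bits levels."""
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (2 ** bits - 1)
    return lo + np.round((x - lo) / step) * step

def qsnr_db(x, xq):
    """Quantization signal-to-noise ratio in decibels."""
    return 10 * np.log10(np.sum(x ** 2) / np.sum((x - xq) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # ML model weights are often roughly Gaussian and span several decades.
    w = rng.normal(scale=0.1, size=100_000)
    for bits in (4, 6, 8):
        print(bits, "bits:",
              f"log QSNR {qsnr_db(w, log_quantize(w, bits)):6.2f} dB,",
              f"uniform QSNR {qsnr_db(w, uniform_quantize(w, bits)):6.2f} dB")
```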
APA, Harvard, Vancouver, ISO, and other styles
20

Singh Susaiyah, Allmin Pradhap, Suhail Parvaze Pathan, and Ramakrishnan Swaminathan. "Classification of indirect immunofluorescence images using thresholded local binary count features." Current Directions in Biomedical Engineering 2, no. 1 (2016): 479–82. http://dx.doi.org/10.1515/cdbme-2016-0106.

Full text
Abstract:
Computer-aided classification of HEp-2 cell based indirect immunofluorescence (IIF) images is a recommended procedure for standardising autoimmune disease diagnostics. In this work a novel feature, the thresholded local binary count (TLBC), is proposed to classify IIF images into one of six classes. The TLBC is rotation invariant and insensitive to pixel quantization noise. It characterizes the local binary gray-scale pixel information in an image. The proposed feature, along with global features such as area, entropy, illumination level, and mean intensity, gave an accuracy of 86% when classified using a support vector machine. This feature could help in improving the diagnostics of autoimmune diseases, which is highly clinically significant.
APA, Harvard, Vancouver, ISO, and other styles
21

Ying, Qian, and Ye Qingqing. "Deep Supervised Hashing for Fast Multi-Label Image." MATEC Web of Conferences 173 (2018): 03032. http://dx.doi.org/10.1051/matecconf/201817303032.

Full text
Abstract:
Most existing hashing methods map hand-extracted features to binary codes and design the loss function using image labels. However, hand-crafted features and a loss that does not adequately account for the whole network reduce retrieval accuracy. Supervised hashing methods improve the similarity between samples and hash codes by using training data and image labels. In this paper, we propose a novel deep hashing method whose training loss combines a pairwise-label term produced from the Hamming distance between the binary label vectors of images, a quantization error term, and a term penalizing the deviation of the hash code from the balanced value. The experimental results show that the proposed method is more accurate than most current retrieval methods.
APA, Harvard, Vancouver, ISO, and other styles
22

Hussain, Abid, Heng-Chao Li, Muqadar Ali, Samad Wali, Mehboob Hussain, and Amir Rehman. "An Efficient Supervised Deep Hashing Method for Image Retrieval." Entropy 24, no. 10 (2022): 1425. http://dx.doi.org/10.3390/e24101425.

Full text
Abstract:
In recent years, searching and retrieving relevant images from large databases has become an emerging challenge for researchers. Hashing methods that map raw data into a short binary code have attracted increasing attention. Most existing hashing approaches map samples to a binary vector via a single linear projection, which restricts the flexibility of those methods and leads to optimization problems. We introduce a CNN-based hashing method that uses multiple nonlinear projections to produce additional short-bit binary codes to tackle this issue. Further, an end-to-end hashing system is accomplished using a convolutional neural network. To illustrate the proposed technique's effectiveness and significance, we also design a loss function that aims to maintain the similarity between images and minimize the quantization error by providing a uniform distribution of the hash bits. Extensive experiments conducted on various datasets demonstrate the superiority of the proposed method in comparison with state-of-the-art deep hashing methods.
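As a toy illustration of the kind of objective these deep hashing papers describe, the sketch below combines a pairwise-similarity term on continuous hash outputs with a quantization-error term that pushes them toward ±1 and a bit-balance term. The weights, margin, and function names are assumptions for illustration; this is not the exact loss of either cited paper.

```python
import numpy as np

def hashing_loss(h, sim, lam_q=0.1, lam_b=0.1):
    """Toy deep-hashing objective on a batch of continuous codes h (N x K):
      - pairwise term: similar pairs (sim=1) should have close codes,
        dissimilar pairs (sim=0) are pushed apart up to a margin;
      - quantization term: codes should lie near the binary values +/-1;
      - balance term: each bit should be used equally often (mean near 0)."""
    n, k = h.shape
    d2 = np.sum((h[:, None, :] - h[None, :, :]) ** 2, axis=2)      # pairwise squared dists
    margin = 2.0 * k
    pairwise = np.where(sim == 1, d2, np.maximum(0.0, margin - d2))
    pairwise = np.triu(pairwise, 1).sum() / (n * (n - 1) / 2)
    quantization = np.mean((np.abs(h) - 1.0) ** 2)
    balance = np.mean(h.mean(axis=0) ** 2)
    return pairwise + lam_q * quantization + lam_b * balance

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    labels = rng.integers(0, 3, size=8)
    sim = (labels[:, None] == labels[None, :]).astype(int)          # pairwise label matrix
    h = np.tanh(rng.normal(size=(8, 16)))                           # stand-in network outputs
    print("toy hashing loss:", hashing_loss(h, sim))
```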
APA, Harvard, Vancouver, ISO, and other styles
23

Windridge, David, Riccardo Mengoni, and Rajagopal Nagarajan. "Quantum error-correcting output codes." International Journal of Quantum Information 16, no. 08 (2018): 1840003. http://dx.doi.org/10.1142/s0219749918400038.

Full text
Abstract:
Quantum machine learning is the aspect of quantum computing concerned with the design of algorithms capable of generalized learning from labeled training data by effectively exploiting quantum effects. Error-correcting output codes (ECOC) are a standard setting in machine learning for efficiently rendering the collective outputs of a binary classifier, such as the support vector machine, as a multi-class decision procedure. Appropriate choice of error-correcting codes further enables incorrect individual classification decisions to be effectively corrected in the composite output. In this paper, we propose an appropriate quantization of the ECOC process, based on the quantum support vector machine. We will show that, in addition to the usual benefits of quantizing machine learning, this technique leads to an exponential reduction in the number of logic gates required for effective correction of classification error.
APA, Harvard, Vancouver, ISO, and other styles
24

Kette, E. K. D., D. R. Sina, and B. S. Djahi. "Digital image processing: Offline handwritten signature identification using local binary pattern and rotational invariance local binary pattern with learning vector quantization." Journal of Physics: Conference Series 2017, no. 1 (2021): 012011. http://dx.doi.org/10.1088/1742-6596/2017/1/012011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Csóka, Tibor, Jaroslav Polec, Filip Csóka, and Kvetoslava Kotuliaková. "VQ-based model for binary error process." Journal of Electrical Engineering 68, no. 3 (2017): 167–79. http://dx.doi.org/10.1515/jee-2017-0025.

Full text
Abstract:
A variety of complex techniques, such as forward error correction (FEC), automatic repeat request (ARQ), hybrid ARQ or cross-layer optimization, require in their design and optimization phase a realistic model of the binary error process present in a specific digital channel. Past and more recent modeling approaches focus on capturing one or more stochastic characteristics with precision sufficient for the desired model application, thereby applying concepts and methods that severely limit the model's applicability (e.g. in the form of prerequisite expectations about the modeled process). The proposed novel concept, utilizing a Vector Quantization (VQ)-based approach to binary process modeling, offers a viable alternative capable of superior modeling of the most commonly observed small- and large-scale stochastic characteristics of a binary error process on a digital channel. Precision of the proposed model was verified using multiple statistical distances against data captured in a wireless sensor network logical channel trace. Furthermore, Pearson's goodness-of-fit test was performed on the output of all model variants to conclusively demonstrate the usability of the model for a realistic captured binary error process. Finally, the presented results prove the proposed model's applicability and its ability to far surpass the capabilities of the reference Elliot model.
APA, Harvard, Vancouver, ISO, and other styles
26

Zhang, Chunyu, Hui Ding, Yuanyuan Shang, Zhuhong Shao, and Xiaoyan Fu. "Gender Classification Based on Multiscale Facial Fusion Feature." Mathematical Problems in Engineering 2018 (November 4, 2018): 1–6. http://dx.doi.org/10.1155/2018/1924151.

Full text
Abstract:
For gender classification, we present a new approach based on Multiscale facial fusion feature (MS3F) to classify gender from face images. Fusion feature is extracted by the combination of Local Binary Pattern (LBP) and Local Phase Quantization (LPQ) descriptors, and a multiscale feature is generated through Multiblock (MB) and Multilevel (ML) methods. Support Vector Machine (SVM) is employed as the classifier to conduct gender classification. All the experiments are performed based on the Images of Groups (IoG) dataset. The results demonstrate that the application of Multiscale fusion feature greatly improves the performance of gender classification, and our approach outperforms the state-of-the-art techniques.
APA, Harvard, Vancouver, ISO, and other styles
27

Qian, Shen‐en. "Fast three‐dimensional data compression of hyperspectral imagery using vector quantization with spectral‐feature‐based binary coding." Optical Engineering 35, no. 11 (1996): 3242. http://dx.doi.org/10.1117/1.601062.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Gupta, Amit. "Urban Land Chang Detection on Remote Sensing Images Based on Local Similarity Siamese Network." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 10, no. 2 (2019): 968–74. http://dx.doi.org/10.17762/turcomat.v10i2.13577.

Full text
Abstract:
Lloyd formulated the well-known signal quantization problem. We define a different but related problem: the optimal translation of digital fine-grayscale images (for example, medical images with 9–13 bits per pixel) to a coarser scale (for instance, 8 bits per pixel on standard computer displays). Whereas the latter pertains to a mostly digital domain, the former problem is specified primarily in the actual signal domain with smoothly distributed noise. As we demonstrate in this study, this discrepancy makes conventional quantization methods essentially inapplicable to the non-typical scenario of quantizing previously digitised pictures. Through experimentation, we discovered that Lloyd's technique is greatly outperformed by a dynamic-programming-based solution. The maintenance of any picture database requires two fundamental elements: data representation and content description. In this study, a wavelet-based system called the Waveguide is proposed, which unifies these two elements into a single framework. This system also provides a unique way of rating the differences between two satellite photos obtained at different times, for unsupervised change analysis. The Change Vector Analysis (CVA) technique is employed in the change analysis system, which is founded on the polar CVA representation. In the proposed method of change analysis, the Hamming distance, computed over binary descriptors, is used as the similarity metric.
APA, Harvard, Vancouver, ISO, and other styles
29

Ting, Ming Yuan, and E. A. Riskin. "Error-diffused image compression using a binary-to-gray-scale decoder and predictive pruned tree-structured vector quantization." IEEE Transactions on Image Processing 3, no. 6 (1994): 854–58. http://dx.doi.org/10.1109/83.336256.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Berg, C. J., C. Mallary, John R. Buck, Amit Tandon, and Alan Andonian. "Acoustic rainfall estimation with support vector machines and error correcting output codes." Journal of the Acoustical Society of America 152, no. 4 (2022): A211. http://dx.doi.org/10.1121/10.0016041.

Full text
Abstract:
Ma and Nystuen (2005) successfully detected and estimated rainfall at sea from passive acoustics. They detected rain from three narrowband frequencies and then estimated log rainfall rate via a regression with energy in the 5 kHz band. Mallary et al. (2022) improved rainfall detection by exploiting broadband spectra while reducing the dimensionality through principal component analysis (PCA). This project builds upon Mallary’s work moving beyond detection to estimate the rainfall by quantization into discrete ranges based on PCA-reduced acoustic power spectra. The classification scheme combines multiple binary support-vector machine (SVM) classifiers (Boser et al. 1992) with Dietterich and Bakiri’s error-correcting output codes (1995) to classify acoustic PSDs into one of 6 rainfall rate classes. Evaluating the PCA/SVM classifier on 4 months of acoustic recordings and meteorological data collected from a shallow water pier in New Bedford, MA found the hourly accumulations from the rain gauge and acoustic estimates had a correlation of 0.97 ± 0.01. Emulating Ma & Nystuen’s estimator on the same data set yields a correlation of 0.76 ± 0.02. [Work supported by ONR.]
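To illustrate the error-correcting output codes machinery referenced here (Dietterich and Bakiri, 1995), the sketch below shows generic ECOC decoding: each column of a binary code matrix defines one binary classifier, and a sample is assigned to the class whose codeword is closest in Hamming distance to the vector of classifier outputs. The code matrix and the single-error example are hypothetical placeholders, not the six-class rainfall classifier of this abstract.

```python
import numpy as np

# Hypothetical 6-class code matrix (rows = classes, columns = binary dichotomies).
CODE_MATRIX = np.array([
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 0, 1],
    [1, 1, 1, 0, 1, 1, 0],
    [0, 0, 1, 1, 0, 0, 0],
])

def ecoc_decode(binary_outputs, code_matrix=CODE_MATRIX):
    """Pick the class whose codeword has minimum Hamming distance to the
    vector of binary classifier outputs; disagreements on a few columns
    are corrected as long as the codewords are far enough apart."""
    hamming = np.count_nonzero(code_matrix != binary_outputs, axis=1)
    return int(np.argmin(hamming))

if __name__ == "__main__":
    true_class = 2
    outputs = CODE_MATRIX[true_class].copy()
    outputs[4] ^= 1                                   # one binary classifier errs
    print("decoded class:", ecoc_decode(outputs))     # still recovers class 2
```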
APA, Harvard, Vancouver, ISO, and other styles
31

Khoshboresh Masouleh, M., and M. R. Saradjian. "ROBUST BUILDING FOOTPRINT EXTRACTION FROM BIG MULTI-SENSOR DATA USING DEEP COMPETITION NETWORK." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (October 18, 2019): 615–21. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-615-2019.

Full text
Abstract:
Abstract. Building footprint extraction (BFE) from multi-sensor data such as optical images and light detection and ranging (LiDAR) point clouds is widely used in various fields of remote sensing applications. However, it is still a challenging research topic due to the relative inefficiency of building extraction techniques across the variety of complex scenes in multi-sensor data. In this study, we develop and evaluate a deep competition network (DCN) that fuses very high spatial resolution optical remote sensing images with LiDAR data for robust BFE. DCN is a deep superpixelwise convolutional encoder-decoder architecture using encoder vector quantization with a classified structure. DCN consists of five encoding-decoding blocks with convolutional weights for robust binary representation (superpixel) learning. DCN is trained and tested on a big multi-sensor dataset obtained from the state of Indiana in the United States with multiple building scenes. Comparison results of the accuracy assessment showed that DCN has competitive BFE performance in comparison with other deep semantic binary segmentation architectures. Therefore, we conclude that the proposed model is a suitable solution for robust BFE from big multi-sensor data.
APA, Harvard, Vancouver, ISO, and other styles
32

Nguyen, Thanh, Abbas Khosravi, Douglas Creighton, and Saeid Nahavandi. "Multi-Output Interval Type-2 Fuzzy Logic System for Protein Secondary Structure Prediction." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 23, no. 05 (2015): 735–60. http://dx.doi.org/10.1142/s0218488515500324.

Full text
Abstract:
A new multi-output interval type-2 fuzzy logic system (MOIT2FLS) is introduced for protein secondary structure prediction in this paper. Three outputs of the MOIT2FLS correspond to three structure classes including helix, strand (sheet) and coil. Quantitative properties of amino acids are employed to characterize twenty amino acids rather than the widely used computationally expensive binary encoding scheme. Three clustering tasks are performed using the adaptive vector quantization method to construct an equal number of initial rules for each type of secondary structure. Genetic algorithm is applied to optimally adjust parameters of the MOIT2FLS. The genetic fitness function is designed based on the Q3 measure. Experimental results demonstrate the dominance of the proposed approach against the traditional methods that are Chou-Fasman method, Garnier-Osguthorpe-Robson method, and artificial neural network models.
APA, Harvard, Vancouver, ISO, and other styles
33

Amit, Yali, and Donald Geman. "Shape Quantization and Recognition with Randomized Trees." Neural Computation 9, no. 7 (1997): 1545–88. http://dx.doi.org/10.1162/neco.1997.9.7.1545.

Full text
Abstract:
We explore a new approach to shape recognition based on a virtually infinite family of binary features (queries) of the image data, designed to accommodate prior information about shape invariance and regularity. Each query corresponds to a spatial arrangement of several local topographic codes (or tags), which are in themselves too primitive and common to be informative about shape. All the discriminating power derives from relative angles and distances among the tags. The important attributes of the queries are a natural partial ordering corresponding to increasing structure and complexity; semi-invariance, meaning that most shapes of a given class will answer the same way to two queries that are successive in the ordering; and stability, since the queries are not based on distinguished points and substructures. No classifier based on the full feature set can be evaluated, and it is impossible to determine a priori which arrangements are informative. Our approach is to select informative features and build tree classifiers at the same time by inductive learning. In effect, each tree provides an approximation to the full posterior where the features chosen depend on the branch that is traversed. Due to the number and nature of the queries, standard decision tree construction based on a fixed-length feature vector is not feasible. Instead we entertain only a small random sample of queries at each node, constrain their complexity to increase with tree depth, and grow multiple trees. The terminal nodes are labeled by estimates of the corresponding posterior distribution over shape classes. An image is classified by sending it down every tree and aggregating the resulting distributions. The method is applied to classifying handwritten digits and synthetic linear and nonlinear deformations of three hundred symbols. State-of-the-art error rates are achieved on the National Institute of Standards and Technology database of digits. The principal goal of the experiments on these symbols is to analyze invariance, generalization error and related issues, and a comparison with artificial neural networks methods is presented in this context.
APA, Harvard, Vancouver, ISO, and other styles
34

Mattfeldt, Torsten. "Classification of Binary Spatial Textures Using Stochastic Geometry, Nonlinear Deterministic Analysis and Artificial Neural Networks." International Journal of Pattern Recognition and Artificial Intelligence 17, no. 02 (2003): 275–300. http://dx.doi.org/10.1142/s0218001403002332.

Full text
Abstract:
Stereology and stochastic geometry can be used as auxiliary tools for diagnostic purposes in tumour pathology. The role of first-order parameters and stochastic–geometric functions for the classification of the texture of biological tissues has been investigated recently. The volume fraction and surface area per unit volume, the pair correlation function and the centred quadratic contact density function of epithelium were estimated in three case series of benign and malignant lesions of glandular tissues. This approach was further extended by applying the Laslett test, i.e. a point process statistic computed after transformation of the convex tangent points of sectioned random sets from planar images. This method has not yet been applied to histological images so far. Also the nonlinear deterministic approach to tissue texture was applied by estimating the correlation dimension as a function of embedding dimension. We used the stochastic–geometric functions, the first-order parameters and the correlation dimensions for the classification of cases using various algorithms. Learning vector quantization was applied as neural paradigm. Applications included distinction between mastopathy and mammary cancer, between benign prostatic hyperplasia and prostatic cancer, and between chronic pancreatitis and pancreatic cancer. The same data sets were also classified with discriminant analysis and support vector machines. The stereological estimates provided high accuracy in the classification of individual cases. The question: which category of estimator is the most informative, cannot be answered globally, but must be explored empirically for each specific data set. The results obtained by the three algorithms were similar.
APA, Harvard, Vancouver, ISO, and other styles
35

Elawady, Iman, and Caner Ozcan. "Restoration of Images Compressed by Hybrid Compression, based on Discrete Cosine Transform and Vector Quantization, over a Binary Symmetric Channel." Acta Polytechnica Hungarica 21, no. 11 (2024): 213–28. http://dx.doi.org/10.12700/aph.21.11.2024.11.12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Macin, Gulay, Burak Tasci, Irem Tasci, et al. "An Accurate Multiple Sclerosis Detection Model Based on Exemplar Multiple Parameters Local Phase Quantization: ExMPLPQ." Applied Sciences 12, no. 10 (2022): 4920. http://dx.doi.org/10.3390/app12104920.

Full text
Abstract:
Multiple sclerosis (MS) is a chronic demyelinating condition characterized by plaques in the white matter of the central nervous system that can be detected using magnetic resonance imaging (MRI). Many deep learning models for automated MS detection based on MRI have been presented in the literature. We developed a computationally lightweight machine learning model for MS diagnosis using a novel handcrafted feature engineering approach. The study dataset comprised axial and sagittal brain MRI images that were prospectively acquired from 72 MS and 59 healthy subjects who attended the Ozal University Medical Faculty in 2021. The dataset was divided into three study subsets: axial images only (n = 1652), sagittal images only (n = 1775), and combined axial and sagittal images (n = 3427) of both MS and healthy classes. All images were resized to 224 × 224. Subsequently, the features were generated with a fixed-size patch-based (exemplar) feature extraction model based on local phase quantization (LPQ) with three-parameter settings. The resulting exemplar multiple parameters LPQ (ExMPLPQ) features were concatenated to form a large final feature vector. The top discriminative features were selected using iterative neighborhood component analysis (INCA). Finally, a k-nearest neighbor (kNN) algorithm, Fine kNN, was deployed to perform binary classification of the brain images into MS vs. healthy classes. The ExMPLPQ-based model attained 98.37%, 97.75%, and 98.22% binary classification accuracy rates for axial, sagittal, and hybrid datasets, respectively, using Fine kNN with 10-fold cross-validation. Furthermore, our model outperformed 19 established pre-trained deep learning models that were trained and tested with the same data. Unlike deep models, the ExMPLPQ-based model is computationally lightweight yet highly accurate. It has the potential to be implemented as an automated diagnostic tool to screen brain MRIs for white matter lesions in suspected MS patients.
APA, Harvard, Vancouver, ISO, and other styles
37

Andani, Medeline Widia, and Fitri Bimantoro. "Verifikasi Tanda Tangan Menggunakan Ekstraksi Fitur LBP dan Klasifikasi LVQ." Jurnal Teknologi Informasi, Komputer, dan Aplikasinya (JTIKA ) 2, no. 2 (2020): 208–16. http://dx.doi.org/10.29303/jtika.v2i2.107.

Full text
Abstract:
A signature is one of the media used for verification and legalization of information, such as documents that are closely related to legality. In general, signature verification is done manually by direct comparison, which is certainly not effective, especially when many verifications must be performed. Therefore, a computer system that can automatically verify a person's signature is needed to save matching time and reduce errors. This research uses features from the Local Binary Pattern (LBP) method and a Learning Vector Quantization (LVQ) classifier. The materials used in this research are 600 signature images of 500x500 pixels taken from 30 respondents, with 15 genuine and 5 forged signatures taken from each respondent. The results show that the signature identification process achieved an accuracy of 93%, and the verification process achieved an accuracy of 63%, a sensitivity of 89%, and a specificity of 42%.
APA, Harvard, Vancouver, ISO, and other styles
38

Hardika, Khusnuliawati, Fatichah Chastine, and Soelaiman Rully. "Multi-feature Fusion Using SIFT and LEBP for Finger Vein Recognition." TELKOMNIKA Telecommunication, Computing, Electronics and Control 15, no. 1 (2017): 478–85. https://doi.org/10.12928/TELKOMNIKA.v15i1.4443.

Full text
Abstract:
In this paper, multi-feature fusion using the Scale Invariant Feature Transform (SIFT) and the Local Extensive Binary Pattern (LEBP) is proposed to obtain a feature that can resist degradation problems such as scaling, rotation, translation, and varying illumination conditions. The SIFT feature can withstand degradation due to changes in image scale, rotation, and translation, while the LEBP feature is resistant to gray level variations and carries richer, more discriminative local characteristic information. The fusion technique is therefore used to collect the important information from the SIFT and LEBP features. The resulting fused SIFT-LEBP feature is processed by the Learning Vector Quantization (LVQ) method to determine whether the test image can be recognized or not. Under optimal conditions the accuracy reaches 97.50%, with a TPR of 0.9400 and an FPR of 0.0128, a better result than using the SIFT or LEBP feature alone.
APA, Harvard, Vancouver, ISO, and other styles
39

Padmanabhan, S. Anantha, and Krishna Kumar. "An Efficient Video Compression Encoder Based on Wavelet Lifting Scheme in LSK." Journal of Computational and Theoretical Nanoscience 13, no. 10 (2016): 7581–91. http://dx.doi.org/10.1166/jctn.2016.5756.

Full text
Abstract:
This paper presents a video compression system using a wavelet lifting scheme. Video compression algorithms ("codecs") manipulate video signals to dramatically reduce the storage and bandwidth required while maximizing perceived video quality. There are four common methods for compression: discrete cosine transform (DCT), vector quantization (VQ), fractal compression, and discrete wavelet transform (DWT). A gradient-based motion estimation algorithm based on shape-motion prediction is used, which takes advantage of the correlation between neighboring Binary Alpha Blocks (BABs) to match the MPEG-4 shape coding case and speed up the estimation process. A non-redundant wavelet transform is then implemented as iterative filter banks with downsampling operations. LSK operates without lists and is suitable for a fast, simple hardware implementation. Here, an improved variant of the Set Partitioned Embedded bloCK (SPECK) image coder, called Improved Listless SPECK (ILSPECK), is used. ILSPECK codes a single zero for several insignificant subbands, which reduces the length of the output bit string as well as the encoding/decoding time.
APA, Harvard, Vancouver, ISO, and other styles
40

Balidis, Miltos, Ioanna Papadopoulou, Dimitris Malandris, et al. "Using neural networks to predict the outcome of refractive surgery for myopia." 4open 2 (2019): 29. http://dx.doi.org/10.1051/fopen/2019024.

Full text
Abstract:
Introduction: Refractive Surgery (RS) has advanced immensely in the last decades, utilizing methods and techniques that fulfill stringent criteria for safety, efficacy, cost-effectiveness, and predictability of the refractive outcome. Still, a non-negligible percentage of RS cases require corrective retreatment. In addition, surgeons should be able to advise their patients, beforehand, as to the probability that corrective RS will be necessary. The present article addresses these issues with regard to myopia and explores the use of neural networks as a solution to the problem of predicting the RS outcome. Methods: We used a computerized query to select patients who underwent RS with any of the available surgical techniques (PRK, LASEK, Epi-LASIK, LASIK) between January 2010 and July 2017, and we investigated 13 factors which are related to RS. The data were normalized by forcing the weights used in the forward and backward propagations to be binary; each integer was represented by a 12-bit serial code, so that following this preprocessing stage, the vector of the data values of all 13 parameters was encoded in a binary vector of 1 × (13 × 12) = 1 × 156 size. Following the preprocessing stage, eight independent Learning Vector Quantization (LVQ) networks were created in a random way using the MATLAB function lvqnet, each of them responding to one query with 0 (retreat class) or 1 (correct class). The results of the eight LVQs were then averaged to permit a best estimate of the network's performance, while a voting procedure by the neural nets was used to arrive at the outcome. Results: Our algorithm was able to predict in a statistically significant way (as evidenced by Cohen's Kappa test result of 0.7595) the need for retreatment after initial RS with good sensitivity (0.8756) and specificity (0.9286). Conclusion: The results permit us to be optimistic about the future of using neural networks for the prediction of the outcome and, eventually, the planning of RS.
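The preprocessing step described here, where each of the 13 integer-valued factors is encoded as a 12-bit code and the codes are concatenated into a 1 × 156 binary vector, can be sketched in a few lines. Plain unsigned binary is assumed for the 12-bit 'serial code', since the abstract does not specify the encoding.

```python
import numpy as np

N_FACTORS, BITS = 13, 12            # 13 refractive-surgery factors, 12 bits each

def encode_patient(factors, bits=BITS):
    """Encode integer factor values as one concatenated binary vector
    (1 x 13*12 = 1 x 156), mirroring the preprocessing described in the
    abstract. Plain unsigned binary is assumed for the 12-bit code."""
    factors = np.asarray(factors, dtype=np.int64)
    if np.any((factors < 0) | (factors >= 2 ** bits)):
        raise ValueError(f"factor out of range for a {bits}-bit code")
    shifts = np.arange(bits - 1, -1, -1)              # most significant bit first
    return ((factors[:, None] >> shifts) & 1).ravel().astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    patient = rng.integers(0, 2 ** BITS, size=N_FACTORS)   # toy factor values
    x = encode_patient(patient)
    print(x.shape)                                          # (156,)
```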
APA, Harvard, Vancouver, ISO, and other styles
41

Sarpeshkar, Rahul, and Micah O'Halloran. "Scalable Hybrid Computation with Spikes." Neural Computation 14, no. 9 (2002): 2003–38. http://dx.doi.org/10.1162/089976602320263971.

Full text
Abstract:
We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer to a computation. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. And, third, a state machine enables complex computations to be created using a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes because spike-count codes are digital, while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation. First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. And, third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends and expands the concept of a digital finite-state-machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony. We outline how HSMs may be used to perform learning, vector quantization, spike pattern recognition and generation, and how they may be reconfigured.
APA, Harvard, Vancouver, ISO, and other styles
42

Telli, Hichem, Salim Sbaa, Salah Eddine Bekhouche, Fadi Dornaika, Abdelmalik Taleb-Ahmed, and Miguel Bordallo López. "A Novel Multi-Level Pyramid Co-Variance Operators for Estimation of Personality Traits and Job Screening Scores." Traitement du Signal 38, no. 3 (2021): 539–46. http://dx.doi.org/10.18280/ts.380301.

Full text
Abstract:
Recently, automatic personality analysis is becoming an interesting topic for computer vision. Many attempts have been proposed to solve this problem using time-based sequence information. In this paper, we present a new framework for estimating the Big-Five personality traits and job candidate screening variable from video sequences. The framework consists of two parts: (1) the use of Pyramid Multi-level (PML) to extract raw facial textures at different scales and levels; (2) the extension of the Covariance Descriptor (COV) to fuse different local texture features of the face image such as Local Binary Patterns (LBP), Local Directional Pattern (LDP), Binarized Statistical Image Features (BSIF), and Local Phase Quantization (LPQ). Therefore, the COV descriptor uses the textures of PML face parts to generate rich low-level face features that are encoded using concatenation of all PML blocks in a feature vector. Finally, the entire video sequence is represented by aggregating these frame vectors and extracting the most relevant features. The exploratory results on the ChaLearn LAP APA2016 dataset compare well with state-of-the-art methods including deep learning-based methods.
APA, Harvard, Vancouver, ISO, and other styles
43

Ren, Hua, Shaozhang Niu, Haiju Fan, Ming Li, and Zhen Yue. "Secure Image Authentication Scheme Using Double Random-Phase Encoding and Compressive Sensing." Security and Communication Networks 2021 (September 27, 2021): 1–20. http://dx.doi.org/10.1155/2021/6978772.

Full text
Abstract:
Double random-phase encoding- (DRPE-) based compressive sensing (CS) systems support image authentication for noisy images. When extending such systems to resource-constrained applications, how to ensure the authentication strength for noisy images becomes challenging. To tackle the issue, an efficient and secure image authentication scheme is presented. The phase information of the plain image is generated using DRPE and quantized into a binary image as the authentication information. Meanwhile, a sparser error matrix generated by the same plain image and vector quantization (VQ) image works as the input of CS. The authentication information and VQ indexes are self-hidden into the quantized measurements to construct the combined image. Then, it is permutated and diffused with the chaotic sequences generated from a modified Henon map. After decryption at the receiver side, the verifier can implement the blind authentication between the noisy decoded image and the reconstructed image. Supported by the detailed numerical simulations and theoretical analyses, the DRPE-CSVQ exhibits more powerful compression and authentication capability than its counterpart.
APA, Harvard, Vancouver, ISO, and other styles
44

Nakagawa, Shota, Naoaki Ono, Yukichika Hakamata, et al. "Quantitative evaluation model of variable diagnosis for chest X-ray images using deep learning." PLOS Digital Health 3, no. 3 (2024): e0000460. http://dx.doi.org/10.1371/journal.pdig.0000460.

Full text
Abstract:
The purpose of this study is to demonstrate the use of a deep learning model in quantitatively evaluating clinical findings typically subject to uncertain evaluations by physicians, using binary test results based on routine protocols. A chest X-ray is the most commonly used diagnostic tool for the detection of a wide range of diseases and is generally performed as a part of regular medical checkups. However, when it comes to findings that can be classified as within the normal range but are not considered disease-related, the thresholds of physicians’ findings can vary to some extent, therefore it is necessary to define a new evaluation method and quantify it. The implementation of such methods is difficult and expensive in terms of time and labor. In this study, a total of 83,005 chest X-ray images were used to diagnose the common findings of pleural thickening and scoliosis. A novel method for quantitatively evaluating the probability that a physician would judge the images to have these findings was established. The proposed method successfully quantified the variation in physicians’ findings using a deep learning model trained only on binary annotation data. It was also demonstrated that the developed method could be applied to both transfer learning using convolutional neural networks for general image analysis and a newly learned deep learning model based on vector quantization variational autoencoders with high correlations ranging from 0.89 to 0.97.
APA, Harvard, Vancouver, ISO, and other styles
45

Shelke, Mrs Vishakha, Mr Vinay Manish Shah, Mr Harsh Ratnani, and Mr Rahul Despande. "Diabetic Retinopathy Detection Using SVM." International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (2022): 868–75. http://dx.doi.org/10.22214/ijraset.2022.41275.

Full text
Abstract:
Abstract: Technology is advancing rapidly in almost every field. This work addresses the detection of Diabetic Retinopathy (DR). Diabetes occurs when the pancreas fails to secrete sufficient insulin, and it gradually affects the retina of the human eye. As it advances, the patient's vision begins to deteriorate, leading to diabetic retinopathy. In this regard, retinal images acquired through a fundus camera help in investigating the consequences, nature, and status of the effect of diabetes on the eye. The study examines Age-related Macular Degeneration (AMD) through Local Binary Patterns (LBP) and further experimentation using the Gray-Level Co-Occurrence Matrix (GLCM). For this purpose, the performance of the GLCM as a texture descriptor for retinal images is investigated and compared with other descriptors such as GLCM filtering (GLCMF) and local phase quantization (LPQ). This leads to the extraction of blood vessel features such as energy, contrast, correlation, and homogeneity values. An SVM classifier is used to distinguish true and false vessels. The diagnosis of diabetic retinopathy relies on clinical eye examination and eye fundus imaging. Keywords: Machine Learning, Support Vector Machine, MATLAB, Histogram Equalization
APA, Harvard, Vancouver, ISO, and other styles
46

Alharbi, Abir. "A Genetic-LVQ neural networks approach for handwritten Arabic character recognition." Artificial Intelligence Research 7, no. 2 (2018): 43. http://dx.doi.org/10.5430/air.v7n2p43.

Full text
Abstract:
Handwriting recognition systems are a dynamic field of research in artificial intelligence. Many smart devices on the market, such as pen-based computers, tablets, and mobile phones with handwriting recognition technology, need to rely on efficient handwriting recognition systems. In this paper we present a novel Arabic handwritten character recognition system based on a hybrid method consisting of a genetic algorithm and a learning vector quantization (LVQ) neural network. Sixty different handwritten Arabic character datasets are used for training the neural network. Each character dataset contains 28 letters written twice with 15 distinctly shaped alphabets, and each handwritten Arabic letter is represented by a binary matrix that is used as an input to a genetic algorithm for feature selection and dimension reduction, to include only the most effective features to be fed to the LVQ classifier. The recognition process in the system involves several essential steps, such as handwritten letter acquisition, dataset preparation, feature selection, training, and recognition. Comparing our results to those obtained with the whole feature set without selection, and to the results of other classification algorithms, confirms the effectiveness of our proposed handwriting recognition system with an accuracy of 95.4%, showing promising potential for improving future handwritten Arabic recognition devices on the market.
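The LVQ1 update rule at the core of such a classifier can be sketched as follows. The learning rate, epoch count, and function name are assumptions; the genetic-algorithm feature-selection stage described in the abstract is not included, and the inputs are assumed to be the binary letter matrices flattened to vectors.

import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    # LVQ1: pull the winning prototype toward a same-class sample,
    # push it away from a sample of a different class.
    W = np.array(prototypes, dtype=float)
    for _ in range(epochs):
        for x, label in zip(X, y):
            k = np.argmin(((W - x) ** 2).sum(axis=1))  # nearest prototype
            sign = 1.0 if proto_labels[k] == label else -1.0
            W[k] += sign * lr * (x - W[k])
    return W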
APA, Harvard, Vancouver, ISO, and other styles
47

Soleymanlou, Nima, Igor Jurisica, Ori Nevo, et al. "Molecular Evidence of Placental Hypoxia in Preeclampsia." Journal of Clinical Endocrinology & Metabolism 90, no. 7 (2005): 4299–308. http://dx.doi.org/10.1210/jc.2005-0078.

Full text
Abstract:
Abstract Background: Oxygen plays a central role in human placental pathologies including preeclampsia, a leading cause of fetal and maternal death and morbidity. Insufficient uteroplacental oxygenation in preeclampsia is believed to be responsible for the molecular events leading to the clinical manifestations of this disease. Design: Using high-throughput functional genomics, we determined the global gene expression profiles of placentae from high altitude pregnancies, a natural in vivo model of chronic hypoxia, as well as that of first-trimester explants under 3 and 20% oxygen, an in vitro organ culture model. We next compared the genomic profile from these two models with that obtained from pregnancies complicated by preeclampsia. Microarray data were analyzed using the binary tree-structured vector quantization algorithm, which generates global gene expression maps. Results: Our results highlight a striking global gene expression similarity between 3% O2-treated explants, high-altitude placentae, and importantly placentae from preeclamptic pregnancies. We demonstrate herein the utility of explant culture and high-altitude placenta as biologically relevant and powerful models for studying the oxygen-mediated events in preeclampsia. Conclusion: Our results provide molecular evidence that aberrant global placental gene expression changes in preeclampsia may be due to reduced oxygenation and that these events can successfully be mimicked by in vivo and in vitro models of placental hypoxia.
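The binary tree-structured vector quantization (BTSVQ) step used to build the expression maps can be illustrated by recursively splitting sample vectors with a two-codeword partition. The depth and minimum-node-size parameters below are assumptions, and the authors' exact stopping criteria and validation are not reproduced.

import numpy as np

def btsvq(X, max_depth=3, min_size=10, seed=0):
    # Recursive binary split of the row vectors in X via a 2-means style partition.
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)

    def two_means(Z, iters=20):
        c = Z[rng.choice(len(Z), 2, replace=False)]   # two initial codewords
        for _ in range(iters):
            d = ((Z[:, None, :] - c[None]) ** 2).sum(-1)
            assign = d.argmin(1)
            for k in (0, 1):
                if (assign == k).any():
                    c[k] = Z[assign == k].mean(0)     # codeword update
        return assign, c

    def grow(idx, depth):
        if depth >= max_depth or len(idx) < min_size:
            return {"leaf": idx}                      # stop splitting this node
        assign, c = two_means(X[idx])
        return {"codewords": c,
                "left": grow(idx[assign == 0], depth + 1),
                "right": grow(idx[assign == 1], depth + 1)}

    return grow(np.arange(len(X)), 0)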
APA, Harvard, Vancouver, ISO, and other styles
48

Nugroho, Waego Hadi, Samingun Handoyo, and Yusnita Julyarni Akri. "An Influence of Measurement Scale of Predictor Variable on Logistic Regression Modeling and Learning Vector Quntization Modeling for Object Classification." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 1 (2018): 333. http://dx.doi.org/10.11591/ijece.v8i1.pp333-343.

Full text
Abstract:
Much real-world decision making is based on binary categories of information: agree or disagree, accept or reject, succeed or fail, and so on. Information of this kind is the output of a classification method, which is a domain of both statistics (e.g., logistic regression) and machine learning (e.g., learning vector quantization, LVQ). The input to a classification method plays a crucial role in the resulting output. This paper investigates the influence of different measurement scales of the input data (interval, ratio, and nominal) on the performance of logistic regression and LVQ in classifying an object. Logistic regression modeling is carried out in several stages until a model that satisfies the model suitability test is obtained. LVQ modeling is tested with several codebook sizes, and the most optimal LVQ model is selected. The best model from each method is then compared on object classification performance using the hit ratio indicator. Logistic regression modeling yields two models that satisfy the model suitability test, namely the models with interval-scaled and nominal-scaled predictor variables, while LVQ modeling yields three optimal models, each with a different codebook. On data with interval-scaled predictor variables, the performance of the two methods is the same. Both models perform equally poorly when the predictor variables are nominal. On data with ratio-scaled predictor variables, the LVQ method produces moderate performance, while logistic regression modeling yields no model that satisfies the model suitability test. Thus, if the input dataset has interval- or ratio-scaled predictor variables, the LVQ method is preferable for modeling object classification.
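The hit ratio used here is simply the proportion of correctly classified objects. A small sketch follows; the training and test arrays and the LVQ prototypes W and proto_labels are hypothetical placeholders, not the study's actual data.

import numpy as np
from sklearn.linear_model import LogisticRegression

def hit_ratio(y_true, y_pred):
    # Proportion of objects whose predicted class matches the true class.
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

# Hypothetical comparison on a held-out set (X_train, y_train, X_test, y_test):
# logit_hits = hit_ratio(y_test, LogisticRegression(max_iter=1000).fit(X_train, y_train).predict(X_test))
# lvq_pred   = proto_labels[((X_test[:, None] - W[None]) ** 2).sum(-1).argmin(1)]
# lvq_hits   = hit_ratio(y_test, lvq_pred)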
APA, Harvard, Vancouver, ISO, and other styles
50

Hakim, Faruq Abdul, Tio Dharmawan, and Muhamad Arief Hidayat. "Gender classification performance optimization based on facial images using LBG-VQ and MB-LBP." International Journal of Advances in Intelligent Informatics 11, no. 1 (2025): 72. https://doi.org/10.26555/ijain.v11i1.1827.

Full text
Abstract:
In the computer vision and machine learning field, especially for gender classification based on facial images, feature extraction is an inseparable part of the pipeline. Various features can be extracted from images, including texture features. Several prior studies show that the Linde-Buzo-Gray vector quantization (LBG-VQ) and multi-block local binary pattern (MB-LBP) methods can extract texture features from images. LBG-VQ produces less optimal performance in gender classification on the FEI facial image dataset. On the other hand, MB-LBP produces more optimal performance when applied to the FERET facial image dataset. Therefore, this study was conducted to determine the gender classification performance when the LBG-VQ and MB-LBP methods are implemented independently or in combination on the FEI facial image dataset. Three preprocessing stages are involved before extracting image features: noise removal, illumination adjustment, and conversion from RGB to grayscale. The extracted features are then used to train several classification methods, namely Naïve Bayes, SVM, KNN, Random Forest, and Logistic Regression. The k-fold cross-validation method is then used to evaluate the trained models. This study found that MB-LBP tends to show a performance improvement over LBG-VQ. Furthermore, the most optimal classification model, with a performance of 91.928%, was formed by applying Logistic Regression with MB-LBP on LBG-VQ quantized images. In conclusion, this study successfully formed an optimized gender classification model based on the FEI facial image dataset.
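A compact sketch of Linde-Buzo-Gray codebook design by codeword splitting followed by Lloyd refinement is shown below. The block size, codebook size, and perturbation factor are illustrative assumptions rather than the settings used in the study.

import numpy as np

def lbg_codebook(X, size=16, eps=0.01, iters=20):
    # LBG: start from the global centroid, then repeatedly split every
    # codeword (+/- eps perturbation) and refine with Lloyd iterations.
    # `size` is assumed to be a power of two.
    X = np.asarray(X, dtype=float)
    codebook = X.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            d = ((X[:, None, :] - codebook[None]) ** 2).sum(-1)
            assign = d.argmin(1)
            for k in range(len(codebook)):
                if (assign == k).any():
                    codebook[k] = X[assign == k].mean(0)
    return codebook

# Hypothetical usage on 4x4 image blocks flattened to 16-D vectors:
# cb = lbg_codebook(blocks, size=64)
# indices = ((blocks[:, None] - cb[None]) ** 2).sum(-1).argmin(1)  # VQ index map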
APA, Harvard, Vancouver, ISO, and other styles