A selection of scholarly literature on the topic "FEATURE ENCODING"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "FEATURE ENCODING".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, when these details are available in the source's metadata.

Journal articles on the topic "FEATURE ENCODING"

1. Lathroum, Amanda. "Feature encoding by neural nets." Phonology 6, no. 2 (1989): 305–16. http://dx.doi.org/10.1017/s0952675700001044.

Abstract:
While the use of categorical features seems to be the appropriate way to express sound patterns within languages, these features do not seem adequate to describe the sounds actually produced by speakers. Examination of the speech signal fails to reveal objective, discrete phonological segments. Similarly, segments are not directly observable in the flow of articulatory movements, and vary slightly according to an individual speaker's articulatory strategies. Because of the lack of a reliable relationship between segments and speech sounds, a plausible transition from feature representation to the actual acoustic signal has proven elusive. This paper utilises a theory of information processing, known as PARALLEL DISTRIBUTED PROCESSING (PDP) NETWORKS (also called neural networks), to propose a model which begins to express this transition: translating the feature bundles indicated in a broad phonetic transcription into continuous, potentially variable articulator behaviour.
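The transition Lathroum describes, from discrete feature bundles to continuous, variable articulator behaviour, can be illustrated with a minimal connectionist sketch. Everything below is hypothetical: the feature names, layer sizes, and random weights are illustrative stand-ins, not the paper's actual network.

```python
import numpy as np

# Toy PDP-style network: a binary phonological feature bundle is mapped through
# a hidden layer to continuous articulator activations in (0, 1).
rng = np.random.default_rng(0)

N_FEATURES = 4      # e.g. a hypothetical [voiced, nasal, high, back] bundle
N_HIDDEN = 8
N_ARTICULATORS = 3  # e.g. continuous jaw, tongue-body, velum positions

W1 = rng.normal(scale=0.5, size=(N_HIDDEN, N_FEATURES))
W2 = rng.normal(scale=0.5, size=(N_ARTICULATORS, N_HIDDEN))

def articulate(feature_bundle):
    """Map a binary feature vector to continuous articulator values."""
    h = np.tanh(W1 @ feature_bundle)          # distributed hidden representation
    return 1.0 / (1.0 + np.exp(-(W2 @ h)))    # sigmoid keeps outputs continuous

segment = np.array([1.0, 0.0, 1.0, 0.0])      # one segment's feature bundle
print(articulate(segment))                    # three graded articulator values
```

The point of the sketch is only that categorical inputs yield graded, non-categorical outputs; a trained network would learn the weights from articulatory data.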
2. Jaswal, Snehlata, and Robert H. Logie. "Configural encoding in visual feature binding." Journal of Cognitive Psychology 23, no. 5 (2011): 586–603. http://dx.doi.org/10.1080/20445911.2011.570256.

3. Wu, Pengxiang, Chao Chen, Jingru Yi, and Dimitris Metaxas. "Point Cloud Processing via Recurrent Set Encoding." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5441–49. http://dx.doi.org/10.1609/aaai.v33i01.33015441.

Abstract:
We present a new permutation-invariant network for 3D point cloud processing. Our network is composed of a recurrent set encoder and a convolutional feature aggregator. Given an unordered point set, the encoder firstly partitions its ambient space into parallel beams. Points within each beam are then modeled as a sequence and encoded into subregional geometric features by a shared recurrent neural network (RNN). The spatial layout of the beams is regular, and this allows the beam features to be further fed into an efficient 2D convolutional neural network (CNN) for hierarchical feature aggregation. Our network is effective at spatial feature learning, and competes favorably with the state-of-the-arts (SOTAs) on a number of benchmarks. Meanwhile, it is significantly more efficient compared to the SOTAs.
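The pipeline in this abstract (partition the ambient space into parallel beams, order the points inside each beam, run a shared RNN per beam) can be sketched as follows. The grid size, RNN dimensions, and weights are illustrative assumptions, and the toy Elman-style recurrence stands in for the paper's RNN; the 2D CNN aggregator is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.uniform(0, 1, size=(200, 3))   # toy unordered point set in [0,1]^3

GRID = 4   # 4x4 parallel beams over the (x, y) plane, running along z
HID = 16   # shared RNN hidden size

Wx = rng.normal(scale=0.3, size=(HID, 3))
Wh = rng.normal(scale=0.3, size=(HID, HID))

def encode_beam(seq):
    """Shared Elman-style RNN; the final hidden state is the beam feature."""
    h = np.zeros(HID)
    for p in seq:
        h = np.tanh(Wx @ p + Wh @ h)
    return h

# Assign each point to a beam, order within each beam along z, then encode.
ix = np.minimum((points[:, 0] * GRID).astype(int), GRID - 1)
iy = np.minimum((points[:, 1] * GRID).astype(int), GRID - 1)
beam_features = np.zeros((GRID, GRID, HID))
for i in range(GRID):
    for j in range(GRID):
        beam = points[(ix == i) & (iy == j)]
        beam = beam[np.argsort(beam[:, 2])]   # sequence order along the beam
        beam_features[i, j] = encode_beam(beam)

# beam_features is a regular GRID x GRID x HID map, the kind of layout that
# can be fed to a 2D CNN for hierarchical aggregation.
print(beam_features.shape)
```

Note how permutation invariance arises: reordering the input points changes nothing, because membership and within-beam ordering are determined by coordinates alone.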
4. Eurich, Christian W., and Stefan D. Wilke. "Multidimensional Encoding Strategy of Spiking Neurons." Neural Computation 12, no. 7 (2000): 1519–29. http://dx.doi.org/10.1162/089976600300015240.

Abstract:
Neural responses in sensory systems are typically triggered by a multitude of stimulus features. Using information theory, we study the encoding accuracy of a population of stochastically spiking neurons characterized by different tuning widths for the different features. The optimal encoding strategy for representing one feature most accurately consists of narrow tuning in the dimension to be encoded, to increase the single-neuron Fisher information, and broad tuning in all other dimensions, to increase the number of active neurons. Extremely narrow tuning without sufficient receptive field overlap will severely worsen the coding. This implies the existence of an optimal tuning width for the feature to be encoded. Empirically, only a subset of all stimulus features will normally be accessible. In this case, relative encoding errors can be calculated that yield a criterion for the function of a neural population based on the measured tuning curves.
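The tuning-width trade-off in this abstract can be made concrete with a 1D numeric sketch. For a Poisson neuron with tuning curve f(x), the Fisher information about the stimulus is J(x) = f'(x)² / f(x); summing over a population of Gaussian-tuned neurons shows that extreme narrowing hurts (too few active neurons at stimuli between preferred values) while extreme broadening also hurts, so an intermediate width wins. The peak rate, neuron spacing, and widths below are arbitrary illustrative values, not the paper's.

```python
import numpy as np

centers = np.linspace(0, 10, 21)   # preferred stimuli, spacing 0.5
F = 10.0                            # peak firing rate

def population_fisher(x, sigma):
    """Total Fisher information at stimulus x for Gaussian tuning width sigma."""
    f = F * np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))
    df = -f * (x - centers) / sigma ** 2
    return np.sum(df ** 2 / np.maximum(f, 1e-12))

x = 5.25  # a stimulus that falls between two preferred values
for sigma in (0.05, 0.5, 2.0):
    print(sigma, population_fisher(x, sigma))
```

With these numbers the intermediate width (0.5, comparable to the neuron spacing) yields far more information than either the very narrow or the very broad tuning, mirroring the paper's claim of an optimal tuning width in the encoded dimension.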
5. Shinomiya, Yuki, and Yukinobu Hoshino. "A Quantitative Quality Measurement for Codebook in Feature Encoding Strategies." Journal of Advanced Computational Intelligence and Intelligent Informatics 21, no. 7 (2017): 1232–39. http://dx.doi.org/10.20965/jaciii.2017.p1232.

Abstract:
Nowadays, a feature encoding strategy is a general approach to represent a document, an image or audio as a feature vector. In image recognition problems, this approach treats an image as a set of partial feature descriptors. The set is then converted to a feature vector based on basis vectors called codebook. This paper focuses on a prior probability, which is one of codebook parameters and analyzes dependency for the feature encoding. In this paper, we conducted the following two experiments, analysis of prior probabilities in state-of-the-art encodings and control of prior probabilities. The first experiment investigates the distribution of prior probabilities and compares recognition performances of recent techniques. The results suggest that recognition performance probably depends on the distribution of prior probabilities. The second experiment tries further statistical analysis by controlling the distribution of prior probabilities. The results show a strong negative linear relationship between a standard deviation of prior probabilities and recognition accuracy. From these experiments, the quality of codebook used for feature encoding can be quantitatively measured, and recognition performances can be improved by optimizing codebook. Besides, the codebook is created at an offline step. Therefore, optimizing codebook does not require any additional computational cost for practical applications.
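The quantity the paper studies, the prior probability of each codeword and the spread of those priors, can be estimated as sketched below. The descriptors and codebook here are random stand-ins (real ones would come from local image descriptors and, e.g., k-means), and the hard nearest-codeword assignment is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
descriptors = rng.normal(size=(5000, 16))   # stand-in for local image descriptors
codebook = rng.normal(size=(32, 16))        # stand-in for a learned codebook

# Hard-assign each descriptor to its nearest codeword (squared Euclidean distance).
d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
assignments = d2.argmin(axis=1)

# Empirical prior probability of each codeword, from assignment counts.
counts = np.bincount(assignments, minlength=len(codebook))
priors = counts / counts.sum()

# Priors sum to 1; their standard deviation is the spread the paper relates
# (negatively) to recognition accuracy.
print(priors.sum(), priors.std())
```

Under the paper's finding, a codebook whose priors have a smaller standard deviation (more uniform codeword usage) would be the better one, and this can be checked offline, before any recognition is run.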
6. Ronran, Chirawan, Seungwoo Lee, and Hong Jun Jang. "Delayed Combination of Feature Embedding in Bidirectional LSTM CRF for NER." Applied Sciences 10, no. 21 (2020): 7557. http://dx.doi.org/10.3390/app10217557.

Abstract:
Named Entity Recognition (NER) plays a vital role in natural language processing (NLP). Currently, deep neural network models have achieved significant success in NER. Recent advances in NER systems have introduced various feature selections to identify appropriate representations and handle Out-Of-the-Vocabulary (OOV) words. After selecting the features, they are all concatenated at the embedding layer before being fed into a model to label the input sequences. However, when concatenating the features, information collisions may occur and this would cause the limitation or degradation of the performance. To overcome the information collisions, some works tried to directly connect some features to latter layers, which we call the delayed combination and show its effectiveness by comparing it to the early combination. As feature encodings for input, we selected the character-level Convolutional Neural Network (CNN) or Long Short-Term Memory (LSTM) word encoding, the pre-trained word embedding, and the contextual word embedding and additionally designed CNN-based sentence encoding using a dictionary. These feature encodings are combined at early or delayed position of the bidirectional LSTM Conditional Random Field (CRF) model according to each feature’s characteristics. We evaluated the performance of this model on the CoNLL 2003 and OntoNotes 5.0 datasets using the F1 score and compared the delayed combination model with our own implementation of the early combination as well as the previous works. This comparison convinces us that our delayed combination is more effective than the early one and also highly competitive.
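The structural contrast between early and delayed combination can be sketched as below. The "encoder" is a toy stand-in for the bidirectional LSTM, all dimensions and weights are illustrative, and the CRF layer that would consume the combined representation is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 6                                   # sentence length in tokens
word_emb = rng.normal(size=(T, 8))      # e.g. pretrained word embeddings
char_emb = rng.normal(size=(T, 4))      # e.g. character-level CNN/LSTM encoding
dict_emb = rng.normal(size=(T, 3))      # e.g. dictionary-based sentence feature

def toy_encoder(x, out_dim=10):
    """Stand-in for BiLSTM hidden states over the token sequence."""
    W = rng.normal(scale=0.3, size=(x.shape[1], out_dim))
    return np.tanh(x @ W)

# Early combination: concatenate every feature before the encoder, so all
# features share the embedding layer (where information collisions can occur).
early = toy_encoder(np.concatenate([word_emb, char_emb, dict_emb], axis=1))

# Delayed combination: encode the core features, then concatenate the remaining
# feature with the encoder output just before the (omitted) CRF layer.
delayed = np.concatenate(
    [toy_encoder(np.concatenate([word_emb, char_emb], axis=1)), dict_emb], axis=1
)

print(early.shape, delayed.shape)
```

The design question the paper studies is simply *where* each feature enters this pipeline, chosen per feature according to its characteristics.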
7. James, Melissa S., Stuart J. Johnstone, and William G. Hayward. "Event-Related Potentials, Configural Encoding, and Feature-Based Encoding in Face Recognition." Journal of Psychophysiology 15, no. 4 (2001): 275–85. http://dx.doi.org/10.1027//0269-8803.15.4.275.

Abstract:
The effects of manipulating configural and feature information on the face recognition process were investigated by recording event-related potentials (ERPs) from five electrode sites (Fz, Cz, Pz, T5, T6), while 17 European subjects performed an own-race and other-race face recognition task. A series of upright faces were presented in a study phase, followed by a test phase where subjects indicated whether inverted and upright faces were studied or novel via a button press response. An inversion effect, illustrating the disruption of upright configural information, was reflected in accuracy measures and in greater lateral N2 amplitude to inverted faces, suggesting that structural encoding is harder for inverted faces. An own-race advantage was found, which may reflect the use of configural encoding for the more frequently experienced own-race faces, and feature-based encoding for the less familiar other-race faces, and was reflected in accuracy measures and ERP effects. The midline N2 was larger to configurally encoded faces (i.e., own-race and upright), possibly suggesting configural encoding involves more complex processing than feature-based encoding. An N400-like component was sensitive to feature manipulations, with greater amplitude to other-race than own-race faces and to inverted than upright faces. This effect was interpreted as reflecting increased activation of incompatible representations activated by a feature-based strategy used in processing of other-race and inverted faces. The late positive complex was sensitive to configural manipulation with larger amplitude to other-race than own-race faces, and was interpreted as reflecting the updating of an own-race norm used in face recognition, to incorporate other-race information.
8. Rao, Vibha S., and P. Ramesh Naidu. "Periocular and Iris Feature Encoding - A Survey." International Journal of Innovative Research in Computer and Communication Engineering 03, no. 01 (2015): 368–74. http://dx.doi.org/10.15680/ijircce.2015.0301023.

9. Huo, Lu, and Leijie Zhang. "Combined feature compression encoding in image retrieval." Turkish Journal of Electrical Engineering & Computer Sciences 27, no. 3 (2019): 1603–18. http://dx.doi.org/10.3906/elk-1803-3.

10. Lee, Hui-Jin, Ki-Sang Hong, Henry Kang, and Seungyong Lee. "Photo Aesthetics Analysis via DCNN Feature Encoding." IEEE Transactions on Multimedia 19, no. 8 (2017): 1921–32. http://dx.doi.org/10.1109/tmm.2017.2687759.
