Academic literature on the topic 'Vectorial embeddings'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Vectorial embeddings.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Vectorial embeddings"

1

Rydhe, Eskil. "Vectorial Hankel operators, Carleson embeddings, and notions of BMOA." Geometric and Functional Analysis 27, no. 2 (2017): 427–51. http://dx.doi.org/10.1007/s00039-017-0400-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ion, Radu, Vasile Păiș, Verginica Barbu Mititelu, et al. "Unsupervised Word Sense Disambiguation Using Transformer’s Attention Mechanism." Machine Learning and Knowledge Extraction 7, no. 1 (2025): 10. https://doi.org/10.3390/make7010010.

Abstract:
Transformer models produce advanced text representations that have been used to break through the hard challenge of natural language understanding. Using the Transformer’s attention mechanism, which acts as a language learning memory, trained on tens of billions of words, a word sense disambiguation (WSD) algorithm can now construct a more faithful vectorial representation of the context of a word to be disambiguated. Working with a set of 34 lemmas of nouns, verbs, adjectives and adverbs selected from the National Reference Corpus of Romanian (CoRoLa), we show that using BERT’s attention heads at all hidden layers, we can devise contextual vectors of the target lemma that produce better clusters of lemma’s senses than the ones obtained with standard BERT embeddings. If we automatically translate the Romanian example sentences of the target lemma into English, we show that we can reliably infer the number of senses with which the target lemma appears in the CoRoLa. We also describe an unsupervised WSD algorithm that, using a Romanian BERT model and a few example sentences of the target lemma’s senses, can label the Romanian induced sense clusters with the appropriate sense labels, with an average accuracy of 64%.
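The clustering step this abstract describes (grouping the contextual vectors of one lemma into sense clusters) can be sketched in a few lines. The vectors below are synthetic stand-ins for BERT attention-derived context vectors, and the plain k-means loop is only illustrative:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means; deterministic init from evenly spaced rows."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each vector to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each center as the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic stand-ins for contextual vectors of one ambiguous lemma:
# two well-separated "senses" in a 5-dimensional space.
rng = np.random.default_rng(1)
occurrences = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(20, 5)),  # contexts of sense A
    rng.normal(loc=3.0, scale=0.1, size=(20, 5)),  # contexts of sense B
])
labels = kmeans(occurrences, k=2)
print(len(set(labels[:20].tolist())), len(set(labels[20:].tolist())))  # 1 1
```

With real BERT-derived vectors the clusters are far less separable, which is why the paper's cluster-labeling step matters.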
3

Podda, Marco, Castrense Savojardo, Pier Luigi Martelli, et al. "A descriptor-free machine learning framework to improve antigen discovery for bacterial pathogens." PLOS One 20, no. 6 (2025): e0323895. https://doi.org/10.1371/journal.pone.0323895.

Abstract:
Identifying protective antigens (PAs), i.e., targets for bacterial vaccines, is challenging as conducting in-vivo tests at the proteome scale is impractical. Reverse Vaccinology (RV) aids in narrowing down the pool of candidates through computational screening of proteomes. Within RV, one prominent approach is to train Machine Learning (ML) models to classify PAs. These models can be used to predict unseen protein sequences and assist researchers in selecting promising candidates. Traditionally, proteins are fed into these models as vectors of biological and physico-chemical descriptors derived from their residue sequences. However, this method relies on multiple third-party software packages, which may be unreliable, difficult to use, or no longer maintained. Furthermore, selecting descriptors is susceptible to biases. Hence, Protein Sequence Embeddings (PSEs)—high-dimensional vectorial representations of protein sequences obtained from pretrained deep neural networks—have emerged as an alternative to descriptors, offering data-driven feature extraction and a streamlined computational pipeline. We introduce PSEs as a descriptor-free representation of protein sequences for ML in RV. We conducted a thorough comparison of PSE-based and descriptor-based pipelines for PA classification across 10 bacterial species evaluated independently. Our results show that the PSE-based pipeline, which leverages the FAIR ESM-2 protein language model, outperformed the descriptor-based pipeline in 9 out of 10 species, with a mean Area Under the Receiver Operating Characteristics curve (AUROC) of 0.875 versus 0.855. Additionally, it achieved superior performance on the iBPA benchmark (0.86 AUROC vs. 0.82) compared to other methods in the literature. Lastly, we applied the pipeline to rank unseen proteomes based on protective potential to guide candidate selection for pre-clinical testing. 
Compared to the standard RV practice of ranking candidates according to their biological descriptors, our approach reduces the number of pre-clinical tests needed to identify PAs by up to 83% on average.
4

Szymański, Piotr. "A broadband multistate interferometer for impedance measurement." Journal of Telecommunications and Information Technology, no. 2 (June 30, 2005): 29–33. http://dx.doi.org/10.26636/jtit.2005.2.311.

Abstract:
We present a new four-state interferometer for measuring vectorial reflection coefficient from 50 to 1800 MHz. The interferometer is composed of a four-state phase shifter, a double-directional coupler and a spectrum analyzer with an in-built tracking generator. We describe a design of the interferometer and methods developed for its calibration and de-embedding the measurements. Experimental data verify good accuracy of the impedance measurement.
5

Hammer, Barbara, and Alexander Hasenfuss. "Topographic Mapping of Large Dissimilarity Data Sets." Neural Computation 22, no. 9 (2010): 2229–84. http://dx.doi.org/10.1162/neco_a_00012.

Abstract:
Topographic maps such as the self-organizing map (SOM) or neural gas (NG) constitute powerful data mining techniques that allow simultaneously clustering data and inferring their topological structure, such that additional features, for example, browsing, become available. Both methods have been introduced for vectorial data sets; they require a classical feature encoding of information. Often data are available in the form of pairwise distances only, such as arise from a kernel matrix, a graph, or some general dissimilarity measure. In such cases, NG and SOM cannot be applied directly. In this article, we introduce relational topographic maps as an extension of relational clustering algorithms, which offer prototype-based representations of dissimilarity data, to incorporate neighborhood structure. These methods are equivalent to the standard (vectorial) techniques if a Euclidean embedding exists, while preventing the need to explicitly compute such an embedding. Extending these techniques for the general case of non-Euclidean dissimilarities makes possible an interpretation of relational clustering as clustering in pseudo-Euclidean space. We compare the methods to well-known clustering methods for proximity data based on deterministic annealing and discuss how far convergence can be guaranteed in the general case. Relational clustering is quadratic in the number of data points, which makes the algorithms infeasible for huge data sets. We propose an approximate patch version of relational clustering that runs in linear time. The effectiveness of the methods is demonstrated in a number of examples.
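The case the abstract mentions where "a Euclidean embedding exists" corresponds to classical multidimensional scaling, which recovers coordinates from a distance matrix by double centering and an eigendecomposition. The sketch below is that textbook construction, not the paper's relational trick (which deliberately avoids computing the embedding):

```python
import numpy as np

def classical_mds(D, dim):
    """Recover point coordinates from a Euclidean distance matrix via double centering."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # Gram matrix of the centered points
    w, V = np.linalg.eigh(B)                   # eigenvalues in ascending order
    top = np.argsort(w)[::-1][:dim]            # keep the `dim` largest components
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

# Points with a known 2-D layout; classical MDS should reproduce their distances
# (up to rotation/reflection).
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
X = classical_mds(D, dim=2)
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
print(np.allclose(D, D_rec))  # True
```

For non-Euclidean dissimilarities some eigenvalues of B turn negative, which is exactly the pseudo-Euclidean situation the paper analyzes.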
6

Riesen, Kaspar, and Horst Bunke. "Graph Classification Based on Vector Space Embedding." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 6 (2009): 1053–81. http://dx.doi.org/10.1142/s021800140900748x.

Abstract:
Graphs provide us with a powerful and flexible representation formalism for pattern classification. Many classification algorithms have been proposed in the literature. However, the vast majority of these algorithms rely on vectorial data descriptions and cannot directly be applied to graphs. Recently, a growing interest in graph kernel methods can be observed. Graph kernels aim at bridging the gap between the high representational power and flexibility of graphs and the large amount of algorithms available for object representations in terms of feature vectors. In the present paper, we propose an approach transforming graphs into n-dimensional real vectors by means of prototype selection and graph edit distance computation. This approach allows one to build graph kernels in a straightforward way. It is not only applicable to graphs, but also to other kind of symbolic data in conjunction with any kind of dissimilarity measure. Thus it is characterized by a high degree of flexibility. With several experimental results, we prove the robustness and flexibility of our new method and show that our approach outperforms other graph classification methods on several graph data sets of diverse nature.
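The prototype-selection embedding described here maps each object to the vector of its dissimilarities to a few chosen prototypes. A hedged sketch, substituting string edit distance for graph edit distance so the example stays dependency-free:

```python
import numpy as np

def levenshtein(a, b):
    """Edit distance between two strings (a stand-in for graph edit distance)."""
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1), dtype=int)
    d[:, 0] = np.arange(m + 1)
    d[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + sub)
    return int(d[m, n])

def dissimilarity_embedding(objects, prototypes, dist=levenshtein):
    """Each object becomes the vector of its distances to the chosen prototypes."""
    return np.array([[dist(o, p) for p in prototypes] for o in objects])

prototypes = ["kitten", "flaw"]
X = dissimilarity_embedding(["sitting", "lawn", "kitten"], prototypes)
print(X.shape)  # (3, 2): three objects embedded as 2-dimensional vectors
```

As the abstract notes, the same recipe works for any symbolic data paired with any dissimilarity measure; only `dist` changes.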
7

Ji, Jiayi, Yunpeng Luo, Xiaoshuai Sun, et al. "Improving Image Captioning by Leveraging Intra- and Inter-layer Global Representation in Transformer Network." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (2021): 1655–63. http://dx.doi.org/10.1609/aaai.v35i2.16258.

Abstract:
Transformer-based architectures have shown great success in image captioning, where object regions are encoded and then attended into the vectorial representations to guide the caption decoding. However, such vectorial representations only contain region-level information without considering the global information reflecting the entire image, which fails to expand the capability of complex multi-modal reasoning in image captioning. In this paper, we introduce a Global Enhanced Transformer (termed GET) to enable the extraction of a more comprehensive global representation, and then adaptively guide the decoder to generate high-quality captions. In GET, a Global Enhanced Encoder is designed for the embedding of the global feature, and a Global Adaptive Decoder are designed for the guidance of the caption generation. The former models intra- and inter-layer global representation by taking advantage of the proposed Global Enhanced Attention and a layer-wise fusion module. The latter contains a Global Adaptive Controller that can adaptively fuse the global information into the decoder to guide the caption generation. Extensive experiments on MS COCO dataset demonstrate the superiority of our GET over many state-of-the-arts.
8

Zhu, Huiming, Chunhui He, Yang Fang, Bin Ge, Meng Xing, and Weidong Xiao. "Patent Automatic Classification Based on Symmetric Hierarchical Convolution Neural Network." Symmetry 12, no. 2 (2020): 186. http://dx.doi.org/10.3390/sym12020186.

Abstract:
With the rapid growth of patent applications, it has become an urgent problem to automatically classify the accepted patent application documents accurately and quickly. Most previous patent automatic classification studies are based on feature engineering and traditional machine learning methods like SVM, and some even rely on the knowledge of domain experts, hence they suffer from low accuracy problem and have poor generalization ability. In this paper, we propose a patent automatic classification method via the symmetric hierarchical convolution neural network (CNN) named PAC-HCNN. We use the title and abstract of the patent as the input data, and then apply the word embedding technique to segment and vectorize the input data. Then we design a symmetric hierarchical CNN framework to classify the patents based on the word embeddings, which is much more efficient than traditional RNN models dealing with texts, meanwhile keeping the history and future information of the input sequence. We also add gated linear units (GLUs) and residual connection to help realize the deep CNN. Additionally, we equip our model with a self attention mechanism to address the long-term dependency problem. Experiments are performed on large-scale datasets for Chinese short text patent classification. Experimental results prove our proposed model’s effectiveness, and it performs better than other state-of-the-art models significantly and consistently on both fine-grained and coarse-grained classification.
9

Dutta, Anjan, Pau Riba, Josep Lladós, and Alicia Fornés. "Hierarchical stochastic graphlet embedding for graph-based pattern recognition." Neural Computing and Applications 32, no. 15 (2019): 11579–96. http://dx.doi.org/10.1007/s00521-019-04642-7.

Abstract:
Despite being very successful within the pattern recognition and machine learning community, graph-based methods are often unusable because of the lack of mathematical operations defined in the graph domain. Graph embedding, which maps graphs to a vectorial space, has been proposed as a way to tackle these difficulties, enabling the use of standard machine learning techniques. However, it is well known that graph embedding functions usually suffer from the loss of structural information. In this paper, we consider the hierarchical structure of a graph as a way to mitigate this loss of information. The hierarchical structure is constructed by topologically clustering the graph nodes and considering each cluster as a node in the upper hierarchical level. Once this hierarchical structure is constructed, we consider several configurations to define the mapping into a vector space given a classical graph embedding; in particular, we propose to make use of the stochastic graphlet embedding (SGE). Broadly speaking, SGE produces a distribution of uniformly sampled low-to-high-order graphlets as a way to embed graphs into the vector space. In what follows, the coarse-to-fine structure of a graph hierarchy and the statistics fetched by the SGE complement each other and include important structural information with varied contexts. Altogether, these two techniques substantially cope with the usual information loss involved in graph embedding techniques, obtaining a more robust graph representation. This fact has been corroborated through a detailed experimental evaluation on various benchmark graph datasets, where we outperform the state-of-the-art methods.
10

Szemenyei, Márton, and Ferenc Vajda. "3D Object Detection and Scene Optimization for Tangible Augmented Reality." Periodica Polytechnica Electrical Engineering and Computer Science 62, no. 2 (2018): 25–37. http://dx.doi.org/10.3311/ppee.10482.

Abstract:
Object recognition in 3D scenes is one of the fundamental tasks in computer vision. It is used frequently in robotics or augmented reality applications [1]. In our work we intend to apply 3D shape recognition to create a Tangible Augmented Reality system that is able to pair virtual and real objects in natural indoors scenes. In this paper we present a method for arranging virtual objects in a real-world scene based on primitive shape graphs. For our scheme, we propose a graph node embedding algorithm for graphs with vectorial nodes and edges, and genetic operators designed to improve the quality of the global setup of virtual objects. We show that our methods improve the quality of the arrangement significantly.
More sources

Dissertations / Theses on the topic "Vectorial embeddings"

1

Cvetkov-Iliev, Alexis. "Embedding models for relational data analytics." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG004.

Abstract:
Analytical pipelines, such as those relying on machine learning models, typically require data in the form of a single table describing the entities under study with a fixed set of attributes or features. In practice, however, data often come as relational data (e.g. relational databases or knowledge graphs), where information on the entities of interest is irregular and scattered across sources. To leverage this relational data, it must be assembled into a format suitable for analysis, which requires time and expertise from the analyst. As an alternative, we investigate in this thesis the potential of embedding models to facilitate relational data assembling. We especially consider two data integration problems: 1) entity matching (e.g. linking "Paris" and "Paris, FR") when dealing with non-normalized data sources that have different knowledge-representation conventions; and 2) feature engineering over relational data to enrich data analyses with background information. Finally, we show that embedding models are indeed promising tools for relational data analytics: 1) "good" vectorial representations (a.k.a. embeddings) of entities can replace manual entity matching without hindering the quality of subsequent analyses; and 2) entity embeddings learned directly over relational data can automate feature engineering in an efficient and scalable way, paving the way for general-purpose representations that can bring background information to various downstream tasks.
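As a toy illustration of the entity-matching problem the thesis studies ("Paris" vs. "Paris, FR"), even untrained character n-gram count vectors compared by cosine similarity go some way. This is a hypothetical sketch, not the embedding models the thesis actually proposes:

```python
import numpy as np
from collections import Counter

def char_ngrams(s, n=3):
    """Character trigram counts, padded so short strings still get boundary features."""
    padded = f" {s.lower()} "
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

def cosine(a, b):
    keys = sorted(set(a) | set(b))
    va = np.array([a.get(k, 0) for k in keys], dtype=float)
    vb = np.array([b.get(k, 0) for k in keys], dtype=float)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def best_match(query, candidates):
    """Return the candidate entity whose n-gram vector is closest to the query's."""
    q = char_ngrams(query)
    return max(candidates, key=lambda c: cosine(q, char_ngrams(c)))

print(best_match("Paris, FR", ["Paris", "London", "Parma"]))  # Paris
```

Learned entity embeddings improve on this mainly by also capturing semantic relatedness that shares no surface characters.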
2

Chinea Ríos, Mara. "Advanced techniques for domain adaptation in Statistical Machine Translation." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/117611.

Abstract:
Statistical Machine Translation is a subfield of computational linguistics that investigates how to use computers in the process of translating a text from one human language to another. Statistical machine translation is the most popular approach used to build these automatic translation systems. The quality of such systems depends largely on the translation examples used during the training and adaptation of the models. The datasets employed are obtained from a wide variety of sources, and in many cases the most suitable data for a specific domain may not be at hand. Given this data-scarcity problem, the main idea for solving it is to find the datasets best suited to training or adapting a translation system. In this vein, this thesis proposes a set of data selection techniques that identify the bilingual data most relevant to a task within a large data collection. As a first step, these data selection techniques are applied to improve the translation quality of systems under the phrase-based paradigm. The techniques are based on the concept of continuous representations of words or sentences in a vector space. Experimental results show that they are effective for different languages and domains. The Neural Machine Translation paradigm was also explored in this thesis. Within this paradigm, we investigate how the data selection techniques previously validated in the phrase-based paradigm can be applied, focusing on two different system-adaptation tasks. On the one hand, we investigate how to increase the translation quality of the system by enlarging the training set. On the other hand, the data selection method is used to create a synthetic dataset. Experiments were carried out for different domains, and the translation results obtained are convincing for both tasks. Finally, it should be noted that the techniques developed and presented throughout this thesis can easily be implemented in a real translation scenario.
Chinea Ríos, M. (2019). Advanced techniques for domain adaptation in Statistical Machine Translation [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/117611

Book chapters on the topic "Vectorial embeddings"

1

Guo, Yi, Junbin Gao, and Paul W. Kwan. "Regularized Kernel Local Linear Embedding on Dimensionality Reduction for Non-vectorial Data." In AI 2009: Advances in Artificial Intelligence. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10439-8_25.

2

Yona, Golan. "Embedding Algorithms and Vectorial Representations." In Introduction to Computational Proteomics. Chapman and Hall/CRC, 2010. http://dx.doi.org/10.1201/9781420010770-11.

3

Nejadgholi, Isar, Renaud Bougueng, and Samuel Witherspoon. "A Semi-Supervised Training Method for Semantic Search of Legal Facts in Canadian Immigration Cases." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2017. https://doi.org/10.3233/978-1-61499-838-9-125.

Abstract:
A semi-supervised approach was introduced to develop a semantic search system, capable of finding legal cases whose fact-asserting sentences are similar to a given query, in a large legal corpus. First, an unsupervised word embedding model learns the meaning of legal words from a large immigration law corpus. Then this knowledge is used to initiate the training of a fact detecting classifier with a small set of annotated legal cases. We achieved 90% accuracy in detecting fact sentences, where only 150 annotated documents were available. The hidden layer of the trained classifier is used to vectorize sentences and calculate cosine similarity between fact-asserting sentences and the given queries. We reached 78% mean average precision score in searching semantically similar sentences.
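The retrieval step described above (cosine similarity between fact-asserting sentence vectors and a query vector) reduces to a normalized matrix–vector product. A minimal sketch with toy vectors standing in for the classifier's hidden-layer encodings:

```python
import numpy as np

def rank_by_cosine(query_vec, sentence_vecs):
    """Indices of sentence vectors sorted by decreasing cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    M = sentence_vecs / np.linalg.norm(sentence_vecs, axis=1, keepdims=True)
    sims = M @ q                       # cosine similarity of each row with the query
    return np.argsort(sims)[::-1], sims

# Toy vectors standing in for the classifier's hidden-layer sentence encodings.
sentence_vecs = np.array([
    [0.9, 0.1, 0.0],   # fact-asserting sentence close to the query
    [0.0, 1.0, 0.0],
    [0.1, 0.0, 1.0],
])
query = np.array([1.0, 0.0, 0.1])
order, sims = rank_by_cosine(query, sentence_vecs)
print(int(order[0]))  # 0: the first sentence is the best match
```

Pre-normalizing and caching the sentence matrix makes this a single matrix product per query, which is what lets such a system scale to a large legal corpus.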
4

de Hoop, Adrianus T. "Array-structure theory of Maxwell wavefields in affine (3 + 1)-spacetime: An overview." In Pulsed Electromagnetic Fields: Their Potentialities, Computation and Evaluation. IOS Press, 2013. https://doi.org/10.3233/978-1-61499-230-1-21.

Abstract:
An array-structure theory of Maxwell wavefields in affine (3 + 1)-spacetime is presented. The structure is designed to supersede the conventional Gibbs vector calculus and Heaviside vectorial Maxwell equations formulations, deviates from the Einstein view on spacetime as having a metrical structure (with the, non-definite, Lorentz metric), and adheres to the Weyl view where spacetime is conceived as being affine in nature. In the theory, the electric field and source quantities are introduced as one-dimensional arrays and the magnetic field and source quantities as antisymmetrical two-dimensional arrays. Time-convolution and time-correlation field/source reciprocity are discussed, and expressions for the wavefield radiated by sources in an unbounded, homogeneous, isotropic, lossless embedding are derived. These expressions clearly exhibit their structure as convolutions in spacetime. The bookkeeping of the array structure smoothly fits the input requirements of computational software packages. An interesting result of fundamental physical importance is that the 'magnetic charge' appears as a completely antisymmetrical three-dimensional array rather than as a number (as in the Dirac quantum theory of the magnetic monopole). The generalization of the array structure to affine (N + 1)-spacetime with N > 3 is straightforward and is conjectured to serve a purpose in theoretical cosmology. No particular 'orientation' of the observer's spatial reference frame (like the 'right-handedness' in conventional vector calculus) is required.
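The 'antisymmetrical two-dimensional array' representation of the magnetic field mentioned in the abstract corresponds to the standard identification B_ij = ε_ijk b_k. A short numerical illustration of that bookkeeping (the field values here are arbitrary):

```python
import numpy as np

# Levi-Civita symbol eps[i, j, k]: +1 on even permutations, -1 on odd ones.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

b = np.array([2.0, -1.0, 3.0])       # magnetic field as a one-dimensional array
B = np.einsum('ijk,k->ij', eps, b)   # ...and as an antisymmetric two-dimensional array

print(np.allclose(B, -B.T))  # True: the two-dimensional form is antisymmetric
```

The antisymmetry is what makes the two-index form coordinate-frame-agnostic, which is the point of the array bookkeeping in the paper.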

Conference papers on the topic "Vectorial embeddings"

1

Aoun, Paulo Henrique Calado, Andre C. A. Nascimento, and Adenilton J. Da Silva. "Evaluation of Dimensionality Reduction and Truncation Techniques for Word Embeddings." In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4477.

Abstract:
The use of word embeddings is becoming very common in many Natural Language Processing tasks. Most of the time, these require computational resources that cannot be found on most current mobile devices. In this work, we evaluate a combination of numeric truncation and dimensionality reduction strategies in order to obtain smaller vectorial representations without substantial losses in performance.
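The combination the paper evaluates (dimensionality reduction plus numeric truncation) can be sketched as an SVD-based projection followed by a cast to half precision. The embedding matrix below is synthetic, and the exact strategies in the paper may differ:

```python
import numpy as np

def reduce_and_truncate(E, dim):
    """Project embeddings onto the top `dim` principal components, then cast to float16."""
    Ec = E - E.mean(axis=0)                            # center the embedding matrix
    U, S, Vt = np.linalg.svd(Ec, full_matrices=False)  # principal directions in Vt
    reduced = Ec @ Vt[:dim].T                          # dimensionality reduction
    return reduced.astype(np.float16)                  # numeric truncation

rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 300)).astype(np.float32)    # synthetic word embeddings
small = reduce_and_truncate(E, dim=50)

print(E.nbytes // small.nbytes)  # 12: (300/50) dims x (4/2) bytes per value
```

The two savings multiply, which is why combining both strategies is attractive on memory-constrained mobile devices.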
2

Gutierrez-Vasquez, Ximena, and Victor Mijangos. "Low-resource bilingual lexicon extraction using graph based word embeddings." In LatinX in AI at Neural Information Processing Systems Conference 2018. Journal of LatinX in AI Research, 2018. http://dx.doi.org/10.52591/lxai2018120323.

Abstract:
In this work we focus on the task of automatically extracting a bilingual lexicon for the language pair Spanish–Nahuatl. This is a low-resource setting where only a small amount of parallel corpus is available. Most of the downstream methods do not work well under low-resource conditions. This is especially true for the approaches that use vectorial representations like Word2Vec. Our proposal is to construct bilingual word vectors from a graph. This graph is generated using translation pairs obtained from an unsupervised word alignment method. We show that, in a low-resource setting, these types of vectors are successful in representing words in a bilingual semantic space. Moreover, when a linear transformation is applied to translate words from one language to another, our graph-based representations considerably outperform the popular setting that uses Word2Vec.
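The 'linear transformation applied to translate words from one language to another' is commonly learned by least squares over seed translation pairs. A toy sketch with synthetic, noise-free embedding spaces (real cross-lingual data would only fit approximately):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
W_true = rng.normal(size=(d, d))        # hidden "ground-truth" source-to-target map
X_src = rng.normal(size=(200, d))       # source-language word vectors (toy)
X_tgt = X_src @ W_true                  # perfectly aligned target-language vectors

# Learn the map from seed translation pairs by least squares:
# find W minimizing ||X_src @ W - X_tgt||.
W, *_ = np.linalg.lstsq(X_src, X_tgt, rcond=None)
translated = X_src @ W                  # map source vectors into the target space
print(np.allclose(translated, X_tgt))   # True (the data here are noise-free)
```

With noisy real embeddings, a nearest-neighbor search in the target space after applying W yields the translation candidates.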
3

Guo, Yi, Junbin Gao, and Paul Kwan. "Visualization of Non-vectorial Data Using Twin Kernel Embedding." In 2006 International Workshop on Integrating AI and Data Mining. IEEE, 2006. http://dx.doi.org/10.1109/aidm.2006.18.

4

Guo, Yi, Junbin Gao, and Paul W. Kwan. "Learning Out-Of Sample Mapping in Non-Vectorial Data Reduction using Constrained Twin Kernel Embedding." In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370108.

5

K R, Sushmitha, Rangalakshmi G R, and Suguna A. "Sentiment Analysis of Incoming Voice Calls." In International Conference on Recent Trends in Computing & Communication Technologies (ICRCCT’2K24). International Journal of Advanced Trends in Engineering and Management, 2024. http://dx.doi.org/10.59544/bisl3666/icrcct24p19.

Abstract:
This project aims to meet the increasing need for real-time sentiment analysis within voice call interactions, acknowledging the rising significance of voice-based engagements in today's telecommunications realm. For instance, pre-trained word embeddings, such as Word2Vec, GloVe, and bidirectional encoder representations from transformers (BERT), generate vectors by considering word distances, similarities, and occurrences, ignoring other aspects such as word sentiment orientation. Aiming at such limitations, this paper presents a sentiment classification model (named LeBERT) combining a sentiment lexicon, N-grams, BERT, and a CNN. In the model, the sentiment lexicon, N-grams, and BERT are used to vectorize words selected from a section of the input text. The CNN is used as the deep neural network classifier for feature mapping and giving the output sentiment class. The proposed model is evaluated on three public datasets, namely the Amazon product reviews, IMDb movie reviews, and Yelp restaurant reviews datasets.