
Journal articles on the topic 'Embedding types'


Consult the top 50 journal articles for your research on the topic 'Embedding types.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Cárdenas, M., T. Fernández, F. F. Lasheras, and A. Quintero. "Embedding proper homotopy types." Colloquium Mathematicum 95, no. 1 (2003): 1–20. http://dx.doi.org/10.4064/cm95-1-1.

2

Wang, Peng, Jing Zhou, Yuzhang Liu, and Xingchen Zhou. "TransET: Knowledge Graph Embedding with Entity Types." Electronics 10, no. 12 (June 11, 2021): 1407. http://dx.doi.org/10.3390/electronics10121407.

Abstract:
Knowledge graph embedding aims to embed entities and relations into low-dimensional vector spaces. Most existing methods only focus on triple facts in knowledge graphs. In addition, models based on translation or distance measurement cannot fully represent complex relations. As well-constructed prior knowledge, entity types can be employed to learn the representations of entities and relations. In this paper, we propose a novel knowledge graph embedding model named TransET, which takes advantage of entity types to learn more semantic features. More specifically, circle convolution over the embeddings of an entity and its entity types is used to map the head and tail entities to type-specific representations, and then a translation-based score function is used to learn the representations of triples. We evaluated our model on real-world datasets with the two benchmark tasks of link prediction and triple classification. Experimental results demonstrate that it outperforms state-of-the-art models in most cases.
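As background for translation-based models such as the one in entry 2, the sketch below shows a generic TransE-style scoring function rather than TransET itself; the entities, relation, embedding dimension, and random initialisation are illustrative assumptions, and a real model would train the embeddings with a margin-based ranking loss over observed triples.

```python
# Minimal sketch of a generic translation-based scoring function in the
# spirit of TransE. This is NOT the TransET model from entry 2; it only
# illustrates the "head + relation ~ tail" idea such models build on.
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # illustrative embedding dimension

entities = ["Paris", "France", "Berlin", "Germany"]   # toy examples
relations = ["capital_of"]

# Randomly initialised embeddings; training would adjust these.
ent_emb = {e: rng.normal(size=dim) for e in entities}
rel_emb = {r: rng.normal(size=dim) for r in relations}

def score(head: str, relation: str, tail: str) -> float:
    """Higher (less negative) score means the triple is more plausible:
    the model wants head + relation to be close to tail."""
    h, r, t = ent_emb[head], rel_emb[relation], ent_emb[tail]
    return -float(np.linalg.norm(h + r - t))

print(score("Paris", "capital_of", "France"))
```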
3

Hebda, James J. "The possible cohomology of certain types of taut submanifolds." Nagoya Mathematical Journal 111 (September 1988): 85–97. http://dx.doi.org/10.1017/s0027763000001008.

Abstract:
The first purpose of this paper is to exhibit several families of compact manifolds that do not admit taut embeddings into any sphere. The second is to enumerate the possible Z2-cohomology rings of those compact manifolds which do admit a taut embedding and whose cohomology rings satisfy certain degeneracy conditions. The first purpose is easily attained once the second has been accomplished, for it is a simple matter to present families of spaces whose cohomology rings satisfy the required degeneracy conditions, but are not on the list of those admitting a taut embedding.
4

Bocchi, Laura, and Romain Demangeon. "Embedding Session Types in HML." Electronic Proceedings in Theoretical Computer Science 137 (December 8, 2013): 53–62. http://dx.doi.org/10.4204/eptcs.137.5.

5

Lindley, Sam, and J. Garrett Morris. "Embedding session types in Haskell." ACM SIGPLAN Notices 51, no. 12 (July 19, 2018): 133–45. http://dx.doi.org/10.1145/3241625.2976018.

6

BOZKURT, ILKER NADI, HAI HUANG, BRUCE MAGGS, ANDRÉA RICHA, and MAVERICK WOO. "Mutual Embeddings." Journal of Interconnection Networks 15, no. 01n02 (March 2015): 1550001. http://dx.doi.org/10.1142/s0219265915500012.

Abstract:
This paper introduces a type of graph embedding called a mutual embedding. A mutual embedding between two n-node graphs G1 = (V1, E1) and G2 = (V2, E2) is an identification of the vertices of V1 and V2, i.e., a bijection π: V1 → V2, together with an embedding of G1 into G2 and an embedding of G2 into G1, where in the embedding of G1 into G2 each node u of G1 is mapped to π(u) in G2, and in the embedding of G2 into G1 each node v of G2 is mapped to π⁻¹(v) in G1. The identification of vertices in G1 and G2 constrains the two embeddings so that it is not always possible for both to exhibit small congestion and dilation, even if there are traditional one-way embeddings in both directions with small congestion and dilation. Mutual embeddings arise in the context of finding preconditioners for accelerating the convergence of iterative methods for solving systems of linear equations. We present mutual embeddings between several types of graphs such as linear arrays, cycles, trees, and meshes, prove lower bounds on mutual embeddings between several classes of graphs, and present some open problems related to optimal mutual embeddings.
7

LUDWIG, LEWIS D., and PAMELA ARBISI. "LINKING IN STRAIGHT-EDGE EMBEDDINGS OF K7." Journal of Knot Theory and Its Ramifications 19, no. 11 (November 2010): 1431–47. http://dx.doi.org/10.1142/s0218216510008467.

Abstract:
In 1983, Conway and Gordon, and independently Sachs, proved that the complete graph on six vertices, K6, is intrinsically linked: every embedding of it contains a pair of linked cycles. In 2004 it was shown that all straight-edge embeddings of K6 have either one or three linked triangle pairs. We expand this work to characterize the straight-edge embeddings of K7 and determine the number and types of links in every embedding whose seven vertices form a convex polyhedron.
8

Park, Chanyoung, Donghyun Kim, Jiawei Han, and Hwanjo Yu. "Unsupervised Attributed Multiplex Network Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5371–78. http://dx.doi.org/10.1609/aaai.v34i04.5985.

Abstract:
Nodes in a multiplex network are connected by multiple types of relations. However, most existing network embedding methods assume that only a single type of relation exists between nodes. Even for those that consider the multiplexity of a network, they overlook node attributes, resort to node labels for training, and fail to model the global properties of a graph. We present a simple yet effective unsupervised network embedding method for attributed multiplex network called DMGI, inspired by Deep Graph Infomax (DGI) that maximizes the mutual information between local patches of a graph, and the global representation of the entire graph. We devise a systematic way to jointly integrate the node embeddings from multiple graphs by introducing 1) the consensus regularization framework that minimizes the disagreements among the relation-type specific node embeddings, and 2) the universal discriminator that discriminates true samples regardless of the relation types. We also show that the attention mechanism infers the importance of each relation type, and thus can be useful for filtering unnecessary relation types as a preprocessing step. Extensive experiments on various downstream tasks demonstrate that DMGI outperforms the state-of-the-art methods, even though DMGI is fully unsupervised.
9

KARLSSON, FRED. "Constraints on multiple center-embedding of clauses." Journal of Linguistics 43, no. 2 (June 18, 2007): 365–92. http://dx.doi.org/10.1017/s0022226707004616.

Abstract:
A common view in theoretical syntax and computational linguistics holds that there are no grammatical restrictions on multiple center-embedding of clauses. Syntax would thus be characterized by unbounded recursion. An analysis of 119 genuine multiple clausal center-embeddings from seven ‘Standard Average European’ languages (English, Finnish, French, German, Latin, Swedish, Danish) uncovers usage-based regularities, constraints, that run counter to these and several other widely held views, such as that any type of multiple self-embedding (of the same clause type) would be possible, or that self-embedding would be more complex than multiple center-embedding of different clause types. The maximal degree of center-embedding in written language is three. In spoken language, multiple center-embedding is practically absent. Typical center-embeddings of any degree involve relative clauses specifying the referent of the subject NP of the superordinate clause. Only postmodifying clauses, especially relative clauses and that-clauses acting as noun complements, allow central self-embedding. Double relativization of objects (The rat the cat the dog chased killed ate the malt) does not occur. These corpus-based ‘soft constraints’ suggest that full-blown recursion creating multiple clausal center-embedding is not a central design feature of language in use. Multiple center-embedding emerged with the advent of written language, with Aristotle, Cicero, and Livy in the Greek and Latin stylistic tradition of ‘periodic’ sentence composition.
10

Zhao, Yu, Jiayue Hou, Zongjian Yu, Yun Zhang, and Qing Li. "Confidence-Aware Embedding for Knowledge Graph Entity Typing." Complexity 2021 (April 16, 2021): 1–8. http://dx.doi.org/10.1155/2021/3473849.

Abstract:
Knowledge graph (KG) entity typing aims to predict the potential types of an entity, that is, (entity, entity type = ?). Recently, several embedding models have been proposed for KG entity type prediction according to the existing typing information of the (entity, entity type) tuples in KGs. However, most of them unreasonably assume that all existing entity typing instances in KGs are completely correct, ignoring the non-negligible entity type noise, which may lead to errors in downstream tasks. To address this problem, we propose ConfE, a novel confidence-aware embedding approach for modeling the (entity, entity type) tuples, which takes tuple confidence into consideration for learning better embeddings. Specifically, we learn the embeddings of entities and entity types in separate entity and entity type spaces since they are different objects in KGs. We utilize an asymmetric matrix to specify the interaction of their embeddings and incorporate the tuple confidence as well. To make the tuple confidence more universal, we consider only the internal structural information in existing KGs. We evaluate our model on two tasks: entity type noise detection and entity type prediction. Extensive experimental results on two public benchmark datasets (i.e., FB15kET and YAGO43kET) demonstrate that our proposed model outperforms all baselines on all tasks, which verifies the effectiveness of ConfE in learning better embeddings on noisy KGs. The source code and data of this work can be obtained from https://github.com/swufenlp/ConfE.
11

Makarov, Ilya, Dmitrii Kiselev, Nikita Nikitinsky, and Lovro Subelj. "Survey on graph embeddings and their applications to machine learning problems on graphs." PeerJ Computer Science 7 (February 4, 2021): e357. http://dx.doi.org/10.7717/peerj-cs.357.

Abstract:
Dealing with relational data has always required significant computational resources, domain expertise and task-dependent feature engineering to incorporate structural information into a predictive model. Nowadays, a family of automated graph feature engineering techniques has been proposed in different streams of literature. So-called graph embeddings provide a powerful tool to construct vectorized feature spaces for graphs and their components, such as nodes, edges and subgraphs, while preserving intrinsic graph properties. Using the constructed feature spaces, many machine learning problems on graphs can be solved via standard frameworks suitable for vectorized feature representation. Our survey aims to describe the core concepts of graph embeddings and provide several taxonomies for their description. First, we start with the methodological approach and extract three types of graph embedding models based on matrix factorization, random walks and deep learning approaches. Next, we describe how different types of networks impact the ability of models to incorporate structural and attributed data into a unified embedding. Going further, we perform a thorough evaluation of graph embedding applications to machine learning problems on graphs, among which are node classification, link prediction, clustering, visualization, compression, and a family of whole-graph embedding algorithms suitable for graph classification, similarity and alignment problems. Finally, we overview the existing applications of graph embeddings to computer science domains, formulate open problems and provide experiment results, explaining how different network properties affect graph embedding quality on the four classic machine learning problems on graphs: node classification, link prediction, clustering and graph visualization. As a result, our survey covers a new rapidly growing field of network feature engineering, presents an in-depth analysis of models based on network types, and overviews a wide range of applications to machine learning problems on graphs.
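To make the "random walk plus skip-gram" family surveyed in entry 11 concrete, here is a minimal DeepWalk-style sketch. It assumes networkx and gensim (version 4 or later) are installed; the toy graph, walk counts, and hyperparameters are illustrative and not taken from the survey.

```python
# Minimal DeepWalk-style sketch: uniform random walks treated as "sentences",
# then skip-gram (Word2Vec with sg=1) learns one vector per node.
import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.karate_club_graph()  # small built-in toy graph

def random_walk(graph, start, length=10):
    """Uniform random walk; node ids are returned as strings (tokens)."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(node) for node in walk]

# A handful of walks per node plays the role of the text corpus.
walks = [random_walk(G, node) for node in G.nodes() for _ in range(5)]

model = Word2Vec(sentences=walks, vector_size=32, window=5,
                 min_count=0, sg=1, workers=1, epochs=10)

print(model.wv["0"][:5])            # embedding of node 0
print(model.wv.most_similar("0"))   # nodes with the most similar embeddings
```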
12

Bruyn, Bart De. "Pseudo-embeddings of the (point, k-spaces)-geometry of PG(n, 2) and projective embeddings of DW(2n − 1, 2)." Advances in Geometry 19, no. 1 (January 28, 2019): 41–56. http://dx.doi.org/10.1515/advgeom-2017-0065.

Abstract:
We classify all homogeneous pseudo-embeddings of the point-line geometry defined by the points and k-dimensional subspaces of PG(n, 2), and use this to study the local structure of homogeneous full projective embeddings of the dual polar space DW(2n − 1, 2). Our investigation allows us to distinguish n possible types for such homogeneous embeddings. For each of these n types, we construct a homogeneous full projective embedding of DW(2n − 1, 2).
13

Mao, Xingliang, Shuai Chang, Jinjing Shi, Fangfang Li, and Ronghua Shi. "Sentiment-Aware Word Embedding for Emotion Classification." Applied Sciences 9, no. 7 (March 29, 2019): 1334. http://dx.doi.org/10.3390/app9071334.

Abstract:
Word embeddings are effective intermediate representations for capturing semantic regularities between words in natural language processing (NLP) tasks. We propose sentiment-aware word embedding for emotional classification, which consists of integrating sentiment evidence within the emotional embedding component of a term vector. We take advantage of the multiple types of emotional knowledge, just as the existing emotional lexicon, to build emotional word vectors to represent emotional information. Then the emotional word vector is combined with the traditional word embedding to construct the hybrid representation, which contains semantic and emotional information as the inputs of the emotion classification experiments. Our method maintains the interpretability of word embeddings, and leverages external emotional information in addition to input text sequences. Extensive results on several machine learning models show that the proposed methods can improve the accuracy of emotion classification tasks.
14

Hernández, Luis Javier, and Timothy Porter. "An embedding theorem for proper n-types." Topology and its Applications 48, no. 3 (December 1992): 215–33. http://dx.doi.org/10.1016/0166-8641(92)90143-n.

15

Bryant, Shannon, and Diti Bhadra. "Situation types in complementation: Oromo attitude predication." Semantics and Linguistic Theory 30 (March 2, 2021): 83. http://dx.doi.org/10.3765/salt.v30i0.4806.

Abstract:
Though languages show rich variation in the clausal embedding strategies employed in attitude reports, most mainstream formal semantic theories of attitudes assume that the clausal complement of an attitude verb contributes at least a proposition to the semantics. The goal of this paper is to contribute to the growing cross-linguistic perspective of attitudes by providing semantic analyses for the two embedding strategies found with attitude verbs in Oromo (Cushitic): verbal nominalization, and embedding under akka 'as'. We argue that Oromo exemplifies a system in which non-speech attitudes uniformly embed situations rather than propositions, thereby expanding the empirical landscape of attitude reports in two ways: (i) situations and propositions are both ontological primitives used by languages in the construction of attitude reports, and (ii) attitude verbs in languages like Oromo do the semantic heavy lifting, contributing the "proposition" to propositional attitudes.
16

Wang, Yueyang, Ziheng Duan, Binbing Liao, Fei Wu, and Yueting Zhuang. "Heterogeneous Attributed Network Embedding with Graph Convolutional Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 10061–62. http://dx.doi.org/10.1609/aaai.v33i01.330110061.

Abstract:
Network embedding, which assigns nodes in networks to low-dimensional representations, has received increasing attention in recent years. However, most existing approaches, especially the spectral-based methods, only consider the attributes in homogeneous networks. They are weak for heterogeneous attributed networks that involve different node types as well as rich node attributes and are common in real-world scenarios. In this paper, we propose HANE, a novel network embedding method based on Graph Convolutional Networks, that leverages both the heterogeneity and the node attributes to generate high-quality embeddings. The experiments on the real-world dataset show the effectiveness of our method.
17

Millar, Terrence. "Model completions and omitting types." Journal of Symbolic Logic 60, no. 2 (June 1995): 654–72. http://dx.doi.org/10.2307/2275856.

Abstract:
Universal theories with model completions are characterized. A new omitting types theorem is proved. These two results are used to prove the existence of a universal ℵ0-categorical partial order with an interesting embedding property. Other aspects of these results also are considered.
18

RAMSEY, NORMAN. "Embedding an interpreted language using higher-order functions and types." Journal of Functional Programming 21, no. 6 (September 29, 2011): 585–615. http://dx.doi.org/10.1017/s0956796811000219.

Abstract:
Using an embedded, interpreted language to control a complicated application can have significant software-engineering benefits. But existing interpreters are designed for embedding into C code. To embed an interpreter into a different language requires an API suited to that language. This paper presents Lua-ML, a new API that is suited to languages that provide higher-order functions and types. The API exploits higher-order functions and types to reduce the amount of glue code needed to use an embedded interpreter. Where embedding in C requires a special-purpose “glue function” for every function to be embedded, embedding in Lua-ML requires only a description of each function's type. Lua-ML also makes it easy to define a Lua function whose behavior depends on the number and types of its arguments.
19

Bandyopadhyay, Sambaran, N. Lokesh, and M. N. Murty. "Outlier Aware Network Embedding for Attributed Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 12–19. http://dx.doi.org/10.1609/aaai.v33i01.330112.

Abstract:
Attributed network embedding has received much interest from the research community as most of the networks come with some content in each node, which is also known as node attributes. Existing attributed network approaches work well when the network is consistent in structure and attributes, and nodes behave as expected. But real-world networks often have anomalous nodes. Typically these outliers, being relatively unexplainable, affect the embeddings of other nodes in the network. Thus all the downstream network mining tasks fail miserably in the presence of such outliers. Hence an integrated approach to detect anomalies and reduce their overall effect on the network embedding is required. Towards this end, we propose an unsupervised outlier aware network embedding algorithm (ONE) for attributed networks, which minimizes the effect of the outlier nodes, and hence generates robust network embeddings. We align and jointly optimize the loss functions coming from structure and attributes of the network. To the best of our knowledge, this is the first generic network embedding approach which incorporates the effect of outliers for an attributed network without any supervision. We experimented on publicly available real networks and manually planted different types of outliers to check the performance of the proposed algorithm. Results demonstrate the superiority of our approach to detect the network outliers compared to the state-of-the-art approaches. We also consider different downstream machine learning applications on networks to show the efficiency of ONE as a generic network embedding technique. The source code is made available at https://github.com/sambaranban/ONE.
20

Wei, Xiaojun. "Technology Embedding of Classroom Teaching: Types, Issues and Responses." Open Journal of Applied Sciences 10, no. 06 (2020): 409–15. http://dx.doi.org/10.4236/ojapps.2020.106028.

21

Wan, Liangxia, and Yanpei Liu. "Orientable embedding genus distribution for certain types of graphs." Journal of Combinatorial Theory, Series B 98, no. 1 (January 2008): 19–32. http://dx.doi.org/10.1016/j.jctb.2007.04.002.

22

Gitman, Victoria. "Ramsey-like cardinals." Journal of Symbolic Logic 76, no. 2 (June 2011): 519–40. http://dx.doi.org/10.2178/jsl/1305810762.

Abstract:
One of the numerous characterizations of a Ramsey cardinal κ involves the existence of certain types of elementary embeddings for transitive sets of size κ satisfying a large fragment of ZFC. We introduce new large cardinal axioms generalizing the Ramsey elementary embeddings characterization and show that they form a natural hierarchy between weakly compact cardinals and measurable cardinals. These new axioms serve to further our knowledge about the elementary embedding properties of smaller large cardinals, in particular those still consistent with V = L.
23

SAMANTA, SAURAV. "NONCOMMUTATIVITY FROM EMBEDDING TECHNIQUES." Modern Physics Letters A 21, no. 08 (March 14, 2006): 675–89. http://dx.doi.org/10.1142/s0217732306019037.

Abstract:
We apply the embedding method of Batalin–Tyutin for revealing noncommutative structures in the generalized Landau problem. Different types of noncommutativity follow from different gauge choices. This establishes a duality among the distinct algebras. An alternative approach is discussed which yields equivalent results as the embedding method. We also discuss the consequences in the Landau problem for a non-constant magnetic field.
24

Yuan, Ye, Zhi Qiang Huang, and Ze Min Cai. "Classification of Multi-Types of EEG Time Series Based on Embedding Dimension Characteristic Parameter." Key Engineering Materials 474-476 (April 2011): 1987–92. http://dx.doi.org/10.4028/www.scientific.net/kem.474-476.1987.

Abstract:
In our previous research, we studied the detection of epileptic seizures from EEG signals based on the embedding dimension as the input characteristic parameter of artificial neural networks. The results of those experiments showed that an overall accuracy as high as 100% can be achieved for distinguishing normal and epileptic EEG time series. In this paper, the classification of multiple types of EEG time series based on the embedding dimension as the input characteristic parameter of an artificial neural network is studied, and the probabilistic neural network (PNN) is also employed as the classifier for comparing the results with those obtained before. Cao's method is applied for computing the embedding dimension of normal and epileptic EEG time series. The results show that different types of EEG time series can be classified using the embedding dimension as the characteristic parameter when the number of feature points exceeds some value; however, the accuracy is not yet satisfactory, and further work is needed to improve the classification accuracy.
25

Li, Zeyu, Jyun-Yu Jiang, Yizhou Sun, and Wei Wang. "Personalized Question Routing via Heterogeneous Network Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 192–99. http://dx.doi.org/10.1609/aaai.v33i01.3301192.

Abstract:
Question Routing (QR) on Community-based Question Answering (CQA) websites aims at recommending answerers that have high probabilities of providing the “accepted answers” to new questions. The existing question routing algorithms simply predict the ranking of users based on query content. As a consequence, the question raiser information is ignored. On the other hand, they lack learnable scoring functions to explicitly compute ranking scores. To tackle these challenges, we propose NeRank, which (1) jointly learns representations of question content, question raiser, and question answerers by a heterogeneous information network embedding algorithm and a long short-term memory (LSTM) model, with the embeddings of the three types of entities unified in the same latent space, and (2) conducts question routing for personalized queries, i.e., queries with two entities (question content, question raiser), by a convolutional scoring function taking the learned embeddings of all three types of entities as input. Using the scores, NeRank routes new questions to high-ranking answerers that are skillful in the question domain and have similar backgrounds to the question raiser. Experimental results show that NeRank significantly outperforms competitive baseline question routing models that ignore the raiser information on three ranking metrics. In addition, NeRank converges in several thousand iterations and is insensitive to parameter changes, which proves its effectiveness, scalability, and robustness.
26

Klaeren, Herbert. "Embedding functionally described abstract data types into MODULA-2 programs." Microprocessors and Microsystems 14, no. 3 (April 1990): 161–66. http://dx.doi.org/10.1016/0141-9331(90)90067-6.

27

Lim, Kien H., and Ashley D. Wilson. "Flipped Learning: Embedding Questions in Videos." Mathematics Teaching in the Middle School 23, no. 7 (May 2018): 378–85. http://dx.doi.org/10.5951/mathteacmiddscho.23.7.0378.

28

Parikh, Soham, Anahita Davoudi, Shun Yu, Carolina Giraldo, Emily Schriver, and Danielle Mowery. "Lexicon Development for COVID-19-related Concepts Using Open-source Word Embedding Sources: An Intrinsic and Extrinsic Evaluation." JMIR Medical Informatics 9, no. 2 (February 22, 2021): e21679. http://dx.doi.org/10.2196/21679.

Abstract:
Background: Scientists are developing new computational methods and prediction models to better clinically understand COVID-19 prevalence, treatment efficacy, and patient outcomes. These efforts could be improved by leveraging documented COVID-19–related symptoms, findings, and disorders from clinical text sources in an electronic health record. Word embeddings can identify terms related to these clinical concepts from both the biomedical and nonbiomedical domains, and are being shared with the open-source community at large. However, it is unclear how useful openly available word embeddings are for developing lexicons for COVID-19–related concepts. Objective: Given an initial lexicon of COVID-19–related terms, this study aims to characterize the returned terms by similarity across various open-source word embeddings and determine common semantic and syntactic patterns between the COVID-19 queried terms and returned terms specific to the word embedding source. Methods: We compared seven openly available word embedding sources. Using a series of COVID-19–related terms for associated symptoms, findings, and disorders, we conducted an interannotator agreement study to determine how accurately the most similar returned terms could be classified according to semantic types by three annotators. We conducted a qualitative study of COVID-19 queried terms and their returned terms to detect informative patterns for constructing lexicons. We demonstrated the utility of applying such learned synonyms to discharge summaries by reporting the proportion of patients identified by concept among three patient cohorts: pneumonia (n=6410), acute respiratory distress syndrome (n=8647), and COVID-19 (n=2397). Results: We observed high pairwise interannotator agreement (Cohen kappa) for symptoms (0.86-0.99), findings (0.93-0.99), and disorders (0.93-0.99). Word embedding sources generated based on characters tend to return more synonyms (mean count of 7.2 synonyms) compared to token-based embedding sources (mean counts range from 2.0 to 3.4). Word embedding sources queried using a qualifier term (e.g., dry cough or muscle pain) more often returned qualifiers of a similar semantic type (e.g., “dry” returns consistency qualifiers like “wet” and “runny”) compared to single-term (e.g., cough or pain) queries. A higher proportion of patients had documented fever (0.61-0.84), cough (0.41-0.55), shortness of breath (0.40-0.59), and hypoxia (0.51-0.56) retrieved than other clinical features. Terms for dry cough returned a higher proportion of patients with COVID-19 (0.07) than the pneumonia (0.05) and acute respiratory distress syndrome (0.03) populations. Conclusions: Word embeddings are valuable technology for learning related terms, including synonyms. When leveraging openly available word embedding sources, choices made for the construction of the word embeddings can significantly influence the words learned.
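The lexicon-expansion step described in entry 28 boils down to nearest-neighbour queries against a word embedding model. The sketch below illustrates that idea with a general-domain GloVe model shipped via gensim's downloader (an assumption made for the example; the study itself compares several open-source embedding sources, including clinical ones) and is not the authors' actual pipeline. It requires gensim 4 or later and an internet connection for the first download.

```python
# Minimal sketch of lexicon expansion via embedding nearest neighbours.
import gensim.downloader as api

# General-domain GloVe vectors; the study used several embedding sources.
vectors = api.load("glove-wiki-gigaword-50")   # returns KeyedVectors

seed_terms = ["cough", "fever", "hypoxia"]     # illustrative seed lexicon
for term in seed_terms:
    if term in vectors:
        # Candidate related terms; a curator would review these before
        # adding them to a COVID-19 concept lexicon.
        neighbours = vectors.most_similar(term, topn=5)
        print(term, "->", [word for word, _ in neighbours])
```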
29

Friedman, Sy-David, and Catherine Thompson. "Internal consistency for embedding complexity." Journal of Symbolic Logic 73, no. 3 (September 2008): 831–44. http://dx.doi.org/10.2178/jsl/1230396750.

Abstract:
In a previous paper with M. Džamonja, class forcings were given which fixed the complexity (a universality covering number) for certain types of structures of size λ together with the value of 2^λ for every regular λ. As part of a programme for examining when such global results can be true in an inner model, we build generics for these class forcings.
30

Zhuo, Wei, Qianyi Zhan, Yuan Liu, Zhenping Xie, and Jing Lu. "Context Attention Heterogeneous Network Embedding." Computational Intelligence and Neuroscience 2019 (August 21, 2019): 1–15. http://dx.doi.org/10.1155/2019/8106073.

Abstract:
Network embedding (NE), which maps nodes into a low-dimensional latent Euclidean space to represent effective features of each node in the network, has obtained considerable attention in recent years. Many popular NE methods, such as DeepWalk, Node2vec, and LINE, are capable of handling homogeneous networks. However, nodes are always fully accompanied by heterogeneous information (e.g., text descriptions, node properties, and hashtags) in the real-world network, which remains a great challenge to jointly project the topological structure and different types of information into the fixed-dimensional embedding space due to heterogeneity. Besides, in the unweighted network, how to quantify the strength of edges (tightness of connections between nodes) accurately is also a difficulty faced by existing methods. To bridge the gap, in this paper, we propose CAHNE (context attention heterogeneous network embedding), a novel network embedding method, to accurately determine the learning result. Specifically, we propose the concept of node importance to measure the strength of edges, which can better preserve the context relations of a node in unweighted networks. Moreover, text information is a widely ubiquitous feature in real-world networks, e.g., online social networks and citation networks. On account of the sophisticated interactions between the network structure and text features of nodes, CAHNE learns context embeddings for nodes by introducing the context node sequence, and the attention mechanism is also integrated into our model to better reflect the impact of context nodes on the current node. To corroborate the efficacy of CAHNE, we apply our method and various baseline methods on several real-world datasets. The experimental results show that CAHNE achieves higher quality compared to a number of state-of-the-art network embedding methods on the tasks of network reconstruction, link prediction, node classification, and visualization.
31

Dahnke, Christoph, Annika Foydl, Eilina Levin, Matthias Haase, and A. Erman Tekkaya. "Process Window for the Embedding of Eccentric Steel-Reinforcing Elements in the Discontinuous Composite Extrusion Process." Applied Mechanics and Materials 794 (October 2015): 182–89. http://dx.doi.org/10.4028/www.scientific.net/amm.794.182.

Abstract:
The process of discontinuous composite extrusion offers the possibility of centric and eccentric embedding of steel reinforcing elements into an aluminium profile. The process is influenced by various parameters, which can lead to certain types of process failures. Three characteristic types of process failures – cavities, local plastic deformation and rotation – have been identified. Based on these influencing factors and on the process window for the discontinuous centric embedding of cylindrical reinforcing elements in rods, a process window for the eccentric embedding of steel reinforcing elements was developed.
32

Gu, Tianlong, Haohong Liang, Chenzhong Bin, and Liang Chang. "Combining user-end and item-end knowledge graph learning for personalized recommendation." Journal of Intelligent & Fuzzy Systems 40, no. 5 (April 22, 2021): 9213–25. http://dx.doi.org/10.3233/jifs-201635.

Abstract:
How to accurately model user preferences based on historical user behaviour and auxiliary information is of great importance in personalized recommendation tasks. Among all types of auxiliary information, knowledge graphs (KGs) are an emerging type of auxiliary information with nodes and edges that contain rich structural information and semantic information. Many studies prove that incorporating KG into personalized recommendation tasks can effectively improve the performance, rationality and interpretability of recommendations. However, existing methods either explore the independent meta-paths for user-item pairs in KGs or use a graph convolution network on all KGs to obtain embeddings for users and items separately. Although both types of methods have respective effects, the former cannot fully capture the structural information of user-item pairs in KGs, while the latter ignores the mutual effect between the target user and item during the embedding learning process. To alleviate the shortcomings of these methods, we design a graph convolution-based recommendation model called Combining User-end and Item-end Knowledge Graph Learning (CUIKG), which aims to capture the relevance between users’ personalized preferences and items by jointly mining the associated attribute information in their respective KG. Specifically, we describe user embedding from a user KG and then introduce user embedding, which contains the user profile into the item KG, to describe item embedding with the method of Graph Convolution Network. Finally, we predict user preference probability for a given item via multilayer perception. CUIKG describes the connection between user-end KG and item-end KG, and mines the structural and semantic information present in KG. Experimental results with two real-world datasets demonstrate the superiority of the proposed method over existing methods.
33

Jang, Youngjin, and Harksoo Kim. "Reliable Classification of FAQs with Spelling Errors Using an Encoder-Decoder Neural Network in Korean." Applied Sciences 9, no. 22 (November 7, 2019): 4758. http://dx.doi.org/10.3390/app9224758.

Abstract:
To resolve lexical disagreement problems between queries and frequently asked questions (FAQs), we propose a reliable sentence classification model based on an encoder-decoder neural network. The proposed model uses three types of word embeddings: fixed word embeddings for representing domain-independent meanings of words, fine-tuned word embeddings for representing domain-specific meanings of words, and character-level word embeddings for bridging lexical gaps caused by spelling errors. It also uses class embeddings to represent domain knowledge associated with each category. In the experiments with an FAQ dataset about online banking, the proposed embedding methods contributed to an improved performance of the sentence classification. In addition, the proposed model showed better performance (with an accuracy of 0.810 in the classification of 411 categories) than that of the comparison model.
34

Guo, Lei, Haoran Jiang, Xiyu Liu, and Changming Xing. "Network Embedding-Aware Point-of-Interest Recommendation in Location-Based Social Networks." Complexity 2019 (November 4, 2019): 1–18. http://dx.doi.org/10.1155/2019/3574194.

Abstract:
As one of the important techniques to explore unknown places for users, the methods that are proposed for point-of-interest (POI) recommendation have been widely studied in recent years. Compared with traditional recommendation problems, POI recommendations are suffering from more challenges, such as the cold-start and one-class collaborative filtering problems. Many existing studies have focused on how to overcome these challenges by exploiting different types of contexts (e.g., social and geographical information). However, most of these methods only model these contexts as regularization terms, and the deep information hidden in the network structure has not been fully exploited. On the other hand, neural network-based embedding methods have shown its power in many recommendation tasks with its ability to extract high-level representations from raw data. According to the above observations, to well utilize the network information, a neural network-based embedding method (node2vec) is first exploited to learn the user and POI representations from a social network and a predefined location network, respectively. To deal with the implicit feedback, a pair-wise ranking-based method is then introduced. Finally, by regarding the pretrained network representations as the priors of the latent feature factors, an embedding-based POI recommendation method is proposed. As this method consists of an embedding model and a collaborative filtering model, when the training data are absent, the predictions will mainly be generated by the extracted embeddings. In other cases, this method will learn the user and POI factors from these two components. Experiments on two real-world datasets demonstrate the importance of the network embeddings and the effectiveness of our proposed method.
35

Huang, Xiao, Qingquan Song, Fan Yang, and Xia Hu. "Large-Scale Heterogeneous Feature Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3878–85. http://dx.doi.org/10.1609/aaai.v33i01.33013878.

Abstract:
Feature embedding aims to learn a low-dimensional vector representation for each instance to preserve the information in its features. These representations can benefit various off-the-shelf learning algorithms. While embedding models for a single type of features have been well-studied, real-world instances often contain multiple types of correlated features or even information within a different modality such as networks. Existing studies such as multiview learning show that it is promising to learn unified vector representations from all sources. However, high computational costs of incorporating heterogeneous information limit the applications of existing algorithms. The number of instances and dimensions of features in practice are often large. To bridge the gap, we propose a scalable framework FeatWalk, which can model and incorporate instance similarities in terms of different types of features into a unified embedding representation. To enable the scalability, FeatWalk does not directly calculate any similarity measure, but provides an alternative way to simulate the similarity-based random walks among instances to extract the local instance proximity and preserve it in a set of instance index sequences. These sequences are homogeneous with each other. A scalable word embedding algorithm is applied to them to learn a joint embedding representation of instances. Experiments on four real-world datasets demonstrate the efficiency and effectiveness of FeatWalk.
36

Doval, Yerai, Jesús Vilares, and Carlos Gómez-Rodríguez. "Towards Robust Word Embeddings for Noisy Texts." Applied Sciences 10, no. 19 (October 1, 2020): 6893. http://dx.doi.org/10.3390/app10196893.

Abstract:
Research on word embeddings has mainly focused on improving their performance on standard corpora, disregarding the difficulties posed by noisy texts in the form of tweets and other types of non-standard writing from social media. In this work, we propose a simple extension to the skipgram model in which we introduce the concept of bridge-words, which are artificial words added to the model to strengthen the similarity between standard words and their noisy variants. Our new embeddings outperform baseline models on noisy texts on a wide range of evaluation tasks, both intrinsic and extrinsic, while retaining a good performance on standard texts. To the best of our knowledge, this is the first explicit approach at dealing with these types of noisy texts at the word embedding level that goes beyond the support for out-of-vocabulary words.
37

Bergh, N. Van den. "Vacuum solutions of embedding class 2: Petrov types D and N." Classical and Quantum Gravity 13, no. 10 (October 1, 1996): 2839–50. http://dx.doi.org/10.1088/0264-9381/13/10/019.

38

Zhang, Xiaolin, Lei Chen, Zi-Han Guo, and Haiyan Liang. "Identification of Human Membrane Protein Types by Incorporating Network Embedding Methods." IEEE Access 7 (2019): 140794–805. http://dx.doi.org/10.1109/access.2019.2944177.

39

Chlipala, Adam. "Skipping the binder bureaucracy with mixed embeddings in a semantics course (functional pearl)." Proceedings of the ACM on Programming Languages 5, ICFP (August 22, 2021): 1–28. http://dx.doi.org/10.1145/3473599.

Abstract:
Rigorous reasoning about programs calls for some amount of bureaucracy in managing details like variable binding, but, in guiding students through big ideas in semantics, we might hope to minimize the overhead. We describe our experiment introducing a range of such ideas, using the Coq proof assistant, without any explicit representation of variables, instead using a higher-order syntax encoding that we dub "mixed embedding": it is neither the fully explicit syntax of deep embeddings nor the syntax-free programming of shallow embeddings. Marquee examples include different takes on concurrency reasoning, including in the traditions of model checking (partial-order reduction), program logics (concurrent separation logic), and type checking (session types) -- all presented without any side conditions on variables.
40

GALKA, ANDREAS, and GERD PFISTER. "DYNAMICAL CORRELATIONS ON RECONSTRUCTED INVARIANT DENSITIES AND THEIR EFFECT ON CORRELATION DIMENSION ESTIMATION." International Journal of Bifurcation and Chaos 13, no. 03 (March 2003): 723–32. http://dx.doi.org/10.1142/s0218127403006881.

Abstract:
We investigate the structure of dynamical correlations on reconstructed attractors which were obtained by time-delay embedding of periodic, quasi-periodic and chaotic time series. Within the specific sampling of the invariant density by a finite number of vectors which results from embedding, we identify two separate levels of sampling, corresponding to two different types of dynamical correlations, each of which produces characteristic artifacts in correlation dimension estimation: the well-known trajectory bias and a characteristic oscillation due to periodic sampling. For the second artifact we propose random sampling as a new correction method which is shown to provide improved sampling and to reduce dynamical correlations more efficiently than it has been possible by the standard Theiler correction. For accurate numerical analysis of correlation dimension in a bootstrap framework both corrections should be combined. For tori and the Lorenz attractor we also show how to construct time-delay embeddings which are completely free of any dynamical correlations.
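Entry 40 relies on time-delay embedding of scalar time series. The following is a minimal numpy sketch of that construction; the signal, embedding dimension m, and delay tau are illustrative assumptions, and in practice they would be chosen with criteria such as mutual information and false nearest neighbours rather than fixed by hand.

```python
# Minimal sketch of time-delay (Takens) embedding: a scalar series s is
# turned into m-dimensional delay vectors [s[i], s[i+tau], ..., s[i+(m-1)*tau]].
import numpy as np

def delay_embed(s, m=3, tau=2):
    """Return an array of shape (N - (m-1)*tau, m) of delay vectors."""
    s = np.asarray(s)
    n = len(s) - (m - 1) * tau
    return np.column_stack([s[i * tau : i * tau + n] for i in range(m)])

# Noisy sine wave as a stand-in for an experimental time series.
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

X = delay_embed(signal, m=3, tau=25)
print(X.shape)  # reconstructed state vectors sampling the attractor
```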
41

Shang, Chao, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou. "End-to-End Structure-Aware Convolutional Networks for Knowledge Base Completion." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3060–67. http://dx.doi.org/10.1609/aaai.v33i01.33013060.

Abstract:
Knowledge graph embedding has been an active research topic for knowledge base completion, with progressive improvement from the initial TransE, TransH, DistMult et al. to the current state-of-the-art ConvE. ConvE uses 2D convolution over embeddings and multiple layers of nonlinear features to model knowledge graphs. The model can be efficiently trained and scalable to large knowledge graphs. However, there is no structure enforcement in the embedding space of ConvE. The recent graph convolutional network (GCN) provides another way of learning graph node embedding by successfully utilizing graph connectivity structure. In this work, we propose a novel end-to-end Structure-Aware Convolutional Network (SACN) that takes the benefit of GCN and ConvE together. SACN consists of an encoder of a weighted graph convolutional network (WGCN), and a decoder of a convolutional network called Conv-TransE. WGCN utilizes knowledge graph node structure, node attributes and edge relation types. It has learnable weights that adapt the amount of information from neighbors used in local aggregation, leading to more accurate embeddings of graph nodes. Node attributes in the graph are represented as additional nodes in the WGCN. The decoder Conv-TransE enables the state-of-the-art ConvE to be translational between entities and relations while keeping the same link prediction performance as ConvE. We demonstrate the effectiveness of the proposed SACN on standard FB15k-237 and WN18RR datasets, and it gives about 10% relative improvement over the state-of-the-art ConvE in terms of HITS@1, HITS@3 and HITS@10.
42

Kubricht, James R., Alberto Santamaria-Pang, Chinmaya Devaraj, Aritra Chowdhury, and Peter Tu. "Emergent Languages from Pretrained Embeddings Characterize Latent Concepts in Dynamic Imagery." International Journal of Semantic Computing 14, no. 03 (September 2020): 357–73. http://dx.doi.org/10.1142/s1793351x20400140.

Abstract:
Recent unsupervised learning approaches have explored the feasibility of semantic analysis and interpretation of imagery using Emergent Language (EL) models. As EL requires some form of numerical embedding as input, it remains unclear which type is required in order for the EL to properly capture key semantic concepts associated with a given domain. In this paper, we compare unsupervised and supervised approaches for generating embeddings across two experiments. In Experiment 1, data are produced using a single-agent simulator. In each episode, a goal-driven agent attempts to accomplish a number of tasks in a synthetic cityscape environment which includes houses, banks, theaters and restaurants. In Experiment 2, a comparatively smaller dataset is produced where one or more objects demonstrate various types of physical motion in a 3D simulator environment. We investigate whether EL models generated from embeddings of raw pixel data produce expressions that capture key latent concepts (i.e. an agent’s motivations or physical motion types) in each environment. Our initial experiments show that the supervised learning approaches yield embeddings and EL descriptions that capture meaningful concepts from raw pixel inputs. Alternatively, embeddings from an unsupervised learning approach result in greater ambiguity with respect to latent concepts.
43

Zhang, H., J. J. Zhou, and R. Li. "Enhanced Unsupervised Graph Embedding via Hierarchical Graph Convolution Network." Mathematical Problems in Engineering 2020 (July 26, 2020): 1–9. http://dx.doi.org/10.1155/2020/5702519.

Abstract:
Graph embedding aims to learn low-dimensional representations of nodes in a network, and has received more and more attention in many graph-based tasks recently. The Graph Convolution Network (GCN) is a typical deep semi-supervised graph embedding model, which can acquire node representations from a complex network. However, GCN usually needs a lot of labeled data and additional expressive features in the graph embedding learning process, so the model cannot be effectively applied to undirected graphs with only network structure information. In this paper, we propose a novel unsupervised graph embedding method via a hierarchical graph convolution network (HGCN). Firstly, HGCN builds the initial node embeddings and pseudo-labels for the undirected graphs; it then further uses GCNs to learn the node embeddings and update the labels, and finally combines the HGCN output representation with the initial embeddings to get the graph embedding. Furthermore, we improve the model to match different undirected networks according to the number of network node label types. Comprehensive experiments demonstrate that our proposed HGCN and HGCN∗ can significantly enhance the performance of the node classification task.
44

Alfattni, Ghada, Maksim Belousov, Niels Peek, and Goran Nenadic. "Extracting Drug Names and Associated Attributes From Discharge Summaries: Text Mining Study." JMIR Medical Informatics 9, no. 5 (May 5, 2021): e24678. http://dx.doi.org/10.2196/24678.

Abstract:
Background: Drug prescriptions are often recorded in free-text clinical narratives; making this information available in a structured form is important to support many health-related tasks. Although several natural language processing (NLP) methods have been proposed to extract such information, many challenges remain. Objective: This study evaluates the feasibility of using NLP and deep learning approaches for extracting and linking drug names and associated attributes identified in clinical free-text notes and presents an extensive error analysis of different methods. This study was initiated with participation in the 2018 National NLP Clinical Challenges (n2c2) shared task on adverse drug events and medication extraction. Methods: The proposed system (DrugEx) consists of a named entity recognizer (NER) to identify drugs and associated attributes and a relation extraction (RE) method to identify the relations between them. For NER, we explored deep learning-based approaches (i.e., bidirectional long short-term memory with conditional random fields [BiLSTM-CRFs]) with various embeddings (i.e., word embedding, character embedding [CE], and semantic-feature embedding) to investigate how different embeddings influence the performance. A rule-based method was implemented for RE and compared with a context-aware long short-term memory (LSTM) model. The methods were trained and evaluated using the 2018 n2c2 shared task data. Results: The experiments showed that the best model (BiLSTM-CRFs with pretrained word embeddings [PWE] and CE) achieved lenient micro F-scores of 0.921 for NER, 0.927 for RE, and 0.855 for the end-to-end system. NER, which relies on the pretrained word and semantic embeddings, performed better on most individual entity types, but NER with PWE and CE had the highest classification efficiency among the proposed approaches. Extracting relations using the rule-based method achieved higher accuracy than the context-aware LSTM for most relations. Interestingly, the LSTM model performed notably better in the reason-drug relations, the most challenging relation type. Conclusions: The proposed end-to-end system achieved encouraging results and demonstrated the feasibility of using deep learning methods to extract medication information from free-text data.
45

Karlsson, Fred. "Multiple final embedding of clauses." International Journal of Corpus Linguistics 15, no. 1 (March 22, 2010): 88–105. http://dx.doi.org/10.1075/ijcl.15.1.04kar.

Abstract:
There are no grammatical limits on multiple final embedding of clauses. But converging corpus data from English, Finnish, German and Swedish show that multiple final embedding is avoided at levels deeper than three levels from the main clause in syntactically simple varieties, and at levels deeper than five levels in complex varieties. The frequency of every successive level of final embedding decreases by a factor of seven down to levels 4–5. Only relative clauses allow free self-embedding, within the limits just mentioned. These restrictions are regularities of language use, stylistic preferences related to the properties of various types of discourse. Ultimately they are explained by cognitive and other properties of the language processing mechanisms. The frequency profile of final embedding depths in modern languages such as English and Finnish is not accidental. Ancient Greek had reached this profile by 300 BC, suggesting cross-linguistic generality of the preferences.
46

Byun, Sung-Woo, and Seok-Pil Lee. "Design of a Multi-Condition Emotional Speech Synthesizer." Applied Sciences 11, no. 3 (January 26, 2021): 1144. http://dx.doi.org/10.3390/app11031144.

Abstract:
Recently, researchers have developed text-to-speech models based on deep learning, which have produced results superior to those of previous approaches. However, because those systems only mimic the generic speaking style of reference audio, it is difficult to assign user-defined emotional types to synthesized speech. This paper proposes an emotional speech synthesizer constructed by embedding not only speaking styles but also emotional styles. We extend speaker embedding to multi-condition embedding by adding emotional embedding in Tacotron, so that the synthesizer can generate emotional speech. An evaluation of the results showed the superiority of the proposed model to a previous model, in terms of emotional expressiveness.
47

Dong, Bin, Songlei Jian, and Ke Zuo. "CDE++: Learning Categorical Data Embedding by Enhancing Heterogeneous Feature Value Coupling Relationships." Entropy 22, no. 4 (March 29, 2020): 391. http://dx.doi.org/10.3390/e22040391.

Abstract:
Categorical data are ubiquitous in machine learning tasks, and the representation of categorical data plays an important role in the learning performance. The heterogeneous coupling relationships between features and feature values reflect the characteristics of the real-world categorical data which need to be captured in the representations. The paper proposes an enhanced categorical data embedding method, i.e., CDE++, which captures the heterogeneous feature value coupling relationships into the representations. Based on information theory and the hierarchical couplings defined in our previous work CDE (Categorical Data Embedding by learning hierarchical value coupling), CDE++ adopts mutual information and margin entropy to capture feature couplings and designs a hybrid clustering strategy to capture multiple types of feature value clusters. Moreover, Autoencoder is used to learn non-linear couplings between features and value clusters. The categorical data embeddings generated by CDE++ are low-dimensional numerical vectors which are directly applied to clustering and classification and achieve the best performance comparing with other categorical representation learning methods. Parameter sensitivity and scalability tests are also conducted to demonstrate the superiority of CDE++.
48

Rasiah, Rajah. "Export Orientation and Technological Intensities in Auto Parts Firms in East and Southeast Asia: Does Ownership Matter?" Asian Economic Papers 6, no. 2 (May 2007): 55–76. http://dx.doi.org/10.1162/asep.2007.6.2.55.

Abstract:
This paper examines the statistical relationships involving export intensity, embedding environment, and three types of technological intensity (human resource intensity, process technology intensity, and research and development intensity). The embedding environment measures the degree of infrastructure support for innovation. The sample consists of auto parts firms in China, Indonesia, Korea, Malaysia, The Philippines, Taiwan, and Thailand. For the local sample, export intensity and the embedding environment are positively significant for the three technological intensities. For the foreign sample, export intensity and embedding environment are positively significant only for the research and development intensity. The strong positive relationship between foreign ownership and export intensity shows that foreign firms enjoy greater access in export markets.
49

Klausner, L. D., and T. Weinert. "THE POLARISED PARTITION RELATION FOR ORDER TYPES." Quarterly Journal of Mathematics 71, no. 3 (June 4, 2020): 823–42. http://dx.doi.org/10.1093/qmathj/haaa003.

Abstract:
We analyse partitions of products with two ordered factors in two classes where both factors are countable or well-ordered and at least one of them is countable. This relates the partition properties of these products to cardinal characteristics of the continuum. We build on work by Erdős, Garti, Jones, Orr, Rado, Shelah and Szemerédi. In particular, we show that a theorem of Jones extends from the natural numbers to the rational ones, but consistently extends only to three further equimorphism classes of countable orderings. This is made possible by applying a 13-year-old theorem of Orr about embedding a given order into a sum of finite orders indexed over the given order.
50

Wu, Wei, Guangmin Hu, and Fucai Yu. "An Unsupervised Learning Method for Attributed Network Based on Non-Euclidean Geometry." Symmetry 13, no. 5 (May 19, 2021): 905. http://dx.doi.org/10.3390/sym13050905.

Abstract:
Many real-world networks can be modeled as attributed networks, where nodes are affiliated with attributes. When we implement attributed network embedding, we need to face two types of heterogeneous information, namely, structural information and attribute information. The structural information of undirected networks is usually expressed as a symmetric adjacency matrix. Network embedding learning is to utilize the above information to learn the vector representations of nodes in the network. How to integrate these two types of heterogeneous information to improve the performance of network embedding is a challenge. Most of the current approaches embed the networks in Euclidean spaces, but the networks themselves are non-Euclidean. As a consequence, the geometric differences between the embedded space and the underlying space of the network will affect the performance of the network embedding. According to the non-Euclidean geometry of networks, this paper proposes an attributed network embedding framework based on hyperbolic geometry and the Ricci curvature, namely, RHAE. Our method consists of two modules: (1) the first module is an autoencoder module in which each layer is provided with a network information aggregation layer based on the Ricci curvature and an embedding layer based on hyperbolic geometry; (2) the second module is a skip-gram module in which the random walk is based on the Ricci curvature. These two modules are based on non-Euclidean geometry, but they fuse the topology information and attribute information in the network from different angles. Experimental results on some benchmark datasets show that our approach outperforms the baselines.