To see the other types of publications on this topic, follow the link: Structure representations.

Journal articles on the topic 'Structure representations'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Structure representations.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Singer, W. "Consciousness and the structure of neuronal representations." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 353, no. 1377 (November 29, 1998): 1829–40. http://dx.doi.org/10.1098/rstb.1998.0335.

Abstract:
The hypothesis is defended that brains expressing phenomenal awareness are capable of generating metarepresentations of their cognitive processes, these metarepresentations resulting from an iteration of self-similar cortical operations. Search for the neuronal substrate of awareness therefore converges with the search for the nature of neuronal representations. It is proposed that evolved brains use two complementary representational strategies. One consists of the generation of neurons responding selectively to a particular constellation of features and is based on selective recombination of inputs in hierarchically structured feedforward architectures. The other relies on the dynamic association of feature-specific cells into functionally coherent cell assemblies that, as a whole, represent the constellation of features defining a particular perceptual object. Arguments are presented that favour the notion that the metarepresentations supporting awareness are established in accordance with the second strategy. Experimental data are reviewed that are compatible with the hypothesis that evolved brains use assembly codes for the representation of contents and that these assemblies become organized through transient synchronization of the discharges of associated neurons. It is argued that central states favouring the formation of assembly-based representations are similar to those favouring awareness.
2

Viswanath, Shruthi, and Andrej Sali. "Optimizing model representation for integrative structure determination of macromolecular assemblies." Proceedings of the National Academy of Sciences 116, no. 2 (December 26, 2018): 540–45. http://dx.doi.org/10.1073/pnas.1814649116.

Abstract:
Integrative structure determination of macromolecular assemblies requires specifying the representation of the modeled structure, a scoring function for ranking alternative models based on diverse types of data, and a sampling method for generating these models. Structures are often represented at atomic resolution, although ad hoc simplified representations based on generic guidelines and/or trial and error are also used. In contrast, we introduce here the concept of optimizing representation. To illustrate this concept, the optimal representation is selected from a set of candidate representations based on an objective criterion that depends on varying amounts of information available for different parts of the structure. Specifically, an optimal representation is defined as the highest-resolution representation for which sampling is exhaustive at a precision commensurate with the precision of the representation. Thus, the method does not require an input structure and is applicable to any input information. We consider a space of representations in which a representation is a set of nonoverlapping, variable-length segments (i.e., coarse-grained beads) for each component protein sequence. We also implement a method for efficiently finding an optimal representation in our open-source Integrative Modeling Platform (IMP) software (https://integrativemodeling.org/). The approach is illustrated by application to three complexes of two subunits and a large assembly of 10 subunits. The optimized representation facilitates exhaustive sampling and thus can produce a more accurate model and a more accurate estimate of its uncertainty for larger structures than were possible previously.
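To make the bead-style representation concrete, here is a toy Python sketch (not the IMP implementation; the bead sizes, the `well_characterized` flags, and the function name are all illustrative assumptions): well-characterized regions get one-residue beads, poorly characterized regions get coarse ten-residue beads, and no bead crosses a coverage boundary.

```python
# Toy sketch of a multi-resolution bead representation (illustrative only,
# not the algorithm or API of IMP): segment a protein sequence into
# non-overlapping, variable-length beads whose size depends on how well
# each region is characterized by the input information.

def coarse_grain(sequence, well_characterized, fine=1, coarse=10):
    """Return beads as (start, end) residue ranges, 1-based and inclusive."""
    beads = []
    i, n = 0, len(sequence)
    while i < n:
        size = fine if well_characterized[i] else coarse
        j = i
        # grow the bead, but never across a change in characterization
        while j < i + size and j < n and well_characterized[j] == well_characterized[i]:
            j += 1
        beads.append((i + 1, j))
        i = j
    return beads

seq = "A" * 25
coverage = [True] * 5 + [False] * 20   # residues 6-25 poorly characterized
print(coarse_grain(seq, coverage))
# → [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 15), (16, 25)]
```

The paper's actual criterion selects among candidate representations by whether sampling is exhaustive at the representation's precision; this sketch only illustrates the variable-length segmentation itself.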
3

Garnham, Alan. "Opinion Piece: How People Structure Representations of Discourse." Dialogue & Discourse 12, no. 1 (February 25, 2021): 1–20. http://dx.doi.org/10.5210/dad.2021.101.

Abstract:
Mental models or situation models include representations of people, but much of the literature about such models focuses on the representation of eventualities (events, states, and processes) or (small-scale) situations. In the well-known event-indexing model of Zwaan, Langston, and Graesser (1995), for example, protagonists are just one of five dimensions on which situation models are indexed. They are not given any additional special status. Consideration of longer narratives, and the ways in which readers or listeners relate to them, suggests that people have a more central status in the way we think about texts, and hence in discourse representations. Indeed, such considerations suggest that discourse representations are organised around (the representations of) central characters. The paper develops the idea of the centrality of main characters in representations of longer texts by considering, among other things, the way information is presented in novels, with L’Éducation sentimentale by Gustave Flaubert as a case study. Conclusions are also drawn about the role of representations of people in the representation of other types of text.
4

Ibarra, Andoni, and Thomas Mormann. "Una teoría combinatoria de las representaciones científicas." Crítica (México D. F. En línea) 32, no. 95 (January 7, 2000): 3–46. http://dx.doi.org/10.22201/iifs.18704905e.2000.874.

Abstract:
The aim of this paper is to introduce a new concept of scientific representation into the philosophy of science. The new concept, to be called homological or functorial representation, is a genuine generalization of the received notion of representation as a structure-preserving map as it is used, for example, in the representational theory of measurement. It may be traced back, at least implicitly, to the works of Hertz and Duhem. A modern elaboration may be found in the foundational discipline of mathematical category theory. In contrast to the familiar concepts of representation, functorial representations do not depend on any notion of similarity, whether structural or objectual. Rather, functorial representations establish correlations between the structures of the representing and the represented domains. Thus, they may be said to form a class of quite "non-isomorphic" representations. Nevertheless, and this is the central claim of this paper, they are the most common type of representation used in science. In our paper we give some examples from mathematics and empirical science. One of the most interesting features of the new concept is that it leads in a natural way to a combinatorial theory of scientific representations, i.e., homological or functorial representations do not live in isolation; rather, they may be combined and connected in various ways, thereby forming a net of interrelated representations. One of the most important tasks of a theory of scientific representations is to describe this realm of combinatorial possibilities in detail. Some first tentative steps towards this endeavour are taken in our paper.
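For readers unfamiliar with the category-theoretic language the abstract draws on, a functorial representation can be sketched as a functor $F$ between the representing and the represented domains viewed as categories; what it preserves is not similarity of objects but the composition structure of the morphisms (a generic textbook formulation, not the authors' notation):

```latex
F : \mathcal{C} \to \mathcal{D}, \qquad
F(g \circ f) = F(g) \circ F(f), \qquad
F(\mathrm{id}_X) = \mathrm{id}_{F(X)}
```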
5

Ibort, Alberto, and Miguel Rodríguez. "On the Structure of Finite Groupoids and Their Representations." Symmetry 11, no. 3 (March 20, 2019): 414. http://dx.doi.org/10.3390/sym11030414.

Abstract:
In this paper, both the structure and the theory of representations of finite groupoids are discussed. A finite connected groupoid turns out to be an extension of the groupoid of pairs of its set of units by its canonical totally disconnected isotropy subgroupoid. An extension of Maschke’s theorem for groups is proved, showing that the algebra of a finite groupoid is semisimple and all finite-dimensional linear representations of finite groupoids are completely reducible. The theory of characters for finite-dimensional representations of finite groupoids is developed, and it is shown that irreducible representations of the groupoid are in one-to-one correspondence with irreducible representations of its isotropy groups, with an extension of Burnside’s theorem describing the decomposition of the regular representation of a finite groupoid. Some simple examples illustrating these results are exhibited, with emphasis on the groupoid interpretation of Schwinger’s description of quantum mechanical systems.
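In symbols, the Maschke-type statement in the abstract amounts to complete reducibility: every finite-dimensional linear representation $V$ of a finite groupoid decomposes as a direct sum of irreducibles (generic notation, not the authors'):

```latex
V \;\cong\; \bigoplus_i m_i V_i,
\qquad V_i \ \text{irreducible},\ \ m_i \in \mathbb{Z}_{\ge 0}
```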
6

Schwartz, Geoffrey. "Refining representations for L2 phonology." Second Language Research 36, no. 4 (June 9, 2019): 691–707. http://dx.doi.org/10.1177/0267658319852383.

Abstract:
This article discusses the implications of phonological representation for the study of L2 speech acquisition. It is argued, on the basis of empirical findings from diverse phenomena in L2 phonology, that refined representations in which ‘segments’ have internal prosodic structure offer a more insightful view of cross-linguistic phonetic interaction than traditional phonological models. These refinements may be implemented in the Onset Prominence representational environment, in which diverse structural parses affect sub-segmental phonetic properties, transitions between segments, and the formation of prosodic boundaries.
7

Carlson, Thomas A., J. Brendan Ritchie, Nikolaus Kriegeskorte, Samir Durvasula, and Junsheng Ma. "Reaction Time for Object Categorization Is Predicted by Representational Distance." Journal of Cognitive Neuroscience 26, no. 1 (January 2014): 132–42. http://dx.doi.org/10.1162/jocn_a_00476.

Abstract:
How does the brain translate an internal representation of an object into a decision about the object's category? Recent studies have uncovered the structure of object representations in inferior temporal cortex (IT) using multivariate pattern analysis methods. These studies have shown that representations of individual object exemplars in IT occupy distinct locations in a high-dimensional activation space, with object exemplar representations clustering into distinguishable regions based on category (e.g., animate vs. inanimate objects). In this study, we hypothesized that a representational boundary between category representations in this activation space also constitutes a decision boundary for categorization. We show that behavioral RTs for categorizing objects are well described by our activation space hypothesis. Interpreted in terms of classical and contemporary models of decision-making, our results suggest that the process of settling on an internal representation of a stimulus is itself partially constitutive of decision-making for object categorization.
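The paper's core idea can be illustrated with a toy calculation (fabricated data and a hypothetical linear boundary, not the authors' analysis): an exemplar's unsigned distance from a category hyperplane should anti-correlate with its categorization RT.

```python
# Toy illustration (not the authors' code or data): distance of an
# activation pattern from a linear category boundary w·x + b = 0.

def distance_to_boundary(x, w, b):
    norm = sum(wi * wi for wi in w) ** 0.5
    return (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

w, b = [1.0, -1.0], 0.0                           # hypothetical category axis
patterns = [[3.0, 0.0], [2.0, 1.0], [1.5, 1.0]]   # fabricated IT activations
rts = [420, 510, 560]                             # fabricated RTs (ms)

dists = [abs(distance_to_boundary(x, w, b)) for x in patterns]
# in this toy data, larger boundary distance goes with faster responses
for d, rt in zip(dists, rts):
    print(f"distance {d:.2f} -> RT {rt} ms")
```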
8

Csaszar, Felipe A., and James Ostler. "A Contingency Theory of Representational Complexity in Organizations." Organization Science 31, no. 5 (September 2020): 1198–219. http://dx.doi.org/10.1287/orsc.2019.1346.

Abstract:
A long-standing question in the organizations literature is whether firms are better off by using simple or complex representations of their task environment. We address this question by developing a formal model of how firm performance depends on the process by which firms learn and use representations. Building on ideas from cognitive science, our model conceptualizes this process in terms of how firms construct a representation of the environment and then use that representation when making decisions. Our model identifies the optimal level of representational complexity as a function of (a) the environment’s complexity and uncertainty and (b) the firm’s experience and knowledge about the environment’s deep structure. We use this model to delineate the conditions under which firms should use simple versus complex representations; in doing so, we provide a coherent framework that integrates previous conflicting results on which type of representation leaves firms better off. Among other results, we show that the optimal representational complexity generally depends more on the firm’s knowledge about the environment than it does on the environment’s actual complexity. We also show that the relative advantage of heuristics vis-à-vis more complex representations critically depends on an unstated assumption of “informedness”: that managers can know what are the most relevant variables to pay attention to. We show that when this assumption does not hold, complex representations are usually better than simpler ones.
9

Spinks, Graham, and Marie-Francine Moens. "Structured (De)composable Representations Trained with Neural Networks." Computers 9, no. 4 (October 2, 2020): 79. http://dx.doi.org/10.3390/computers9040079.

Abstract:
This paper proposes a novel technique for representing templates and instances of concept classes. A template representation refers to the generic representation that captures the characteristics of an entire class. The proposed technique uses end-to-end deep learning to learn structured and composable representations from input images and discrete labels. The obtained representations are based on distance estimates between the distributions given by the class label and those given by contextual information, which are modeled as environments. We prove that the representations have a clear structure, allowing the representation to be decomposed into factors that represent classes and environments. We evaluate our novel technique on classification and retrieval tasks involving different modalities (visual and language data). In various experiments, we show how the representations can be compressed and how different hyperparameters impact performance.
10

Jha, Kishlay, Guangxu Xun, and Aidong Zhang. "Continual representation learning for evolving biomedical bipartite networks." Bioinformatics 37, no. 15 (February 3, 2021): 2190–97. http://dx.doi.org/10.1093/bioinformatics/btab067.

Abstract:
Motivation: Many real-world biomedical interactions such as ‘gene-disease’, ‘disease-symptom’ and ‘drug-target’ are modeled as a bipartite network structure. Learning meaningful representations for such networks is a fundamental problem in the research area of Network Representation Learning (NRL). NRL approaches aim to translate the network structure into low-dimensional vector representations that are useful to a variety of biomedical applications. Despite significant advances, the existing approaches still have certain limitations. First, a majority of these approaches do not model the unique topological properties of bipartite networks. Consequently, their straightforward application to the bipartite graphs yields unsatisfactory results. Second, the existing approaches typically learn representations from static networks. This is limiting for the biomedical bipartite networks that evolve at a rapid pace, and thus necessitates the development of approaches that can update the representations in an online fashion. Results: In this research, we propose a novel representation learning approach that accurately preserves the intricate bipartite structure, and efficiently updates the node representations. Specifically, we design a customized autoencoder that captures the proximity relationship between nodes participating in the bipartite bicliques (2 × 2 sub-graph), while preserving both the global and local structures. Moreover, the proposed structure-preserving technique is carefully interleaved with the central tenets of continual machine learning to design an incremental learning strategy that updates the node representations in an online manner. Taken together, the proposed approach produces meaningful representations with high fidelity and computational efficiency. Extensive experiments conducted on several biomedical bipartite networks validate the effectiveness and rationality of the proposed approach.
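The biclique intuition behind the autoencoder can be shown with a minimal sketch (illustrative only; the paper's model is a trained neural network, and the drug/target names below are made up): two left-side nodes that share many right-side neighbours participate in many 2 × 2 bicliques, so they should end up with similar representations.

```python
# Minimal second-order-proximity sketch for a bipartite network
# (illustrative stand-in for the paper's structure-preserving autoencoder).

def shared_neighbors(edges):
    """edges: (u, v) pairs with u on the left side, v on the right side."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
    lefts = sorted(adj)
    # count common right-side neighbours for each pair of left-side nodes
    return {(a, b): len(adj[a] & adj[b])
            for i, a in enumerate(lefts) for b in lefts[i + 1:]}

drug_target = [("d1", "t1"), ("d1", "t2"), ("d2", "t1"), ("d2", "t2"), ("d3", "t3")]
print(shared_neighbors(drug_target))
# → {('d1', 'd2'): 2, ('d1', 'd3'): 0, ('d2', 'd3'): 0}
```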
11

Guo, Junying, Xiaojiang Guo, Yangqing Liu, and Kar Ping Shum. "Germs and semigroup representation theory." Asian-European Journal of Mathematics 13, no. 06 (April 17, 2019): 2050109. http://dx.doi.org/10.1142/s1793557120501090.

Abstract:
The notion of a uniform representation of semigroups is introduced. It is proved that any uniform representation of an ample semigroup can be expressed as the direct sum of some representations obtained via homogeneous representations on primitive adequate semigroups. Also, we give the structure of homogeneous representations of primitive adequate semigroups. In addition, we consider indecomposable uniform representations of ample semigroups and their constructions.
12

Natvig, David, and Joseph Salmons. "Connecting Structure and Variation in Sound Change." Cadernos de Linguística 2, no. 1 (May 15, 2021): 01–20. http://dx.doi.org/10.25189/2675-4916.2021.v2.n1.id314.

Abstract:
“Structured heterogeneity”, a founding concept of variationist sociolinguistics, puts focus on the ordered social differentiation in language. We extend the notion of structured heterogeneity to formal phonological structure, i.e., representations based on contrasts, with implications for phonetic implementation. Phonology establishes parameters for what varies and how. Patterns of stability and variability with respect to a given feature’s relationship to representations allow us to ground variationist analysis in a framework that makes predictions about potential sound changes: more structure correlates to more stability; less structure corresponds to more variability. However, even though all change requires variability, not all variability leads to change. Two case studies illustrate this asymmetry, keeping a focus on phonetic change with phonological stability. First, Germanic rhotics (r-sounds) from prehistory to the present day are minimally specified. They show tremendous phonetic variability and change but phonological stability. Second, laryngeal contrasts (voicing or aspiration) vary and change in language contact. We track the accumulation of phonetic change in unspecified members of pairs of the type spelled <s> ≠ <z>, etc. This analysis makes predictions about the regularity of sound change, situating regularity in phonology and irregularity in phonetics and the lexicon. Structured heterogeneity involves the variation inherent within the system for various levels of phonetic and phonological representation. Phonological change, then, is about acquiring or learning different abstract representations based on heterogeneous and variable input.
13

Lee, Sang Hun, and Kunwoo Lee. "Partial Entity Structure: A Compact Boundary Representation for Non-Manifold Geometric Modeling." Journal of Computing and Information Science in Engineering 1, no. 4 (November 1, 2001): 356–65. http://dx.doi.org/10.1115/1.1433486.

Abstract:
Non-manifold boundary representations have become very popular in recent years and various representation schemes have been proposed, as they represent a wider range of objects, for various applications, than conventional manifold representations. As these schemes mainly focus on describing sufficient adjacency relationships of topological entities, the models represented in these schemes occupy storage space redundantly, although they are very efficient in answering queries on topological adjacency relationships. To solve this problem, in this paper, we propose a compact as well as fast non-manifold boundary representation, called the partial entity structure. This representation reduces the storage size to half that of the radial edge structure, which is one of the most popular and efficient of existing data structures, while allowing full topological adjacency relationships to be derived without loss of efficiency. In order to verify the time and storage efficiency of the partial entity structure, the time complexity of basic query procedures and the storage requirement for typical geometric models are derived and compared with those of existing schemes.
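The general idea of per-edge use records can be illustrated with a toy Python sketch (the actual partial entity structure stores considerably more adjacency information than this, and all names here are invented): a non-manifold edge shared by any number of faces is handled uniformly by listing its face uses.

```python
# Toy boundary-representation sketch (not the authors' data structure):
# each edge keeps a list of "use" records, one per incident face, so
# edges with one, two, or many faces are all queried the same way.

class Edge:
    def __init__(self, name):
        self.name = name
        self.uses = []               # faces incident on this edge

class Face:
    def __init__(self, name, edges):
        self.name = name
        self.edges = edges
        for e in edges:
            e.uses.append(self)      # register a use record on each edge

e = Edge("e0")
f1, f2, f3 = (Face(n, [e]) for n in ("f1", "f2", "f3"))
# non-manifold edge with three incident faces; edge-to-face query is O(#uses)
print([f.name for f in e.uses])
# → ['f1', 'f2', 'f3']
```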
14

Mitocariu, Elena. "Graph representations of discourse structure." International Journal of Advanced Intelligence Paradigms 8, no. 3 (2016): 243. http://dx.doi.org/10.1504/ijaip.2016.077490.

15

Chiang, David, Aravind K. Joshi, and David B. Searls. "Grammatical Representations of Macromolecular Structure." Journal of Computational Biology 13, no. 5 (June 2006): 1077–100. http://dx.doi.org/10.1089/cmb.2006.13.1077.

16

Caramazza, Alfonso, and Gabriele Miceli. "The structure of graphemic representations." Cognition 37, no. 3 (December 1990): 243–97. http://dx.doi.org/10.1016/0010-0277(90)90047-n.

17

Enochs, E., S. Estrada, and S. Özdemir. "Transfinite Tree Quivers and their Representations." MATHEMATICA SCANDINAVICA 112, no. 1 (March 1, 2013): 49. http://dx.doi.org/10.7146/math.scand.a-15232.

Abstract:
The idea of a "vertex at infinity" naturally appears when studying indecomposable injective representations of tree quivers. In this paper we formalize this behavior and find the structure of all the indecomposable injective representations of a tree quiver whose size is an arbitrary cardinal $\kappa$. As a consequence, the structure of injective representations of noetherian $\kappa$-trees is completely determined. In the second part we consider the problem of whether arbitrary trees are source injective representation quivers.
18

Sverdlova, Ol'ga, Larisa Kondrat'eva, and Nadezhda Dobrynina. "REPRESENTATION BY STRUCTURE OF SPANNING TREE." Bulletin of the Angarsk State Technical University 1, no. 14 (December 15, 2020): 271–73. http://dx.doi.org/10.36629/2686-777x-2020-1-14-271-273.

Abstract:
The most effective representation of the class of ordinary graphs (in the sense of information capacity) is representation by a tree structure. The paper considers different ways of specifying graphs. A theorem on the possibility of representing networks by a tree structure is given. In proving this theorem, the necessary and sufficient conditions for such a representation are formulated. Issues of transforming representations are considered, and an example of network coding is given. A network can be specified by any tree representation, provided that this representation determines a tree up to the numbering of all its vertices.
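The tree-representation idea can be sketched in Python (an illustrative encoding, not the paper's construction): store a BFS spanning tree plus the leftover "chord" edges, which together determine the graph.

```python
# Sketch: represent a graph by a BFS spanning tree plus its chord edges
# (illustrative only; the paper's coding scheme is not reproduced here).
from collections import deque

def spanning_tree(adj, root):
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    tree = {(min(u, v), max(u, v)) for v, u in parent.items() if u is not None}
    all_edges = {(min(u, v), max(u, v)) for u in adj for v in adj[u]}
    return tree, all_edges - tree     # tree edges, chord edges

triangle = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
tree_edges, chords = spanning_tree(triangle, 1)
print(sorted(tree_edges), sorted(chords))
# → [(1, 2), (1, 3)] [(2, 3)]
```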
19

REINEKE, MARCUS. "THE MONOID OF FAMILIES OF QUIVER REPRESENTATIONS." Proceedings of the London Mathematical Society 84, no. 3 (April 29, 2002): 663–85. http://dx.doi.org/10.1112/s0024611502013497.

Abstract:
A monoid structure on families of representations of a quiver is introduced by taking extensions of representations in families, that is, subvarieties of the varieties of representations. The study of this monoid leads to interesting interactions between representation theory, algebraic geometry and quantum group theory. For example, it produces a wealth of interesting examples of families of quiver representations, which can be analysed by representation-theoretic and geometric methods. Conversely, results from representation theory, in particular A. Schofield's work on general properties of quiver representations, allow us to relate the monoid to certain degenerate forms of quantized enveloping algebras. 2000 Mathematics Subject Classification: 16G20, 14L30, 17B37.
20

Franco-Ramírez, Julieta Armida, Carlos Enrique Cabrera-Pivaral, Gabriel Zárate-Guerrero, Sergio Alberto Franco-Chávez, María de los Ángeles Covarrubias-Bermúdez, and Marco Antonio Zavala-González. "Structure and content of the maternal representations of Mexican teenagers during their first pregnancy." Revista Brasileira de Saúde Materno Infantil 19, no. 4 (December 2019): 897–906. http://dx.doi.org/10.1590/1806-93042019000400009.

Abstract:
Objectives: understand the structure and content of the maternal representations of Mexican teenagers during their first pregnancy. Methods: a study was carried out with qualitative methodology based on the concept of maternal representation and the theory of social representations with 30 adolescents who attended prenatal control at the Civil Hospital of Guadalajara "Fray Antonio Alcalde", in Jalisco, Mexico. The participants were interviewed with the consent of their tutors. Classical content analysis techniques were used to obtain codes and thematic categories to develop a conceptual map that explains maternal representations. Results: the maternal representation was identified: "Pregnant but reunited, a legitimated bad decision", which was composed of social meanings towards adolescent pregnancy, family dynamics, expectations towards motherhood, and the feelings experienced by the adolescent during the pregnancy. The content of the representations was heterogeneous for most of the identified categories; however, it is identified that the desire for pregnancy guides the expectations of the adolescent about her future way of being as a mother. Conclusions: the desire of women for pregnancy, the level of participation of the couple, and the social meanings of adolescent pregnancy, have an outstanding role in the development of models of maternal representations.
21

Gushchin, V. I., O. L. Kuvshinova, O. S. Shalina, P. Suedfeld, and Ph J. Johnson. "STRUCTURE OF AUTOBIOGRAPHIC REPRESENTATION OF VETERAN COSMONAUTS." Aerospace and Environmental Medicine 54, no. 5 (2020): 29–38. http://dx.doi.org/10.21687/0233-528x-2020-54-5-29-38.

Abstract:
The paper deals with the structural analysis of self-representations of cosmonauts who have finished their flight careers. Self-representation is a central formation of self-consciousness, while profession and family are two important spheres of human realization and development. The authors consider how veteran cosmonauts represent their families and professional selves, and the place of career and family in their lives, in the context of activity theory (A.N. Leontiev) and self-determination theory (E. Deci, R. Ryan). We analyzed the self-representations of 15 veterans enrolled in the corps of cosmonauts in the 1970s, 1980s and 1990s. According to our data, the different decades mark individual epochs, each with its own socio-historical context. Cosmonauts in these groups differ in their perception of career success, family history, attitude to life, and personal mission at large. Work and family are unequal in their «weight» within the structure of self-representation or self-consciousness.
22

Men'shenin, V. V. "Magnetic Phase Transitions to an Incommensurate Magnetic Structure in the FeGe_2 Compound" [in Russian]. Fizika tverdogo tela 61, no. 3 (2019): 552. http://dx.doi.org/10.21883/ftt.2019.03.47251.269.

Abstract:
A symmetry analysis of possible magnetic structures in the incommensurate magnetic phase of the FeGe_2 compound, resulting from phase transitions from the paramagnetic phase, was performed on the basis of a phenomenological consideration. It is shown that the two possible approaches to such an analysis, the first of which uses the magnetic representation of the space group, while the second is based on the expansion of the magnetic moment in basis functions of irreducible representations of the space group of the paramagnetic phase, yield the same results. The space group irreducible representations according to which the transition to an incommensurate structure can occur are determined; the set of these representations appears identical in both approaches. Ginzburg–Landau functionals for analyzing the transitions according to these representations are written. A renormalization group analysis of the second-order phase transitions from the paramagnetic state to the incommensurate magnetic structure is performed. It is shown that a helical magnetic structure can arise in the incommensurate phase as a result of two second-order phase transitions at the transition temperatures.
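For orientation, a Ginzburg–Landau functional of the kind the abstract refers to is a free-energy expansion in the order parameter; a generic single-order-parameter form (the paper's actual functionals are built from the specific irreducible representations of the FeGe_2 space group and are not reproduced here) reads:

```latex
F[\eta] = F_0 + \int \mathrm{d}^3 r \,\Bigl[\, a\,|\eta(\mathbf{r})|^{2}
  + \tfrac{b}{2}\,|\eta(\mathbf{r})|^{4}
  + c\,|\nabla \eta(\mathbf{r})|^{2} \Bigr],
\qquad a \propto (T - T_{c})
```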
23

Ta'aseh, Nevo, and Offer Shai. "Network Graph Theory Perspective on Skeletal Structures for Theoretical and Educational Purposes." International Journal of Mechanical Engineering Education 36, no. 4 (October 2008): 294–319. http://dx.doi.org/10.7227/ijmee.36.4.3.

Abstract:
The paper introduces an approach to the analysis of skeletal structures in which they are represented by a discrete mathematical model called graph representation. The paper shows that the reasoning upon the structure can be performed solely upon the representation, which, besides the theoretical value, presents a powerful educational tool. Students can learn skeletal structures entirely through the graph representations and derive advanced structural topics, including the conjugate theorem and the unit force method from the theorems and principles of network graph theory. The graph representations used in the paper for structures have also been applied to represent systems from different engineering disciplines. This provides students with a multidisciplinary perspective on analysis of engineering systems in general, and skeletal structures in particular.
24

Castel, Philippe, Rachel Morlot, and Marie-Françoise Lacassagne. "On Methods of Access to the Structure of Social Representations: the Example of Europe." Spanish journal of psychology 15, no. 3 (November 2012): 1222–32. http://dx.doi.org/10.5209/rev_sjop.2012.v15.n3.39409.

Abstract:
The aim of this study is to identify the logic behind a range of statistical methods used to reveal the structure of social representations. Subjects (N = 317) were asked to answer the following question: “For each category of European person, please indicate which other European he would most like to have contact with”. The results of the similarity analysis lead us to the conclusion that there is an ethnocentric bias, and reveal the central factor of the representation. The representation obtained by factorial correspondence analysis seems closer to current reality and enables us to understand the divisions that have structured Europe and remained embedded in the subjects. Thus, the choice of analytical method is not merely anecdotal, given that representations obtained from the same data can vary immensely.
25

Chen, Yuhao, Alexander Wong, Yuan Fang, Yifan Wu, and Linlin Xu. "Deep Residual Transform for Multi-scale Image Decomposition." Journal of Computational Vision and Imaging Systems 6, no. 1 (January 15, 2021): 1–5. http://dx.doi.org/10.15353/jcvis.v6i1.3537.

Full text
Abstract:
Multi-scale image decomposition (MID) is a fundamental task in computer vision and image processing that involves the transformation of an image into a hierarchical representation comprising different levels of visual granularity, from coarse structures to fine details. A well-engineered MID disentangles the image signal into meaningful components which can be used in a variety of applications such as image denoising, image compression, and object classification. Traditional MID approaches such as wavelet transforms tackle the problem through carefully designed basis functions under rigid decomposition structure assumptions. However, as the information distribution varies from one type of image content to another, rigid decomposition assumptions lead to inefficient representations, i.e., some scales can contain little to no information. To address this issue, we present Deep Residual Transform (DRT), a data-driven MID strategy where the input signal is transformed into a hierarchy of non-linear representations at different scales, with each representation being independently learned as the representational residual of previous scales at a user-controlled detail level. As such, the proposed DRT progressively disentangles scale information from the original signal by sequentially learning residual representations. The decomposition flexibility of this approach allows for representations highly tailored to specific types of image content, and results in greater representational efficiency and compactness. In this study, we realize the proposed transform by leveraging a hierarchy of sequentially trained autoencoders. To explore the efficacy of the proposed DRT, we leverage two datasets comprising very different types of image content: 1) CelebFaces and 2) Cityscapes. Experimental results show that the proposed DRT achieved highly efficient information decomposition on both datasets despite their very different visual granularity characteristics.
APA, Harvard, Vancouver, ISO, and other styles
26

Fedorenko, Evelina, Josh H. McDermott, Sam Norman-Haignere, and Nancy Kanwisher. "Sensitivity to musical structure in the human brain." Journal of Neurophysiology 108, no. 12 (December 15, 2012): 3289–300. http://dx.doi.org/10.1152/jn.00209.2012.

Full text
Abstract:
Evidence from brain-damaged patients suggests that regions in the temporal lobes, distinct from those engaged in lower-level auditory analysis, process the pitch and rhythmic structure in music. In contrast, neuroimaging studies targeting the representation of music structure have primarily implicated regions in the inferior frontal cortices. Combining individual-subject fMRI analyses with a scrambling method that manipulated musical structure, we provide evidence of brain regions sensitive to musical structure bilaterally in the temporal lobes, thus reconciling the neuroimaging and patient findings. We further show that these regions are sensitive to the scrambling of both pitch and rhythmic structure but are insensitive to high-level linguistic structure. Our results suggest the existence of brain regions with representations of musical structure that are distinct from high-level linguistic representations and lower-level acoustic representations. These regions provide targets for future research investigating possible neural specialization for music or its associated mental processes.
APA, Harvard, Vancouver, ISO, and other styles
27

Кузич and Marina Kuzich. "Intellectual Representations in Students’ Activities." Socio-Humanitarian Research and Technology 3, no. 3 (September 10, 2014): 39–40. http://dx.doi.org/10.12737/6228.

Full text
Abstract:
This article deals with the problem of intellectual representations. The author asks how modern students should be taught and which structure of intelligence should be relied upon, given the development of science and society in general. Cognitive and metacognitive structures of students’ intelligence are considered as well. The intellectual abilities of learning, convergence, and creativity, which express the cognitive and metacognitive structure of students’ intelligence, are presented. Mental experience, which reflects these intellectual abilities, is also considered.
APA, Harvard, Vancouver, ISO, and other styles
28

SALAMOURA, ANGELIKI, and JOHN N. WILLIAMS. "Processing verb argument structure across languages: Evidence for shared representations in the bilingual lexicon." Applied Psycholinguistics 28, no. 4 (September 28, 2007): 627–60. http://dx.doi.org/10.1017/s0142716407070348.

Full text
Abstract:
Although the organization of first language (L1) and second language (L2) lexicosemantic information has been extensively studied in the bilingual literature, little evidence exists concerning how syntactic information associated with words is represented across languages. The present study examines the shared or independent nature of the representation of verb argument structure in the bilingual mental lexicon and the contribution of constituent order and thematic role information in these representations. In three production tasks, Greek (L1) advanced learners of English (L2) generated an L1 prime structure (Experiment 1: prepositional object [PO] and double object [DO] structures; Experiment 2: PO, DO, and intransitive structures; Experiment 3: PO, DO, locative, and “provide (someone) with (something)” structures) before completing an L2 target structure (PO or DO only). Experiment 1 showed L1-to-L2 syntactic priming; participants tended to reuse L1 structure when producing L2 utterances. Experiments 2 and 3 showed that this tendency was contingent on the combination of both syntactic structure and thematic roles up to the first postverbal argument. Based on these findings, we outline a model of shared representations of syntactic and thematic information for L1 and L2 verbs in the bilingual lexicon.
APA, Harvard, Vancouver, ISO, and other styles
29

Dong, Bin, Songlei Jian, and Kai Lu. "Learning Multimodal Representations by Symmetrically Transferring Local Structures." Symmetry 12, no. 9 (September 13, 2020): 1504. http://dx.doi.org/10.3390/sym12091504.

Full text
Abstract:
Multimodal representations play an important role in multimodal learning tasks, including cross-modal retrieval and intra-modal clustering. However, existing multimodal representation learning approaches focus on building one common space by aligning different modalities and ignore the complementary information across the modalities, such as the intra-modal local structures. In other words, they only focus on the object-level alignment and ignore structure-level alignment. To tackle the problem, we propose a novel symmetric multimodal representation learning framework by transferring local structures across different modalities, namely MTLS. A customized soft metric learning strategy and an iterative parameter learning process are designed to symmetrically transfer local structures and enhance the cluster structures in intra-modal representations. The bidirectional retrieval loss based on multi-layer neural networks is utilized to align two modalities. MTLS is instantiated with image and text data and shows its superior performance on image-text retrieval and image clustering. MTLS outperforms the state-of-the-art multimodal learning methods by up to 32% in terms of R@1 on text-image retrieval and 16.4% in terms of AMI on clustering.
APA, Harvard, Vancouver, ISO, and other styles
30

PĂUNESCU, LIVIU. "A convex structure on sofic embeddings." Ergodic Theory and Dynamical Systems 34, no. 4 (March 14, 2013): 1343–52. http://dx.doi.org/10.1017/etds.2012.193.

Full text
Abstract:
Nathanial Brown [Topological dynamical systems associated to II₁-factors. Adv. Math. 227(4), 1665–1699] introduced a convex-like structure on the set of unitary equivalence classes of unital *-homomorphisms of a separable type II₁ factor into R^ω (ultrapower of the hyperfinite factor). The goal of this paper is to introduce such a structure on the set of sofic representations of groups. We prove that if the commutant of a representation acts ergodically on the Loeb measure space then that representation is an extreme point.
APA, Harvard, Vancouver, ISO, and other styles
31

Schürmann, Michael. "A class of representations of involutive bialgebras." Mathematical Proceedings of the Cambridge Philosophical Society 107, no. 1 (January 1990): 149–75. http://dx.doi.org/10.1017/s0305004100068432.

Full text
Abstract:
A class of representations on Fock space is associated to a representation of the *-algebra structure of a cocommutative graded bialgebra with an involution. We prove that the Gelfand–Naimark–Segal (GNS) representation given by the convolution exponential of a conditionally positive linear functional can be embedded into a representation of this class. Our theory generalizes a well-known construction for infinitely divisible positive definite functions on a group. Applying our general result, we obtain a complete characterization of the GNS representations given by infinitely divisible states on involutive Lie superalgebras.
APA, Harvard, Vancouver, ISO, and other styles
32

NOJIRI, SHIN'ICHI. "GROUP-THEORETICAL STRUCTURE OF N = 1 AND N = 2 TWO-FORM SUPERGRAVITY." International Journal of Modern Physics A 11, no. 27 (October 30, 1996): 4907–19. http://dx.doi.org/10.1142/s0217751x96002248.

Full text
Abstract:
We clarify the group-theoretical structure of N = 1 and N = 2 two-form supergravity, which is classically equivalent to the Einstein supergravity. N = 1 and N = 2 two-form supergravity theories can be formulated as gauge theories. By introducing two Grassmann variables θA (A = 1, 2), we construct the explicit representations of the generators Qi of the gauge group, which makes it possible to express any product of the generators as a linear combination of the generators [Formula: see text]. By using the expression and the tensor product representation, we show how to construct finite-dimensional representations of the gauge groups. Based on these representations, we construct the Lagrangians of N = 1 and N = 2 two-form supergravity theories.
APA, Harvard, Vancouver, ISO, and other styles
33

Stewart, Russell S., Chao Huang, Megan T. Arnett, and Tansu Celikel. "Spontaneous oscillations in intrinsic signals reveal the structure of cerebral vasculature." Journal of Neurophysiology 109, no. 12 (June 15, 2013): 3094–104. http://dx.doi.org/10.1152/jn.01200.2011.

Full text
Abstract:
Functional imaging of intrinsic signals allows minimally invasive spatiotemporal mapping of stimulus representations in the cortex, but representations are often corrupted by stimulus-independent spatial artifacts, especially those originating from the blood vessels. In this paper, we present novel algorithms for unsupervised identification of cerebral vascularization, allowing blind separation of stimulus representations from noise. These algorithms commonly take advantage of the temporal fluctuations in global reflectance to extract anatomic information. More specifically, the phase of low-frequency oscillations relative to global fluctuations reveals local vascular identity. Arterioles can be reconstructed using their characteristically high power in those frequencies corresponding to respiration, heartbeat, and vasomotion signals. By treating the vasculature as a dynamic flow network, we finally demonstrate that direction of blood perfusion can be quantitatively visualized. Application of these methods for removal of stimulus-independent changes in reflectance permits isolation of stimulus-evoked representations even if the representation spatially overlaps with blood vessels. The algorithms can be expanded further to extract temporal information on blood flow, monitor revascularization following a focal stroke, and distinguish arterioles from venules and parenchyma.
APA, Harvard, Vancouver, ISO, and other styles
34

Rachkovskij, Dmitri A., and Ernst M. Kussul. "Binding and Normalization of Binary Sparse Distributed Representations by Context-Dependent Thinning." Neural Computation 13, no. 2 (February 2001): 411–52. http://dx.doi.org/10.1162/089976601300014592.

Full text
Abstract:
Distributed representations have often been criticized as inappropriate for encoding data with a complex structure. However, Plate's holographic reduced representations and Kanerva's binary spatter codes are recent schemes that allow on-the-fly encoding of nested compositional structures by real-valued or dense binary vectors of fixed dimensionality. In this article we consider procedures of the context-dependent thinning developed for representation of complex hierarchical items in the architecture of associative-projective neural networks. These procedures provide binding of items represented by sparse binary codevectors (with low probability of 1s). Such an encoding is biologically plausible and allows a high storage capacity of distributed associative memory where the codevectors may be stored. In contrast to known binding procedures, context-dependent thinning preserves the same low density (or sparseness) of the bound codevector for a varied number of component codevectors. Besides, a bound codevector is similar not only to another one with similar component codevectors (as in other schemes) but also to the component codevectors themselves. This allows the similarity of structures to be estimated by the overlap of their codevectors, without retrieval of the component codevectors. This also allows easy retrieval of the component codevectors. Examples of algorithmic and neural network implementations of the thinning procedures are considered. We also present representation examples for various types of nested structured data (propositions using role filler and predicate arguments schemes, trees, and directed acyclic graphs) using sparse codevectors of fixed dimension. Such representations may provide a fruitful alternative to the symbolic representations of traditional artificial intelligence as well as to the localist and microfeature-based connectionist representations.
APA, Harvard, Vancouver, ISO, and other styles
35

Xing, Wei, Shireen Elhabian, Robert Kirby, Ross T. Whitaker, and Shandian Zhe. "Infinite ShapeOdds: Nonparametric Bayesian Models for Shape Representations." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6462–69. http://dx.doi.org/10.1609/aaai.v34i04.6118.

Full text
Abstract:
Learning compact representations for shapes (binary images) is important for many applications. Although neural network models are very powerful, they usually involve many parameters, require substantial tuning efforts and easily overfit small datasets, which are common in shape-related applications. The state-of-the-art approach, ShapeOdds, as a latent Gaussian model, can effectively prevent overfitting and is more robust. Nonetheless, it relies on a linear projection assumption and is incapable of capturing intrinsic nonlinear shape variations, and hence may lead to inferior representations and structure discovery. To address these issues, we propose Infinite ShapeOdds (InfShapeOdds), a Bayesian nonparametric shape model, which is flexible enough to capture complex shape variations and discover hidden cluster structures, while still avoiding overfitting. Specifically, we use matrix Gaussian priors, nonlinear feature mappings and the kernel trick to generalize ShapeOdds to a shape-variate Gaussian process model, which can grasp various nonlinear correlations among the pixels within and across (different) shapes. To further discover the hidden structures in data, we place a Dirichlet process mixture (DPM) prior over the representations to jointly infer the cluster number and memberships. Finally, we exploit the Kronecker-product structure in our model to develop an efficient, truncated variational expectation-maximization algorithm for model estimation. On synthetic and real-world data, we show the advantage of our method in both representation learning and latent structure discovery.
APA, Harvard, Vancouver, ISO, and other styles
36

Rives, Alexander, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, et al. "Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences." Proceedings of the National Academy of Sciences 118, no. 15 (April 5, 2021): e2016239118. http://dx.doi.org/10.1073/pnas.2016239118.

Full text
Abstract:
In the field of artificial intelligence, a combination of scale in data and model capacity enabled by unsupervised learning has led to major advances in representation learning and statistical generation. In the life sciences, the anticipated growth of sequencing promises unprecedented data on natural sequence diversity. Protein language modeling at the scale of evolution is a logical step toward predictive and generative artificial intelligence for biology. To this end, we use unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250 million protein sequences spanning evolutionary diversity. The resulting model contains information about biological properties in its representations. The representations are learned from sequence data alone. The learned representation space has a multiscale organization reflecting structure from the level of biochemical properties of amino acids to remote homology of proteins. Information about secondary and tertiary structure is encoded in the representations and can be identified by linear projections. Representation learning produces features that generalize across a range of applications, enabling state-of-the-art supervised prediction of mutational effect and secondary structure and improving state-of-the-art features for long-range contact prediction.
APA, Harvard, Vancouver, ISO, and other styles
37

Carroll, J. "The Deep Structure of Literary Representations." Evolution and Human Behavior 20, no. 3 (May 1999): 159–73. http://dx.doi.org/10.1016/s1090-5138(99)00004-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Robinson, Daniel D., Thomas W. Barlow, and W. Graham Richards. "Reduced Dimensional Representations of Molecular Structure." Journal of Chemical Information and Computer Sciences 37, no. 5 (September 1997): 939–42. http://dx.doi.org/10.1021/ci970424l.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Bakas, Ioannis, and Elias B. Kiritsis. "Structure and Representations of theW∞Algebra." Progress of Theoretical Physics Supplement 102 (1990): 15–37. http://dx.doi.org/10.1143/ptps.102.15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Madl, Tamas, Stan Franklin, Ke Chen, Robert Trappl, and Daniela Montaldi. "Exploring the Structure of Spatial Representations." PLOS ONE 11, no. 6 (June 27, 2016): e0157343. http://dx.doi.org/10.1371/journal.pone.0157343.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Browne, Antony. "Detecting systematic structure in distributed representations." Neural Networks 11, no. 5 (July 1998): 815–24. http://dx.doi.org/10.1016/s0893-6080(98)00052-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ferk, Vesna, Margareta Vrtacnik, Andrej Blejec, and Alenka Gril. "Students' understanding of molecular structure representations." International Journal of Science Education 25, no. 10 (October 2003): 1227–45. http://dx.doi.org/10.1080/0950069022000038231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Bocklandt, Raf, and Stijn Symens. "The Local Structure of Graded Representations." Communications in Algebra 34, no. 12 (December 2006): 4401–26. http://dx.doi.org/10.1080/00927870600938365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kelly, Matthew A., Dorothea Blostein, and D. J. K. Mewhort. "Encoding structure in holographic reduced representations." Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale 67, no. 2 (June 2013): 79–93. http://dx.doi.org/10.1037/a0030301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

LEE, WAN-JUI, VERONIKA CHEPLYGINA, DAVID M. J. TAX, MARCO LOOG, and ROBERT P. W. DUIN. "BRIDGING STRUCTURE AND FEATURE REPRESENTATIONS IN GRAPH MATCHING." International Journal of Pattern Recognition and Artificial Intelligence 26, no. 05 (August 2012): 1260005. http://dx.doi.org/10.1142/s0218001412600051.

Full text
Abstract:
Structures and features are opposite approaches in building representations for object recognition. Bridging the two is an essential problem in pattern recognition, as the two opposite types of information are fundamentally different. As dissimilarities can be computed for both, the dissimilarity representation can be used to combine the two. Attributed graphs contain structural as well as feature-based information. Neglecting the attributes yields a pure structural description. Isolating the features and neglecting the structure represents objects by a bag of features. In this paper we will show that weighted combinations of dissimilarities may perform better than these two extremes, indicating that these two types of information are essentially different and strengthen each other. In addition we present two more advanced integrations than weighted combining and show that these may improve the classification performance even further.
APA, Harvard, Vancouver, ISO, and other styles
46

Rubanets, O. "COGNITIVE APPROACH OF MENTAL REALITY." Bulletin of Taras Shevchenko National University of Kyiv. Philosophy, no. 3 (2018): 27–31. http://dx.doi.org/10.17721/2523-4064.2018/3-6/12.

Full text
Abstract:
This study conceptualizes the peculiarities of the interaction of social and mental representation, reveals features of mental reality, and establishes the ontological status of objects of mental reality. A conceptualization of the relationship between social and mental representations is realized. The structure of the representation hierarchy was revealed, the relationship between the representation hierarchy and the mode of being of the objects of mental reality was clarified, and the role of mental and social representations in the formation of mental reality was revealed. The significance of mental representations in preserving the autonomy of the individual as the basis of a democratic society is shown. For the first time, mental reality was investigated on the basis of the interrelation of social and mental representations. A feature of the ontological status of objects of mental reality is determined. Taking into account the peculiarities of mental representation provides the basis for researching the relationship between social and mental representations. The study of the relationship between social and mental representations can be used in the social practices of a democratic society.
APA, Harvard, Vancouver, ISO, and other styles
47

Schnelle, Helmut. "Fuster’s Cherries and Linguistic Trees." European Review 16, no. 4 (October 2008): 483–95. http://dx.doi.org/10.1017/s1062798708000409.

Full text
Abstract:
Bridging the gap between linguistic structure representations and neurocognitive representations is a difficult challenge. This article presents an outline of how a formally specified system of constituent structure grammar could be translated into a distributed hierarchy network of associated modules. A set of syntactic constituent rule units could be reinterpreted as complex neuronal modules, which interactively generate momentary binding associations. The iterating binding activity in the network corresponds to the syntactic structure representation of a given sentence.
APA, Harvard, Vancouver, ISO, and other styles
48

Edfors, Inger, Susanne Wikman, Brita Johansson Cederblad, and Cedric Linder. "University Students’ Reflections on Representations in Genetics and Stereochemistry Revealed by a Focus Group Approach." Nordic Studies in Science Education 11, no. 2 (May 26, 2015): 169–79. http://dx.doi.org/10.5617/nordina.2044.

Full text
Abstract:
Genetics and organic chemistry are areas of science that students regard as difficult to learn. Part of this difficulty derives from the disciplines having representations as part of their discourses. In order to optimally support students’ meaning-making, teachers need to use representations to structure the meaning-making experience in thoughtful ways that consider the variation in students’ prior knowledge. Using a focus group setting, we explored 43 university students’ reasoning on representations in introductory chemistry and genetics courses. Our analysis of eight focus group discussions revealed how students can construct somewhat bewildered relations with disciplinary-specific representations. The students stated that they preferred familiar representations, but without asserting the meaning-making affordances of those representations. Also, the students were highly aware of the affordances of certain representations, but nonetheless chose not to use those representations in their problem solving. We suggest that an effective representation is one that, to some degree, is familiar to the students, but at the same time is challenging and not too closely related to “the usual one”. The focus group discussions led the students to become more aware of their own and others’ ways of interpreting different representations. Furthermore, feedback from the students’ focus group discussions enhanced the teachers’ awareness of the students’ prior knowledge and of limitations in students’ representational literacy. Consequently, we posit that a focus group setting can be used in a university context to promote both student meaning-making and teacher professional development in a fruitful way.
APA, Harvard, Vancouver, ISO, and other styles
49

Li, Zun, and Michael Wellman. "Structure Learning for Approximate Solution of Many-Player Games." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (April 3, 2020): 2119–27. http://dx.doi.org/10.1609/aaai.v34i02.5586.

Full text
Abstract:
Games with many players are difficult to solve or even specify without adopting structural assumptions that enable representation in compact form. Such structure is generally not given and will not hold exactly for particular games of interest. We introduce an iterative structure-learning approach to search for approximate solutions of many-player games, assuming only black-box simulation access to noisy payoff samples. Our first algorithm, K-Roles, exploits symmetry by learning a role assignment for players of the game through unsupervised learning (clustering) methods. Our second algorithm, G3L, seeks sparsity by greedy search over local interactions to learn a graphical game model. Both algorithms use supervised learning (regression) to fit payoff values to the learned structures, in compact representations that facilitate equilibrium calculation. We experimentally demonstrate the efficacy of both methods in reaching quality solutions and uncovering hidden structure, on both perfectly and approximately structured game instances.
APA, Harvard, Vancouver, ISO, and other styles
50

Chornozhuk, S. "The New Geometric “State-Action” Space Representation for Q-Learning Algorithm for Protein Structure Folding Problem." Cybernetics and Computer Technologies, no. 3 (October 27, 2020): 59–73. http://dx.doi.org/10.34229/2707-451x.20.3.6.

Full text
Abstract:
Introduction. The spatial protein structure folding is an important and current problem in computational biology. Considering the mathematical model of the task, it can be easily concluded that finding an optimal protein conformation in a three-dimensional grid is an NP-hard problem. Therefore some reinforcement learning techniques, such as the Q-learning approach, can be used to solve the problem. The article proposes a new geometric “state-action” space representation which significantly differs from all alternative representations used for this problem. The purpose of the article is to analyze existing approaches to representing the state and action spaces for the Q-learning algorithm for the protein structure folding problem, reveal their advantages and disadvantages, and propose the new geometric “state-action” space representation. The goal is then to compare the existing and proposed approaches and draw conclusions, also describing possible steps for further research. Result. The work of the proposed algorithm is compared with others on the basis of 10 known chains with a length of 48 first proposed in [16]. For each of the chains, the Q-learning algorithm with the proposed “state-action” space representation outperformed the same Q-learning algorithm with alternative existing representations both in terms of average and minimal energy values of the resulting conformations. Moreover, many of the existing representations are used for 2D protein structure prediction. However, during the experiments both the existing and proposed representations were slightly changed or developed to solve the problem in 3D, which is a more computationally demanding task. Conclusion. The quality of the Q-learning algorithm with the proposed geometric “state-action” space representation has been experimentally confirmed. Consequently, further research is shown to be promising.
Moreover, several steps of possible future research, such as combining the proposed approach with deep learning techniques, have already been suggested. Keywords: Spatial protein structure, combinatorial optimization, relative coding, machine learning, Q-learning, Bellman equation, state space, action space, basis in 3D space.
APA, Harvard, Vancouver, ISO, and other styles
