
Journal articles on the topic "Generative Semantik"



Consult the top 50 journal articles for your research on the topic "Generative Semantik".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Zhao, Xiaojie, Yuming Shen, Shidong Wang, and Haofeng Zhang. "Boosting Generative Zero-Shot Learning by Synthesizing Diverse Features with Attribute Augmentation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3454–62. http://dx.doi.org/10.1609/aaai.v36i3.20256.

Abstract
The recent advance in deep generative models outlines a promising perspective in the realm of Zero-Shot Learning (ZSL). Most generative ZSL methods use category semantic attributes plus Gaussian noise to generate visual features. After generating unseen samples, this family of approaches effectively transforms the ZSL problem into a supervised classification scheme. However, the existing models use a single semantic attribute, which contains the complete attribute information of the category. The generated data also carry the complete attribute information, but in reality, visual samples usually have limited attributes. Therefore, the data generated from the attribute could have incomplete semantics. Based on this fact, we propose a novel framework to boost ZSL by synthesizing diverse features. This method uses augmented semantic attributes to train the generative model, so as to simulate the real distribution of visual features. We evaluate the proposed model on four benchmark datasets, observing significant performance improvement against the state-of-the-art.
2

Jun, Hee-Gook, and Dong-Hyuk Im. "Semantics-Preserving RDB2RDF Data Transformation Using Hierarchical Direct Mapping". Applied Sciences 10, no. 20 (October 12, 2020): 7070. http://dx.doi.org/10.3390/app10207070.

Abstract
Direct mapping is an automatic transformation method used to generate resource description framework (RDF) data from relational data. In the field of direct mapping, semantics preservation is critical to ensure that the mapping method outputs RDF data without information loss or incorrect semantic data generation. However, existing direct-mapping methods have problems that prevent semantics preservation in specific cases. For this reason, a mapping method is developed to perform a semantics-preserving transformation of relational databases (RDB) into RDF data without semantic information loss and to reduce the volume of incorrect RDF data. This research reviews cases that do not generate semantics-preserving results and arranges the corresponding problems into categories. This paper defines lemmas that represent the features of RDF data transformation to resolve those problems. Based on the lemmas, this work develops a hierarchical direct-mapping method to strictly abide by the definition of semantics preservation and to prevent semantic information loss, reducing the volume of incorrect RDF data generated. Experiments demonstrate the capability of the proposed method to perform semantics-preserving RDB2RDF data transformation, generating semantically accurate results. This work impacts future studies, which should involve the development of synchronization methods to achieve RDF data consistency when original RDB data are modified.
3

Sun, Jingxiang, Xuan Wang, Yichun Shi, Lizhen Wang, Jue Wang, and Yebin Liu. "IDE-3D". ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–10. http://dx.doi.org/10.1145/3550454.3555506.

Abstract
Existing 3D-aware facial generation methods face a dilemma in quality versus editability: they either generate editable results in low resolution, or high-quality ones with no editing flexibility. In this work, we propose a new approach that brings the best of both worlds together. Our system consists of three major components: (1) a 3D-semantics-aware generative model that produces view-consistent, disentangled face images and semantic masks; (2) a hybrid GAN inversion approach that initializes the latent codes from the semantic and texture encoder, and further optimizes them for faithful reconstruction; and (3) a canonical editor that enables efficient manipulation of semantic masks in canonical view and produces high-quality editing results. Our approach is competent for many applications, e.g. free-view face drawing, editing and style control. Both quantitative and qualitative results show that our method reaches the state-of-the-art in terms of photorealism, faithfulness and efficiency.
4

Saquil, Yassir, Qun-Ce Xu, Yong-Liang Yang, and Peter Hall. "Rank3DGAN: Semantic Mesh Generation Using Relative Attributes". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5586–94. http://dx.doi.org/10.1609/aaai.v34i04.6011.

Abstract
In this paper, we investigate a novel problem of using generative adversarial networks in the task of 3D shape generation according to semantic attributes. Recent works map 3D shapes into 2D parameter domain, which enables training Generative Adversarial Networks (GANs) for 3D shape generation task. We extend these architectures to the conditional setting, where we generate 3D shapes with respect to subjective attributes defined by the user. Given pairwise comparisons of 3D shapes, our model performs two tasks: it learns a generative model with a controlled latent space, and a ranking function for the 3D shapes based on their multi-chart representation in 2D. The capability of the model is demonstrated with experiments on HumanShape, Basel Face Model and reconstructed 3D CUB datasets. We also present various applications that benefit from our model, such as multi-attribute exploration, mesh editing, and mesh attribute transfer.
5

Koktová, E. "Focus in generative grammar Michael S. Rochemont, Joachim Jacobs, Fokus und Skalen: Zur Syntax und Semantik der Gradpartikeln im Deutschen Marit R. Westergaard, Definite NP Anaphora: A Pragmatic Approach". Journal of Pragmatics 12, no. 2 (April 1988): iv. http://dx.doi.org/10.1016/0378-2166(88)90063-x.

6

Yang, Ximing, Yuan Wu, Kaiyi Zhang, and Cheng Jin. "CPCGAN: A Controllable 3D Point Cloud Generative Adversarial Network with Semantic Label Generating". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3154–62. http://dx.doi.org/10.1609/aaai.v35i4.16425.

Abstract
Generative Adversarial Networks (GAN) are good at generating variant samples of complex data distributions. Generating a sample with certain properties is one of the major tasks in the real-world application of GANs. In this paper, we propose a novel generative adversarial network to generate 3D point clouds from random latent codes, named Controllable Point Cloud Generative Adversarial Network (CPCGAN). A two-stage GAN framework is utilized in CPCGAN and a sparse point cloud containing major structural information is extracted as the middle-level information between the two stages. With their help, CPCGAN has the ability to control the generated structure and generate 3D point clouds with semantic labels for points. Experimental results demonstrate that the proposed CPCGAN outperforms state-of-the-art point cloud GANs.
7

Cipriani, Enrico. "Semantics in generative grammar". Lingvisticæ Investigationes. International Journal of Linguistics and Language Resources 42, no. 2 (December 31, 2019): 134–85. http://dx.doi.org/10.1075/li.00033.cip.

Abstract
I provide a critical survey of the role that semantics took in the several models of generative grammar, from the 1950s until the Minimalist Program. I distinguish four different periods. In the first section, I focus on the role of formal semantics in generative grammar until the 1970s. In Section 2 I present the period of linguistic wars, when the role of semantics in linguistic theory became a crucial topic of debate. In Section 3 I focus on the formulation of conditions on transformations and Binding Theory in the 1970s and 1980s, while in the last Section I discuss the role of semantics in the minimalist approach. In this section, I also propose a semantically-based model of generative grammar, which fully endorses minimalism and Chomsky’s later position concerning the primary role of the semantic interface in the Universal Grammar modelization (Strong Minimalist Thesis). In the Discussion, I point out some theoretical problems deriving from Chomsky’s internalist interpretation of model-theoretic semantics.
8

Yang, Guan, Ayou Han, Xiaoming Liu, Yang Liu, Tao Wei, and Zhiyuan Zhang. "Enhancing Semantic-Consistent Features and Transforming Discriminative Features for Generalized Zero-Shot Classifications". Applied Sciences 12, no. 24 (December 9, 2022): 12642. http://dx.doi.org/10.3390/app122412642.

Abstract
Generalized zero-shot learning (GZSL) aims to classify classes that do not appear during training. Recent state-of-the-art approaches rely on generative models, which use correlating semantic embeddings to synthesize unseen classes visual features; however, these approaches ignore the semantic and visual relevance, and visual features synthesized by generative models do not represent their semantics well. Although existing GZSL methods based on generative model disentanglement consider consistency between visual and semantic models, these methods consider semantic consistency only in the training phase and ignore semantic consistency in the feature synthesis and classification phases. The absence of such constraints may lead to an unrepresentative synthesized visual model with respect to semantics, and the visual and semantic features are not modally well aligned, thus causing the bias between visual and semantic features. Therefore, an approach for GZSL is proposed to enhance semantic-consistent features and discriminative features transformation (ESTD-GZSL). The proposed method can enhance semantic-consistent features at all stages of GZSL. A semantic decoder module is first added to the VAE to map synthetic and real features to the corresponding semantic embeddings. This regularization method allows synthesizing unseen classes for a more representative visual representation, and synthetic features can better represent their semantics. Then, the semantic-consistent features decomposed by the disentanglement module and the features output by the semantic decoder are transformed into enhanced semantic-consistent discriminative features and used in classification to reduce the ambiguity between categories. The experimental results show that our proposed method achieves more competitive results on four benchmark datasets (AWA2, CUB, FLO, and APY) of GZSL.
9

Yao, Xuchen, Gosse Bouma, and Yi Zhang. "Semantics-based Question Generation and Implementation". Dialogue & Discourse 3, no. 2 (March 16, 2012): 11–42. http://dx.doi.org/10.5087/dad.2012.202.

Abstract
This paper presents a question generation system based on the approach of semantic rewriting. The state-of-the-art deep linguistic parsing and generation tools are employed to convert (back and forth) between the natural language sentences and their meaning representations in the form of Minimal Recursion Semantics (MRS). By carefully operating on the semantic structures, we show a principled way of generating questions without ad-hoc manipulation of the syntactic structures. Based on the (partial) understanding of the sentence meaning, the system generates questions which are semantically grounded and purposeful. And with the support of deep linguistic grammars, the grammaticality of the generation results is warranted. Further, with a specialized ranking model, the linguistic realizations from the general purpose generation model are further refined for our question generation task. The evaluation results from QGSTEC2010 show promising prospects of the proposed approach.
10

Zhao, Jiaojiao, Jungong Han, Ling Shao, and Cees G. M. Snoek. "Pixelated Semantic Colorization". International Journal of Computer Vision 128, no. 4 (December 7, 2019): 818–34. http://dx.doi.org/10.1007/s11263-019-01271-4.

Abstract
While many image colorization algorithms have recently shown the capability of producing plausible color versions from gray-scale photographs, they still suffer from limited semantic understanding. To address this shortcoming, we propose to exploit pixelated object semantics to guide image colorization. The rationale is that human beings perceive and distinguish colors based on the semantic categories of objects. Starting from an autoregressive model, we generate image color distributions, from which diverse colored results are sampled. We propose two ways to incorporate object semantics into the colorization model: through a pixelated semantic embedding and a pixelated semantic generator. Specifically, the proposed network includes two branches. One branch learns what the object is, while the other branch learns the object colors. The network jointly optimizes a color embedding loss, a semantic segmentation loss and a color generation loss, in an end-to-end fashion. Experiments on Pascal VOC2012 and COCO-stuff reveal that our network, when trained with semantic segmentation labels, produces more realistic and finer results compared to the colorization state-of-the-art.
11

Zhang, Zeqing, Xiaofan Li, Tai Ma, Zuodong Gao, Cuihua Li, and Weiwei Lin. "Residual-Prototype Generating Network for Generalized Zero-Shot Learning". Mathematics 10, no. 19 (October 1, 2022): 3587. http://dx.doi.org/10.3390/math10193587.

Abstract
Conventional zero-shot learning aims to train a classifier on a training set (seen classes) to recognize instances of novel classes (unseen classes) by class-level semantic attributes. In generalized zero-shot learning (GZSL), the classifier needs to recognize both seen and unseen classes, which is a problem of extreme data imbalance. To solve this problem, feature generative methods have been proposed to make up for the lack of unseen classes. Current generative methods use class semantic attributes as the cues for synthetic visual features, which can be considered mapping of the semantic attribute to visual features. However, this mapping cannot effectively transfer knowledge learned from seen classes to unseen classes because the information in the semantic attributes and the information in visual features are asymmetric: semantic attributes contain key category description information, while visual features consist of visual information that cannot be represented by semantics. To this end, we propose a residual-prototype-generating network (RPGN) for GZSL that extracts the residual visual features from original visual features by an encoder–decoder and synthesizes the prototype visual features associated with semantic attributes by a disentangle regressor. Experimental results show that the proposed method achieves competitive results on four GZSL benchmark datasets with significant gains.
12

Ke, Qingchao, and Jian Lin. "Dynamic Generation of Knowledge Graph Supporting STEAM Learning Theme Design". Applied Sciences 12, no. 21 (October 30, 2022): 11001. http://dx.doi.org/10.3390/app122111001.

Abstract
Instructional framework based on a knowledge graph makes up for the interdisciplinary theme design ability of teachers in a single discipline, to some extent, and provides a curriculum-oriented theme generation path for STEAM instructional design. This study proposed a dynamic completion model of a knowledge graph based on the subject semantic tensor decomposition. This model can be based on the tensor calculation of multi-disciplinary curriculum standard knowledge semantics to provide more reasonable STEAM project-based learning themes for teachers of those subjects. First, the STEAM multi-disciplinary knowledge semantic dataset was generated through the course’s standard text and open-source encyclopedia data. Next, based on the semantic tensor decomposition of specific STEAM topics, the dynamic generation of knowledge graphs was realized, providing interdisciplinary STEAM learning topic sequences for teachers of a single discipline. Finally, the application experiment of generating STEAM learning themes proved the effectiveness of our model.
13

Chen, Guo, Yin-Dong Zheng, Limin Wang, and Tong Lu. "DCAN: Improving Temporal Action Detection via Dual Context Aggregation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 248–57. http://dx.doi.org/10.1609/aaai.v36i1.19900.

Abstract
Temporal action detection aims to locate the boundaries of action in the video. The current method based on boundary matching enumerates and calculates all possible boundary matchings to generate proposals. However, these methods neglect the long-range context aggregation in boundary prediction. At the same time, due to the similar semantics of adjacent matchings, local semantic aggregation of densely-generated matchings cannot improve semantic richness and discrimination. In this paper, we propose the end-to-end proposal generation method named Dual Context Aggregation Network (DCAN) to aggregate context on two levels, namely, boundary level and proposal level, for generating high-quality action proposals, thereby improving the performance of temporal action detection. Specifically, we design the Multi-Path Temporal Context Aggregation (MTCA) to achieve smooth context aggregation on boundary level and precise evaluation of boundaries. For matching evaluation, Coarse-to-fine Matching (CFM) is designed to aggregate context on the proposal level and refine the matching map from coarse to fine. We conduct extensive experiments on ActivityNet v1.3 and THUMOS-14. DCAN obtains an average mAP of 35.39% on ActivityNet v1.3 and reaches mAP 54.14% at IoU@0.5 on THUMOS-14, which demonstrates DCAN can generate high-quality proposals and achieve state-of-the-art performance. We release the code at https://github.com/cg1177/DCAN.
14

Bau, David, Jun-Yan Zhu, Hendrik Strobelt, Agata Lapedriza, Bolei Zhou, and Antonio Torralba. "Understanding the role of individual units in a deep neural network". Proceedings of the National Academy of Sciences 117, no. 48 (September 1, 2020): 30071–78. http://dx.doi.org/10.1073/pnas.1907375117.

Abstract
Deep neural networks excel at finding hierarchical representations that solve complex tasks over large datasets. How can we humans understand these learned representations? In this work, we present network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks. First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts. We find evidence that the network has learned many object classes that play crucial roles in classifying scene classes. Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes. By analyzing changes made when small sets of units are activated or deactivated, we find that objects can be added and removed from the output scenes while adapting to the context. Finally, we apply our analytic framework to understanding adversarial attacks and to semantic image editing.
15

Zhang, Xiang, and Qiang Yang. "Transfer Hierarchical Attention Network for Generative Dialog System". International Journal of Automation and Computing 16, no. 6 (October 16, 2019): 720–36. http://dx.doi.org/10.1007/s11633-019-1200-0.

Abstract
In generative dialog systems, learning representations for the dialog context is a crucial step in generating high quality responses. The dialog systems are required to capture useful and compact information from mutually dependent sentences such that the generation process can effectively attend to the central semantics. Unfortunately, existing methods may not effectively identify importance distributions for each lower position when computing an upper level feature, which may lead to the loss of information critical to the constitution of the final context representations. To address this issue, we propose a transfer learning based method named transfer hierarchical attention network (THAN). The THAN model can leverage useful prior knowledge from two related auxiliary tasks, i.e., keyword extraction and sentence entailment, to facilitate the dialog representation learning for the main dialog generation task. During the transfer process, the syntactic structure and semantic relationship from the auxiliary tasks are distilled to enhance both the word-level and sentence-level attention mechanisms for the dialog system. Empirically, extensive experiments on the Twitter Dialog Corpus and the PERSONA-CHAT dataset demonstrate the effectiveness of the proposed THAN model compared with the state-of-the-art methods.
16

Li, Linyan, Yu Sun, Fuyuan Hu, Tao Zhou, Xuefeng Xi, and Jinchang Ren. "Text to Realistic Image Generation with Attentional Concatenation Generative Adversarial Networks". Discrete Dynamics in Nature and Society 2020 (October 26, 2020): 1–10. http://dx.doi.org/10.1155/2020/6452536.

Abstract
In this paper, we propose an Attentional Concatenation Generative Adversarial Network (ACGAN) aiming at generating 1024 × 1024 high-resolution images. First, we propose a multilevel cascade structure, for text-to-image synthesis. During training progress, we gradually add new layers and, at the same time, use the results and word vectors from the previous layer as inputs to the next layer to generate high-resolution images with photo-realistic details. Second, the deep attentional multimodal similarity model is introduced into the network, and we match word vectors with images in a common semantic space to compute a fine-grained matching loss for training the generator. In this way, we can pay attention to the fine-grained information of the word level in the semantics. Finally, the measure of diversity is added to the discriminator, which enables the generator to obtain more diverse gradient directions and improve the diversity of generated samples. The experimental results show that the inception scores of the proposed model on the CUB and Oxford-102 datasets have reached 4.48 and 4.16, improved by 2.75% and 6.42% compared to Attentional Generative Adversarial Networks (AttenGAN). The ACGAN model has a better effect on text-generated images, and the resulting image is closer to the real image.
17

Ibrahem, Hatem, Ahmed Salem, and Hyun-Soo Kang. "Exploration of Semantic Label Decomposition and Dataset Size in Semantic Indoor Scenes Synthesis via Optimized Residual Generative Adversarial Networks". Sensors 22, no. 21 (October 29, 2022): 8306. http://dx.doi.org/10.3390/s22218306.

Abstract
In this paper, we revisit the paired image-to-image translation using the conditional generative adversarial network, the so-called "Pix2Pix", and propose efficient optimization techniques for the architecture and the training method to maximize the architecture’s performance to boost the realism of the generated images. We propose a generative adversarial network-based technique to create new artificial indoor scenes using a user-defined semantic segmentation map as an input to define the location, shape, and category of each object in the scene, exactly similar to Pix2Pix. We train different residual connections-based architectures of the generator and discriminator on the NYU depth-v2 dataset and a selected indoor subset from the ADE20K dataset, showing that the proposed models have fewer parameters, less computational complexity, and can generate better quality images than the state of the art methods following the same technique to generate realistic indoor images. We also prove that using extra specific labels and more training samples increases the quality of the generated images; however, the proposed residual connections-based models can learn better from small datasets (i.e., NYU depth-v2) and can improve the realism of the generated images in training on bigger datasets (i.e., ADE20K indoor subset) in comparison to Pix2Pix. The proposed method achieves an LPIPS value of 0.505 and an FID value of 81.067, generating better quality images than that produced by Pix2Pix and other recent paired Image-to-image translation methods and outperforming them in terms of LPIPS and FID.
18

Bureš, Lukáš, Ivan Gruber, Petr Neduchal, Miroslav Hlaváč, and Marek Hrúz. "Semantic Text Segmentation from Synthetic Images of Full-Text Documents". SPIIRAS Proceedings 18, no. 6 (December 2, 2019): 1381–406. http://dx.doi.org/10.15622/sp.2019.18.6.1381-1406.

Abstract
An algorithm (divided into multiple modules) for generating images of full-text documents is presented. These images can be used to train, test, and evaluate models for Optical Character Recognition (OCR). The algorithm is modular, individual parts can be changed and tweaked to generate desired images. A method for obtaining background images of paper from already digitized documents is described. For this, a novel approach based on Variational AutoEncoder (VAE) to train a generative model was used. These backgrounds enable the generation of background images similar to the training ones on the fly. The module for printing the text uses large text corpora, a font, and suitable positional and brightness character noise to obtain believable results (for natural-looking aged documents). A few types of layouts of the page are supported. The system generates a detailed, structured annotation of the synthesized image. Tesseract OCR is used to compare the real-world images to the generated images. The recognition rate is very similar, indicating the proper appearance of the synthetic images. Moreover, the errors which were made by the OCR system in both cases are very similar. From the generated images, a fully-convolutional encoder-decoder neural network architecture for semantic segmentation of individual characters was trained. With this architecture, the recognition accuracy of 99.28% on a test set of synthetic documents is reached.
19

Chen, Anpei, Ruiyang Liu, Ling Xie, Zhang Chen, Hao Su, and Jingyi Yu. "SofGAN: A Portrait Image Generator with Dynamic Styling". ACM Transactions on Graphics 41, no. 1 (February 28, 2022): 1–26. http://dx.doi.org/10.1145/3470848.

Abstract
Recently, Generative Adversarial Networks (GANs) have been widely used for portrait image generation. However, in the latent space learned by GANs, different attributes, such as pose, shape, and texture style, are generally entangled, making the explicit control of specific attributes difficult. To address this issue, we propose a SofGAN image generator to decouple the latent space of portraits into two subspaces: a geometry space and a texture space. The latent codes sampled from the two subspaces are fed to two network branches separately, one to generate the 3D geometry of portraits with canonical pose, and the other to generate textures. The aligned 3D geometries also come with semantic part segmentation, encoded as a semantic occupancy field (SOF). The SOF allows the rendering of consistent 2D semantic segmentation maps at arbitrary views, which are then fused with the generated texture maps and stylized to a portrait photo using our semantic instance-wise module. Through extensive experiments, we show that our system can generate high-quality portrait images with independently controllable geometry and texture attributes. The method also generalizes well in various applications, such as appearance-consistent facial animation and dynamic styling.
20

Teimi, Cherif. "Causativization in Arabic: Evidence for the interface between semantics and morpho-phonology". International Journal of Language and Literary Studies 4, no. 4 (December 29, 2022): 139–57. http://dx.doi.org/10.36892/ijlls.v4i4.1097.

Abstract
Meaning is derived through the interaction of the components of the linguistic system. As established within the Parallel Architecture Framework (Jackendoff 1997), the linguistic system is composed of components considered equal in terms of producing meaning. In other words, linguistic components are related to each other via interface rules and principles so that they cooperate to derive meaning. In this regard, Morpho-phonological processes constitute the interface between morpho-phonology and semantics. Morphological and phonological features of a word bear on its semantic interpretation. In this article, I deal with Causativization in Modern Standard Arabic (MSA, henceforth), representing a pure phenomenon for the morpho-phonology-semantics interface. Causative verbs in MSA provide good insights into this issue. Adopting Jackendoff’s Conceptual Semantics framework proves that morphology is an autonomous generative component that can generate some aspects of meaning either independently or in cooperation with phonology and/ or other linguistic components; therefore, this proves the interface between morpho-phonology and semantics.
21

Mao, Xiaofeng, Shuhui Wang, Liying Zheng, and Qingming Huang. "Semantic invariant cross-domain image generation with generative adversarial networks". Neurocomputing 293 (June 2018): 55–63. http://dx.doi.org/10.1016/j.neucom.2018.02.092.

22

Harris, Randy Allen. "The origin and development of generative semantics". Historiographia Linguistica 20, no. 2-3 (January 1, 1993): 399–440. http://dx.doi.org/10.1075/hl.20.2-3.07har.

Abstract
Against the background of the controversial and polarized work of Frederick Newmeyer and Robin Tolmach Lakoff, this paper chronicles the early development of generative semantics, an internal movement within the transformational model of Chomsky’s Aspects of the Theory of Syntax. The first suggestions toward the movement, whose cornerstone was the obliteration of the syntax-semantics boundary, were by George Lakoff in 1963. But it was the work conducted under the informal banner of “Abstract Syntax” by Paul Postal that began the serious investigations leading to such an obliteration. Lakoff was an active participant in that research, as were Robin Tolmach Lakoff, John Robert (“Háj”) Ross and James D. McCawley. Through their combined efforts, particularly those of McCawley on semantic primitives and lexical insertion, generative semantics took shape in 1967: positing a universal base, importing notions from predicate calculus, decomposing lexical structure, and, most contentiously, rejecting the central element of the Aspects model, deep structure.
23

Singh, Anjali, Ruhi Sharma Mittal, Shubham Atreja, Mourvi Sharma, Seema Nagar, Prasenjit Dey and Mohit Jain. "Automatic Generation of Leveled Visual Assessments for Young Learners". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9713–20. http://dx.doi.org/10.1609/aaai.v33i01.33019713.

Full text
Abstract
Images are an essential tool for communicating with children, particularly at younger ages when they are still developing their emergent literacy skills. Hence, assessments that use images to assess their conceptual knowledge and visual literacy are an important component of their learning process. Creating assessments at scale is a challenging task, which has led to several techniques being proposed for the automatic generation of textual assessments. However, none of them focuses on generating image-based assessments. To understand the manual process of creating visual assessments, we interviewed primary school teachers. Based on the findings from this preliminary study, we present a novel approach which uses image semantics to generate visual multiple-choice questions (VMCQs) for young learners, wherein the options are presented in the form of images. We propose a metric to measure the semantic similarity between two images, which we use to identify the four options – one answer and three distractor images – for a given question. We also use this metric for generating VMCQs at two difficulty levels – easy and hard. Through a quantitative evaluation, we show that the system-generated VMCQs are comparable to VMCQs created by experts, hence establishing the effectiveness of our approach.
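The option-selection step this abstract describes can be illustrated with a generic sketch: embed each candidate image as a feature vector, score the candidates against the answer image with a similarity metric, and keep the closest ones as distractors. The cosine metric and all names below are illustrative assumptions, not the authors' actual implementation.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors (e.g. image embeddings).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def pick_distractors(answer_vec, candidates, k=3):
    # Rank candidate images by semantic similarity to the answer image and
    # return the k most similar ones as distractor options.
    ranked = sorted(candidates.items(),
                    key=lambda item: cosine_similarity(answer_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```

With real embeddings from a pretrained vision model, a "hard" question would presumably draw distractors from the most similar candidates, while an "easy" one would draw from less similar candidates.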
24

Karimova, Muslima. "THE ROLE OF SEMANTICS IN LINGUISTIC COMPETENCE". CURRENT RESEARCH JOURNAL OF PHILOLOGICAL SCIENCES 02, no. 11 (November 1, 2021): 128–34. http://dx.doi.org/10.37547/philological-crjps-02-11-28.

Full text
Abstract
In second language acquisition, semantics assumes heightened importance with regard to achieving target levels of competence. Assuming that the target competence is equal to a native speaker's, a non-native speaker will never fully reach a native level of semantic competence. How L1 semantics can affect L2 semantics is an area of interest in both cognitive and generative linguistics. Research from both perspectives is valuable for evaluating semantics at the lexical and syntactic levels.
25

Li, Zihao, Daobing Zhang, Yang Wang, Daoyu Lin and Jinghua Zhang. "Generative Adversarial Networks for Zero-Shot Remote Sensing Scene Classification". Applied Sciences 12, no. 8 (April 8, 2022): 3760. http://dx.doi.org/10.3390/app12083760.

Full text
Abstract
Deep learning-based methods succeed in remote sensing scene classification (RSSC). However, current methods require training on a large dataset, and they do not work well on classes that do not appear in the training set. Zero-shot classification methods are designed to address the classification of unseen category images, and the generative adversarial network (GAN) is a popular approach among them. Thus, our approach aims to achieve zero-shot RSSC based on GANs. We employed the conditional Wasserstein generative adversarial network (WGAN) to generate image features. Since remote sensing images have inter-class similarity and intra-class diversity, we introduced a classification loss, a semantic regression module, and a class-prototype loss to constrain the generator. The classification loss was used to preserve inter-class discrimination. We used the semantic regression module to ensure that the image features generated by the generator can represent the semantic features. We introduced the class-prototype loss to ensure the intra-class diversity of the synthesized image features and avoid generating overly homogeneous image features. We studied the effect of different semantic embeddings on zero-shot RSSC. We performed experiments on three datasets, and the experimental results show that our method performs better than the state-of-the-art methods in zero-shot RSSC in most cases.
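The overall recipe (synthesize features for unseen classes from their semantic embeddings, then train an ordinary classifier on the synthetic samples) can be sketched in miniature. Plain Gaussian perturbation stands in for the learned conditional WGAN here, and every name is hypothetical; this is a toy illustration, not the paper's model.

```python
import random

def synthesize_features(semantic_embeddings, n_per_class=50, noise_std=0.1, seed=0):
    # Toy stand-in for a conditional generator: each synthetic feature is the
    # class semantic embedding perturbed by Gaussian noise.  A real model would
    # learn this mapping adversarially instead.
    rng = random.Random(seed)
    samples = []
    for label, emb in semantic_embeddings.items():
        for _ in range(n_per_class):
            samples.append(([x + rng.gauss(0.0, noise_std) for x in emb], label))
    return samples

def nearest_centroid_classify(feature, semantic_embeddings):
    # Once synthetic samples exist, zero-shot classification reduces to a
    # supervised problem; a nearest-centroid rule stands in for the classifier.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(semantic_embeddings,
               key=lambda cls: sq_dist(feature, semantic_embeddings[cls]))
```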
26

Williams, Edwin. "Generative Semantics, Generative Morphosyntax". Syntax 16, no. 1 (September 12, 2012): 77–108. http://dx.doi.org/10.1111/j.1467-9612.2012.00173.x.

Full text
27

Ni, Huan, Jocelyn Chanussot, Xiaonan Niu, Hong Tang and Haiyan Guan. "Reverse Difference Network for Highlighting Small Objects in Aerial Images". ISPRS International Journal of Geo-Information 11, no. 9 (September 18, 2022): 494. http://dx.doi.org/10.3390/ijgi11090494.

Full text
Abstract
The large-scale variation issue in high-resolution aerial images significantly lowers the accuracy of segmenting small objects. For a deep-learning-based semantic segmentation model, the main reason is that the deeper layers generate high-level semantics over considerably large receptive fields, thus improving the accuracy for large objects but ignoring small objects. Although the low-level features extracted by shallow layers contain small-object information, large-object information has predominant effects. When a model is trained on low-level features, the large objects push the small objects aside. This observation motivates us to propose a novel reverse difference mechanism (RDM). The RDM eliminates the predominant effects of large objects and highlights small objects from low-level features. Based on the RDM, a novel semantic segmentation method called the reverse difference network (RDNet) is designed. In the RDNet, a detailed stream is proposed to produce small-object semantics by enhancing the output of the RDM. A contextual stream for generating high-level semantics is designed by fully accumulating contextual information to ensure the accuracy of the segmentation of large objects. Both high-level and small-object semantics are concatenated when the RDNet performs predictions. Thus, both small- and large-object information is depicted well. Two semantic segmentation benchmarks containing vital small objects are used to fully evaluate the performance of the RDNet. Compared with existing methods that exhibit good performance in segmenting small objects, the RDNet has lower computational complexity and achieves 3.9–18.9% higher accuracy in segmenting small objects.
28

Miao, Y., X. Tang and Z. Wang. "AN AUTOMATIC SEMANTIC MAP GENERATION METHOD USING TRAJECTORY DATA". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B4-2020 (August 24, 2020): 63–67. http://dx.doi.org/10.5194/isprs-archives-xliii-b4-2020-63-2020.

Full text
Abstract
The geometric information of terrain features can easily be obtained in a timely manner using advanced surveying and mapping methods, but their semantic information cannot be obtained with low latency because of the rapid development of cities. The popularity of GPS-enabled devices and technologies provides a large amount of personal location information, and the regularity of human behavior makes it possible to extract personal or group behavior patterns. These conditions make it possible to extract and identify human behavior patterns from trajectory data. In this paper, we present an automatic semantic map generation method that extracts semantic patterns and uses them to tag spatial objects in an unknown region based on known semantic patterns. We study the regularity of trajectory data and build semantic patterns based on the regularity of human behavior. Most importantly, we use known semantic patterns to identify the semantics of stay points in the unknown region, thereby realizing semantic recognition of the stay points. Experimental results show the effectiveness of the proposed method.
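The stay points this abstract builds on are commonly detected with a simple heuristic: a run of consecutive trajectory points that stays near its first point for long enough collapses into one stay point. A hedged sketch, with thresholds and the centroid rule chosen for illustration rather than taken from the paper:

```python
import math

def stay_points(track, dist_thresh=0.2, time_thresh=20):
    # track: list of (x, y, t) tuples with t increasing.  Emits the centroid
    # of each maximal run of points that stays within dist_thresh of the run's
    # first point for at least time_thresh.  Thresholds are illustrative.
    found = []
    i, n = 0, len(track)
    while i < n:
        j = i + 1
        while j < n and math.dist(track[j][:2], track[i][:2]) <= dist_thresh:
            j += 1
        if track[j - 1][2] - track[i][2] >= time_thresh:
            xs = [p[0] for p in track[i:j]]
            ys = [p[1] for p in track[i:j]]
            found.append((sum(xs) / len(xs), sum(ys) / len(ys)))
        i = j
    return found
```

Each detected stay point would then be matched against known semantic patterns (visit times, dwell durations, and so on) to label the spatial object it falls on.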
29

Paul, William, I.-Jeng Wang, Fady Alajaji and Philippe Burlina. "Unsupervised Discovery, Control, and Disentanglement of Semantic Attributes With Applications to Anomaly Detection". Neural Computation 33, no. 3 (March 2021): 802–26. http://dx.doi.org/10.1162/neco_a_01359.

Full text
Abstract
Our work focuses on unsupervised and generative methods that address the following goals: (1) learning unsupervised generative representations that discover latent factors controlling image semantic attributes, (2) studying how this ability to control attributes formally relates to the issue of latent factor disentanglement, clarifying related but dissimilar concepts that had been confounded in the past, and (3) developing anomaly detection methods that leverage representations learned in the first goal. For goal 1, we propose a network architecture that exploits the combination of multiscale generative models with mutual information (MI) maximization. For goal 2, we derive an analytical result, lemma 1, that brings clarity to two related but distinct concepts: the ability of generative networks to control semantic attributes of images they generate, resulting from MI maximization, and the ability to disentangle latent space representations, obtained via total correlation minimization. More specifically, we demonstrate that maximizing semantic attribute control encourages disentanglement of latent factors. Using lemma 1 and adopting MI in our loss function, we then show empirically that for image generation tasks, the proposed approach exhibits superior performance as measured in the quality and disentanglement of the generated images when compared to other state-of-the-art methods, with quality assessed via the Fréchet inception distance (FID) and disentanglement via the mutual information gap. For goal 3, we design several systems for anomaly detection exploiting representations learned in goal 1 and demonstrate their performance benefits when compared to state-of-the-art generative and discriminative algorithms. Our contributions in representation learning have potential applications in addressing other important problems in computer vision, such as bias and privacy in AI.
30

Ma, Ting-Huai, Xin Yu and Huan Rong. "A comprehensive transfer news headline generation method based on semantic prototype transduction". Mathematical Biosciences and Engineering 20, no. 1 (2022): 1195–228. http://dx.doi.org/10.3934/mbe.2023055.

Full text
Abstract
Most current deep learning-based news headline generation models only target domain-specific news data. When a new news domain appears, it is usually costly to obtain a large amount of data with reference truth in the new domain for model training, so text generation models trained by traditional supervised approaches often do not generalize well to the new domain. Inspired by the idea of transfer learning, this paper designs a cross-domain transfer text generation method based on domain data distribution alignment, intermediate domain redistribution, and zero-shot-learning semantic prototype transduction, focusing on the case where there is no reference truth in the target domain. Eventually, the model can be guided by the most relevant source domain data to generate headlines for target-domain news text through the semantic correlation between source and target domain data, even without any reference-truth news headlines in the target domain, which improves the usability of the text generation model in real scenarios. The experimental results show that the proposed transfer text generation method has a good domain transfer effect and outperforms other existing transfer text generation methods on various text generation evaluation indexes, proving the method's effectiveness.
31

HUANG, YAN and LIANZHEN HE. "Automatic generation of short answer questions for reading comprehension assessment". Natural Language Engineering 22, no. 3 (January 13, 2016): 457–89. http://dx.doi.org/10.1017/s1351324915000455.

Full text
Abstract
Writing items for reading comprehension assessment is time-consuming. Automating part of the process can help test designers to develop assessments more efficiently and consistently. This paper presents an approach to automatically generating short answer questions for reading comprehension assessment. Our major contribution is to introduce Lexical Functional Grammar (LFG) as the linguistic framework for question generation, which enables systematic utilization of semantic and syntactic information. The approach can efficiently generate questions of better quality than previous high-performing question generation systems, and uses paraphrasing and sentence selection to improve the cognitive complexity and effectiveness of questions.
32

Wang, Shaonan, Jiajun Zhang, Nan Lin and Chengqing Zong. "Probing Brain Activation Patterns by Dissociating Semantics and Syntax in Sentences". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9201–8. http://dx.doi.org/10.1609/aaai.v34i05.6457.

Full text
Abstract
The relation between semantics and syntax, and where they are represented at the neural level, has been extensively debated in the neurosciences. Existing methods use manually designed stimuli to distinguish semantic and syntactic information in a sentence, which may not generalize beyond the experimental setting. This paper proposes an alternative framework to study the brain representation of semantics and syntax. Specifically, we embed the highly controlled stimuli as objective functions in learning sentence representations and propose a disentangled feature representation model (DFRM) to extract semantic and syntactic information from sentences. This model generates one semantic and one syntactic vector for each sentence. We then associate these disentangled feature vectors with brain imaging data to explore the brain representation of semantics and syntax. Results show that the semantic feature is represented more robustly than the syntactic feature across the brain, including the default-mode, frontoparietal, and visual networks. The brain representations of semantics and syntax largely overlap, but there are brain regions sensitive to only one of them. For instance, several frontal and temporal regions are specific to the semantic feature; parts of the right superior frontal and right inferior parietal gyrus are specific to the syntactic feature.
33

Yu, Chong. "Attention Based Data Hiding with Generative Adversarial Networks". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 1120–28. http://dx.doi.org/10.1609/aaai.v34i01.5463.

Full text
Abstract
Recently, the generative adversarial network has been a hotspot in research and industry, and data generation is its most common application. In this paper, we propose a novel end-to-end framework that extends its application to the data hiding area. The discriminative model simulates the detection process, which can help us understand the sensitivity of the cover image to semantic changes. The generative model generates the target image, which is aligned with the original cover image. An attention model is introduced to generate the attention mask; this mask helps to generate a better target image without perturbation of the spotlighted regions. The introduction of a cycle discriminative model and an inconsistent loss helps to enhance the quality of the generated target image in the iterative training process. The training dataset mixes intact images and attacked images, and this mixed training process further improves robustness. Through qualitative and quantitative experiments and analysis, this novel framework shows compelling performance and advantages over the current state-of-the-art methods in data hiding applications.
34

Singh, Rajdeep. "Derivational Grammar Model and Basket Verb: A Novel Approach to the Inflectional Phrase in the Generative Grammar and Cognitive Processing". English Linguistics Research 7, no. 2 (June 10, 2018): 9. http://dx.doi.org/10.5430/elr.v7n2p9.

Full text
Abstract
Generative grammar was a true revolution in linguistics. However, to describe language behavior in its semantic essence and universal aspects, generative grammar needs a much richer semantic basis. In this paper, we take a novel morpho-syntactic approach to the inflectional phrase to account for the very diverse inflectional phrase qualities found in different languages. Some languages show a very different surface verbal inflection, providing evidence of different mental processing at the semantic level. In fact, the inflectional phrase is a strong representative of the mental and semantic processing layers of the mind. Therefore, in this study, we analyze the inflectional phrase with a novel approach that takes this rich verbal inflectional configuration into account and describes why some languages behave differently in their spatial and temporal aspects. We analyze and discuss the verbal inflectional structure of several languages, including German, Swahili, Persian, English, and Indonesian, and the result is a semantic model which provides much richer insight into the semantics/syntax interplay.
35

Miyata, Takashi and Yuji Matsumoto. "Natural Language Generation for Legal Expert System and Visualization of Generation Process". Journal of Advanced Computational Intelligence and Intelligent Informatics 2, no. 1 (February 20, 1998): 26–33. http://dx.doi.org/10.20965/jaciii.1998.p0026.

Full text
Abstract
An HPSG-based grammar and a sentence generation system for a small subset of Japanese in legal expert domains are constructed. The system adopts its own general semantic system into which a domain-specific logical form is converted. This separation of domain-specific and linguistic semantics gives flexibility to both task processing and sentence generation. We also propose a visualization system which shows the generation process in tabular form and operates as a graphical user interface for grammar debugging.
36

Usmani, A. U., M. Jadidi and G. Sohn. "TOWARDS THE AUTOMATIC ONTOLOGY GENERATION AND ALIGNMENT OF BIM AND GIS DATA FORMATS". ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences VIII-4/W2-2021 (October 7, 2021): 183–88. http://dx.doi.org/10.5194/isprs-annals-viii-4-w2-2021-183-2021.

Full text
Abstract
Establishing semantic interoperability between BIM and GIS is vital for geospatial information exchange. The Semantic Web has a natural ability to provide seamless semantic representation and integration across heterogeneous domains like BIM and GIS by employing ontologies. Ontology models can be defined (or generated) from domain data representations and further aligned with other ontologies through the semantic similarity of their entities, introducing cross-domain ontologies to achieve interoperability of heterogeneous information. However, due to extensive semantic features and complex alignment (mapping) relations between BIM and GIS data formats, many approaches fall short of generating semantically rich ontologies and performing effective alignment to address geospatial interoperability. This study highlights the fundamental perspectives to be addressed for BIM and GIS interoperability and proposes a comprehensive conceptual framework for automatic ontology generation followed by ontology alignment of open standards for BIM and GIS data formats. It presents an approach based on transformation patterns to automatically generate ontology models, together with semantic-based and structure-based alignment techniques to form a cross-domain ontology. The proposed two-phase framework provides ontology model generation for input XML schemas (i.e., of the IFC and CityGML formats) and illustrates an alignment technique to develop a cross-domain ontology. The study concludes that the anticipated cross-domain ontology can provide future perspectives for knowledge-discovery applications and seamless information exchange between BIM and GIS.
37

Zhang, Zicheng, Yinglu Liu, Congying Han, Hailin Shi, Tiande Guo and Bowen Zhou. "PetsGAN: Rethinking Priors for Single Image Generation". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3408–16. http://dx.doi.org/10.1609/aaai.v36i3.20251.

Full text
Abstract
Single image generation (SIG), described as generating diverse samples that have the same visual content as a given natural image, was first introduced by SinGAN, which builds a pyramid of GANs to progressively learn the internal patch distribution of the single image. It shows excellent performance in a wide range of image manipulation tasks. However, SinGAN has some limitations. First, due to the lack of semantic information, SinGAN cannot handle object images as well as it does scene and texture images. Second, the independent progressive training scheme is time-consuming and prone to artifact accumulation. To tackle these problems, in this paper, we dig into the single image generation problem and improve SinGAN by fully utilizing internal and external priors. The main contributions of this paper include: 1) We interpret single image generation from the perspective of the general generative task, that is, learning a diverse distribution from the Dirac distribution composed of a single image. To solve this non-trivial problem, we construct a regularized latent variable model to formulate SIG. To the best of our knowledge, this is the first clear formulation and optimization goal for SIG, and all existing methods for SIG can be regarded as special cases of this model. 2) We design a novel Prior-based end-to-end training GAN (PetsGAN), which is infused with internal and external priors to overcome the problems of SinGAN. On the one hand, we employ a pre-trained GAN model to inject an external prior for image generation, which can alleviate the lack of semantic information and generate natural, reasonable, and diverse samples, even for object images. On the other hand, we fully utilize the internal prior through a differential Patch Matching module and an effective reconstruction network to generate consistent and realistic texture. 3) We conduct abundant qualitative and quantitative experiments on three datasets. The experimental results show that our method surpasses other methods in generated image quality, diversity, and training speed. Moreover, we apply our method to other image manipulation tasks (e.g., style transfer, harmonization), and the results further prove the effectiveness and efficiency of our method.
38

Liu, Xiaojian, Qian Lei and Kehong Liu. "A Graph-Based Feature Generation Approach in Android Malware Detection with Machine Learning Techniques". Mathematical Problems in Engineering 2020 (May 27, 2020): 1–15. http://dx.doi.org/10.1155/2020/3842094.

Full text
Abstract
The explosive spread of Android malware causes serious concern for Android application security. One solution to detecting malicious payloads sneaking into an application is to treat detection as a binary classification problem, which can be effectively tackled with traditional machine learning techniques. The key factors in detecting Android malware with machine learning techniques are feature selection and generation. Most existing approaches select and generate features without fully examining the structures of programs, so the important semantic information associated with these features is lost, consequently resulting in low detection accuracy. To address this issue, we propose a new feature generation approach for Android applications, which takes components and program structures into consideration and extracts features in a graph-based and semantics-rich style. This approach highlights two major distinguishing aspects: context-based feature selection and graph-based feature generation. We abstract an Android application as a collection of reduced iCFGs (interprocedural control flow graphs) and extract original features from these graphs. Combining the original features and their contexts, we generate new features which hold richer semantic information than the original ones. By embedding the features into a feature vector space, we can use machine learning techniques to train a malware detector. The experimental results show that this approach achieves an accuracy rate of 95.4% and a recall rate of 96.5%, which proves the effectiveness and advantages of our approach.
39

Urešová, Zdeňka, Eva Fučíková, Eva Hajičová and Jan Hajič. "Meaning and Semantic Roles in CzEngClass Lexicon". Journal of Linguistics/Jazykovedný casopis 70, no. 2 (December 1, 2019): 403–11. http://dx.doi.org/10.2478/jazcas-2019-0069.

Full text
Abstract
This paper focuses on Semantic Roles, an important component of studies in lexical semantics, as they are captured in a bilingual (Czech-English) synonym lexicon called CzEngClass. This lexicon builds upon the existing valency lexicons included within the framework of the annotation of the various Prague Dependency Treebanks. The present analysis of Semantic Roles is approached from the Functional Generative Description point of view and supported by textual evidence taken specifically from the Prague Czech-English Dependency Treebank.
40

Ma, Xiyao, Qile Zhu, Yanlin Zhou and Xiaolin Li. "Improving Question Generation with Sentence-Level Semantic Matching and Answer Position Inferring". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 8464–71. http://dx.doi.org/10.1609/aaai.v34i05.6366.

Full text
Abstract
Taking an answer and its context as input, sequence-to-sequence models have made considerable progress on question generation. However, we observe that these approaches often generate wrong question words or keywords and copy answer-irrelevant words from the input. We believe that the lack of global question semantics and the insufficient exploitation of answer position-awareness are the key root causes. In this paper, we propose a neural question generation model with two general modules: sentence-level semantic matching and answer position inferring. Further, we enhance the initial state of the decoder by leveraging an answer-aware gated fusion mechanism. Experimental results demonstrate that our model outperforms the state-of-the-art (SOTA) models on the SQuAD and MARCO datasets. Owing to its generality, our work also improves existing models significantly.
41

Zhang, Susu, Jiancheng Ni, Lijun Hou, Zili Zhou, Jie Hou and Feng Gao. "Global-Affine and Local-Specific Generative Adversarial Network for semantic-guided image generation". Mathematical Foundations of Computing 4, no. 3 (2021): 145. http://dx.doi.org/10.3934/mfc.2021009.

Full text
Abstract
The recent progress in learning image feature representations has opened the way for tasks such as label-to-image or text-to-image synthesis. However, one particular challenge widely observed in existing methods is the difficulty of synthesizing fine-grained textures and small-scale instances. In this paper, we propose a novel Global-Affine and Local-Specific Generative Adversarial Network (GALS-GAN) to explicitly construct global semantic layouts and learn distinct instance-level features. To achieve this, we adopt a graph convolutional network to calculate the instance locations and spatial relationships from scene graphs, which allows our model to obtain high-fidelity semantic layouts. Also, a local-specific generator, in which we introduce a feature filtering mechanism to separately learn semantic maps for different categories, is utilized to disentangle and generate specific visual features. Moreover, we apply a weight map predictor to better combine the global and local pathways, considering the high complementarity between these two generation sub-networks. Extensive experiments on the COCO-Stuff and Visual Genome datasets demonstrate the superior generation performance of our model against previous methods; our approach is more capable of capturing photo-realistic local characteristics and rendering small-sized entities with more detail.
42

Park, Sunghyun, Seung-won Hwang, Fuxiang Chen, Jaegul Choo, Jung-Woo Ha, Sunghun Kim and Jinyeong Yim. "Paraphrase Diversification Using Counterfactual Debiasing". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6883–91. http://dx.doi.org/10.1609/aaai.v33i01.33016883.

Full text
Abstract
We examine the problem of generating a set of diverse paraphrase sentences while (1) not compromising the meaning of the original sentence, and (2) imposing diversity in various semantic aspects, such as lexical or syntactic structure. Existing work on paraphrase generation has focused more on the former, and the latter was trained as a fixed style transfer, such as transferring from positive to negative sentiments, even at the cost of losing semantics. In this work, we consider style transfer as a means of imposing diversity, with a paraphrasing correctness constraint that the target sentence must remain a paraphrase of the original sentence. Our goal is to maximize the diversity of a set of k generated paraphrases, denoted as the diversified paraphrase (DP) problem. Our key contribution is deciding the style guidance at generation time in the direction that increases the diversity of the output with respect to those generated previously. As pre-materializing training data for all style decisions is impractical, we train with biased data but with debiasing guidance. Compared to state-of-the-art methods, our proposed model can generate more diverse and yet semantically consistent paraphrase sentences. That is, our model, trained with the MSCOCO dataset, achieves the highest embedding scores, .94/.95/.86, similar to state-of-the-art results, but with a lower (more diverse) mBLEU score by 8.73%.
43

Zhao, Shizhen, Changxin Gao, Yuanjie Shao, Lerenhan Li, Changqian Yu, Zhong Ji and Nong Sang. "GTNet: Generative Transfer Network for Zero-Shot Object Detection". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12967–74. http://dx.doi.org/10.1609/aaai.v34i07.6996.

Full text
Abstract
We propose a Generative Transfer Network (GTNet) for zero-shot object detection (ZSD). GTNet consists of an Object Detection Module and a Knowledge Transfer Module. The Object Detection Module can learn large-scale seen domain knowledge. The Knowledge Transfer Module leverages a feature synthesizer to generate unseen class features, which are applied to train a new classification layer for the Object Detection Module. In order to synthesize features for each unseen class with both the intra-class variance and the IoU variance, we design an IoU-Aware Generative Adversarial Network (IoUGAN) as the feature synthesizer, which can be easily integrated into GTNet. Specifically, IoUGAN consists of three unit models: Class Feature Generating Unit (CFU), Foreground Feature Generating Unit (FFU), and Background Feature Generating Unit (BFU). CFU generates unseen features with the intra-class variance conditioned on the class semantic embeddings. FFU and BFU add the IoU variance to the results of CFU, yielding class-specific foreground and background features, respectively. We evaluate our method on three public datasets and the results demonstrate that our method performs favorably against the state-of-the-art ZSD approaches.
44

Hasselkus, Amy, Scott S. Rubin, and Marilyn Newhoff. "Effect of Generating a Semantic Prime". American Journal of Speech-Language Pathology 4, no. 4 (November 1995): 148–51. http://dx.doi.org/10.1044/1058-0360.0404.148.

Full text
Abstract
Studies of both semantic priming and the generation effect (GE) have implicated spreading activation in semantic memory and have provided evidence for a semantic memory access disorder in patients with dementia. Fifteen subjects, comprising young, elderly, and demented participants, completed a semantic priming/GE task to determine whether the act of generating a semantic prime enhanced activation and reduced reaction times to related items. Reaction times were recorded for semantically related and unrelated targets presented after either read or generated word-pair cues. The results suggested that generating a prime provided little benefit for young subjects or subjects with dementia; elderly subjects benefited more from generating information than from reading it. Implications for theories of dementia and normal aging are discussed.
45

Corrochano, Javier, Juan M. Alonso-Weber, María Paz Sesmero, and Araceli Sanchis. "Lane following Learning Based on Semantic Segmentation with Chroma Key and Image Superposition". Electronics 10, no. 24 (December 14, 2021): 3113. http://dx.doi.org/10.3390/electronics10243113.

Full text
Abstract
There are various techniques to approach learning in autonomous driving; however, all of them suffer from some problems. In the case of imitation learning based on artificial neural networks, the system must learn to correctly identify the elements of the environment. In some cases, tagging the images with the proper semantics takes considerable effort. This matters because very varied training scenarios are needed to obtain an acceptable generalization capacity. In the present work, we propose a technique for automated semantic labeling. It is based on several learning phases that use image superposition, combining chroma-key scenarios with real indoor scenarios. This allows the generation of augmented datasets that facilitate the learning process. Further improvements obtained by applying noise techniques are also studied. For validation, a small-scale car model learns to drive automatically on a reduced circuit. A comparison with models that do not rely on semantic segmentation is also performed. The main contribution of our proposal is the possibility of generating datasets for real indoor scenarios with automatic semantic segmentation, without the need for endless human labeling tasks.
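The chroma-key step can be sketched as follows: pixels near the key colour become background in a free segmentation mask, and that same mask drives the superposition of the foreground object onto a real background image. The tolerance value, colours, and toy 2x2 images below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def chroma_key_mask(image, key=(0, 255, 0), tol=60):
    """Pixels within `tol` (L1 distance) of the chroma colour are labelled
    background (0); everything else is foreground (1) - a free semantic mask."""
    diff = np.abs(image.astype(int) - np.array(key)).sum(axis=-1)
    return (diff > tol).astype(np.uint8)

def superpose(fg, bg, mask):
    """Paste the masked foreground object onto a real background image."""
    return np.where(mask[..., None] == 1, fg, bg)

# toy 2x2 images: three green (chroma) pixels and one red "object" pixel
fg = np.array([[[0, 255, 0], [200, 0, 0]],
               [[0, 255, 0], [0, 255, 0]]], dtype=np.uint8)
bg = np.full((2, 2, 3), 50, dtype=np.uint8)   # plain grey background
mask = chroma_key_mask(fg)
out = superpose(fg, bg, mask)
print(mask.tolist())   # [[0, 1], [0, 0]]
```

Because the mask comes directly from the key colour, every composited training image arrives with its segmentation labels already attached, which is the point of the automated labeling scheme.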
46

Salhi, Hammouda. "Investigating the Complementary Polysemy and the Arabic Translations of the Noun Destruction in EAPCOUNT". Terminologie et linguistique 58, no. 1 (March 12, 2014): 227–46. http://dx.doi.org/10.7202/1023818ar.

Full text
Abstract
This article investigates a topic at the intersection of translation studies, lexical semantics and corpus linguistics. Its general aim is to show how translation studies can benefit from both lexical semantics and corpus linguistics. The specific objective is to capture the semantic and pragmatic behavior of the noun destruction and its different translations into Arabic. The data are obtained from an English-Arabic parallel corpus made from UN texts and their translations (EAPCOUNT). The analysis of the data shows the polysemy of the word destruction as a number of semantic and pragmatic alternations can be captured. These findings are discussed in the frame of the Generative Lexicon (GL) theory developed by James Pustejovsky. The paper concludes with some concrete suggestions on how to enhance the relationship between linguists and translators and their mutual cooperation.
47

Hua, Tianyu, Hongdong Zheng, Yalong Bai, Wei Zhang, Xiao-Ping Zhang, and Tao Mei. "Exploiting Relationship for Complex-scene Image Generation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1584–92. http://dx.doi.org/10.1609/aaai.v35i2.16250.

Full text
Abstract
The significant progress on Generative Adversarial Networks (GANs) has facilitated realistic single-object image generation based on language input. However, complex-scene generation (with various interactions among multiple objects) still suffers from messy layouts and object distortions, due to the diverse configurations of layouts and appearances. Prior methods are mostly object-driven and ignore the inter-relations among objects, which play a significant role in complex-scene images. This work explores relationship-aware complex-scene image generation, where multiple objects are inter-related as a scene graph. With the help of relationships, we propose three major updates to the generation framework. First, reasonable spatial layouts are inferred by jointly considering the semantics and relationships among objects; compared to standard location regression, we show that relative scales and distances serve as a more reliable regression target. Second, since the relations between objects significantly influence an object's appearance, we design a relation-guided generator to produce objects that reflect their relationships. Third, a novel scene-graph discriminator is proposed to guarantee consistency between the generated image and the input scene graph. Our method tends to synthesize plausible layouts and objects, respecting the interplay of multiple objects in an image. Experimental results on the Visual Genome and HICO-DET datasets show that our proposed method significantly outperforms prior art in terms of IS and FID metrics. Based on our user study and visual inspection, our method is more effective at generating logical layouts and appearances for complex scenes.
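To illustrate why relative scales and distances are a convenient regression target, the toy function below derives an object's box from a subject's box plus relation-conditioned relative parameters: the same (scale, offset) pair stays meaningful wherever the subject happens to sit in the image. This is a hypothetical simplification, not the paper's layout module.

```python
def place_object(subj_box, rel_scale, rel_offset):
    """Derive an object's (x, y, w, h) box from a subject box using a relative
    scale and a centre offset expressed in subject-box units."""
    x, y, w, h = subj_box
    cx, cy = x + w / 2, y + h / 2           # subject centre
    ow, oh = w * rel_scale, h * rel_scale   # relative scale -> object size
    ocx = cx + rel_offset[0] * w            # relative offset -> object centre
    ocy = cy + rel_offset[1] * h
    return (ocx - ow / 2, ocy - oh / 2, ow, oh)

# e.g. "person riding horse": the horse sits below and is larger than the person
print(place_object((40, 20, 20, 40), rel_scale=1.5, rel_offset=(0.0, 1.0)))
# → (35.0, 50.0, 30.0, 60.0)
```

Regressing (rel_scale, rel_offset) per relation, instead of absolute coordinates, keeps the target distribution compact across images of different compositions.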
48

Volkov, Valery V. "The term and concept of ‘media’: The aspects of hermeneutics of research". Izvestiya of Saratov University. New Series. Series: Philology. Journalism 21, no. 1 (March 25, 2021): 20–24. http://dx.doi.org/10.18500/1817-7115-2021-21-1-20-24.

Full text
Abstract
The article presents the results of modeling the semantic processes taking place during the hermeneutic interpretation of the Russian term and concept of ‘media’. The paper examines the synonymous pair of the univerb ‘медиа’ (‘media’) and the generating word combination ‘средства массовой информации’ (‘means of mass information’). As a result of semantic condensation, the components ‘information’ and ‘masses’ lose their original semantics, and the ‘intermediary’ component acts as the dominant one.
49

Zhao, Huan, Jie Cao, Mingquan Xu, and Jian Lu. "Variational neural decoder for abstractive text summarization". Computer Science and Information Systems 17, no. 2 (2020): 537–52. http://dx.doi.org/10.2298/csis200131012z.

Full text
Abstract
In the conventional sequence-to-sequence (seq2seq) model for abstractive summarization, the internal transformation structure of recurrent neural networks (RNNs) is completely determined. Therefore, the learned semantic information is far from enough to represent all semantic details and context dependencies, resulting in redundant summaries and poor consistency. In this paper, we propose a variational neural decoder text summarization model (VND). The model introduces a series of latent variables by combining a variational RNN and a variational autoencoder, which are used to capture complex semantic representations at each decoding step. It includes a standard RNN layer and a variational RNN layer [5]. These two network layers respectively generate a deterministic hidden state and a random hidden state. We use these two RNN layers to establish the dependence between latent variables at adjacent time steps. In this way, the model structure can better capture the complex semantics and the strong dependencies between adjacent time steps when outputting the summary, thereby improving the quality of the generated summaries. The experimental results show that, on the LCSTS and English Gigaword text summarization datasets, our model achieves a significant improvement over the baseline model.
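A minimal sketch, under assumed shapes and untrained weights, of a decoder step that pairs a deterministic RNN hidden state with a stochastic latent variable sampled via the reparameterisation trick, in the spirit of the standard-plus-variational RNN layers described above. This is not the authors' VND implementation; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def vrnn_step(x, h, params):
    """One decoding step: a deterministic tanh-RNN update of h, then a
    stochastic latent z = mu + sigma * eps conditioned on the new state."""
    Wx, Wh, Wmu, Wsig = params
    h_new = np.tanh(x @ Wx + h @ Wh)          # deterministic RNN layer
    mu = h_new @ Wmu                          # latent mean
    log_sigma = h_new @ Wsig                  # latent log-std
    z = mu + np.exp(log_sigma) * rng.standard_normal(mu.shape)  # sample z
    return h_new, z

dx, dh, dz = 4, 8, 3                          # assumed input/hidden/latent sizes
params = [rng.standard_normal(s) * 0.1
          for s in [(dx, dh), (dh, dh), (dh, dz), (dh, dz)]]
h = np.zeros(dh)
for t in range(3):                            # unroll a few decoding steps
    h, z = vrnn_step(rng.standard_normal(dx), h, params)
print(h.shape, z.shape)                       # (8,) (3,)
```

Because z at each step is conditioned on a hidden state that already saw the previous z (through h), the latents at adjacent time steps become dependent, which is the property the model exploits.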
50

Lee, Sujin, and Incheol Kim. "Multimodal Feature Learning for Video Captioning". Mathematical Problems in Engineering 2018 (2018): 1–8. http://dx.doi.org/10.1155/2018/3125879.

Full text
Abstract
Video captioning refers to the task of generating a natural language sentence that explains the content of the input video clips. This study proposes a deep neural network model for effective video captioning. Apart from visual features, the proposed model additionally learns semantic features that describe the video content effectively. In our model, visual features of the input video are extracted using convolutional neural networks such as C3D and ResNet, while semantic features are obtained using recurrent neural networks such as LSTM. In addition, our model includes an attention-based caption generation network to generate correct natural language captions based on the multimodal video feature sequences. Various experiments, conducted with two large benchmark datasets, Microsoft Video Description (MSVD) and Microsoft Research Video-to-Text (MSR-VTT), demonstrate the performance of the proposed model.
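The attention-based caption generation step can be sketched as dot-product attention over per-frame features: at each decoding step the decoder state weights the frames and receives a context vector for the next word. The feature dimensions and the C3D/ResNet labels below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def attend(query, frame_feats):
    """Dot-product attention: softmax over frame/state similarities, then a
    weighted sum of the frame features as the context vector."""
    scores = frame_feats @ query
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights, weights @ frame_feats

rng = np.random.default_rng(2)
frames = rng.standard_normal((10, 16))   # e.g. 10 frames of C3D/ResNet features
state = rng.standard_normal(16)          # decoder LSTM state at one step
w, ctx = attend(state, frames)
print(round(w.sum(), 6), ctx.shape)      # 1.0 (16,)
```

Running this once per decoding step lets the caption generator focus on different frames for different words, which is what "attention-based" buys over mean-pooling the video.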