Academic literature on the topic 'Sentences'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sentences.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Sentences"

1

Lewis, Edith P. "A Sentence of Sentences." Image: the Journal of Nursing Scholarship 18, no. 1 (March 1986): 24. http://dx.doi.org/10.1111/j.1547-5069.1986.tb00536.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yin, Wenpeng, Hinrich Schütze, Bing Xiang, and Bowen Zhou. "ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs." Transactions of the Association for Computational Linguistics 4 (December 2016): 259–72. http://dx.doi.org/10.1162/tacl_a_00097.

Full text
Abstract:
How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS), paraphrase identification (PI) and textual entailment (TE). Most prior work (i) deals with one individual task by fine-tuning a specific system; (ii) models each sentence’s representation separately, rarely considering the impact of the other sentence; or (iii) relies fully on manually designed, task-specific linguistic features. This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences. We make three contributions. (i) The ABCNN can be applied to a wide variety of tasks that require modeling of sentence pairs. (ii) We propose three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes into consideration its counterpart. These interdependent sentence pair representations are more powerful than isolated sentence representations. (iii) ABCNNs achieve state-of-the-art performance on AS, PI and TE tasks. We release code at: https://github.com/yinwenpeng/Answer_Selection .
APA, Harvard, Vancouver, ISO, and other styles
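The attention scheme summarized in the abstract above can be illustrated with a small sketch. The NumPy fragment below is only an illustration of the general idea, not the authors' released code: it builds an attention matrix whose entry (i, j) scores how well word position i of one sentence's feature map matches position j of the other, using a 1 / (1 + Euclidean distance) match score, which is what lets each sentence's representation take its counterpart into account.

```python
import numpy as np

def attention_matrix(F0, F1):
    """Pairwise match scores between the columns (word positions) of two
    sentence feature maps F0 (d x s0) and F1 (d x s1).

    Uses 1 / (1 + Euclidean distance) as the match score; the resulting
    matrix can then re-weight each sentence's representation by its
    counterpart, as in attention-based sentence-pair models."""
    s0, s1 = F0.shape[1], F1.shape[1]
    A = np.zeros((s0, s1))
    for i in range(s0):
        for j in range(s1):
            A[i, j] = 1.0 / (1.0 + np.linalg.norm(F0[:, i] - F1[:, j]))
    return A

# Toy example: two "sentences" with random 4-dimensional word features.
rng = np.random.default_rng(0)
A = attention_matrix(rng.normal(size=(4, 5)), rng.normal(size=(4, 7)))
print(A.shape)  # (5, 7)
```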
3

Suweta, I. Made. "Balinese sentences." Linguistics and Culture Review 5, S4 (November 1, 2021): 509–21. http://dx.doi.org/10.21744/lingcure.v5ns4.1652.

Full text
Abstract:
The syntactic subsystem discusses the arrangement of words into larger units, which are called syntactic units, namely words, phrases, clauses, sentences, and discourses. This study discusses Balinese sentences, especially single sentences and compound sentences in Balinese. A single sentence is a sentence that has only one pattern (clause), which consists of a subject and a predicate. A compound sentence is a combination of two or more single sentences, so that the new sentence contains two or more clauses.
APA, Harvard, Vancouver, ISO, and other styles
4

Matthews, Nestor, and Folly Folivi. "Omit needless words: Sentence length perception." PLOS ONE 18, no. 2 (February 24, 2023): e0282146. http://dx.doi.org/10.1371/journal.pone.0282146.

Full text
Abstract:
Short sentences improve readability. Short sentences also promote social justice through accessibility and inclusiveness. Despite this, much remains unknown about sentence length perception—an important factor in producing readable writing. Accordingly, we conducted a psychophysical study using procedures from Signal Detection Theory to examine sentence length perception in naive adults. Participants viewed real-world full-page text samples and judged whether a bolded target sentence contained more or fewer than 17 words. The experiment yielded four findings. First, naïve adults perceived sentence length in real-world text samples quickly (median = 300–400 ms) and precisely (median = ~90% correct). Second, flipping real-world text samples upside-down generated no reaction-time cost and nearly no loss in the precision of sentence length perception. This differs from the large inversion effects that characterize other highly practiced, real-world perceptual tasks involving canonically oriented stimuli, most notably face perception and reading. Third, participants significantly underestimated the length of mirror-reversed sentences—but not upside-down, nor standard sentences. This finding parallels participants’ familiarity with commonly occurring left-justified right-ragged text, and suggests a novel demonstration of left-lateralized anchoring in scene syntax. Fourth, error patterns demonstrated that participants achieved their high speed, high precision sentence-length judgments by heuristically counting text lines, not by explicitly counting words. This suggests practical advice for writing instructors to offer students. When copy editing, students can quickly and precisely identify their long sentences via a line-counting heuristic, e.g., “a 17-word sentence spans about 1.5 text lines”. Students can subsequently improve a long sentence’s readability and inclusiveness by omitting needless words.
APA, Harvard, Vancouver, ISO, and other styles
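The line-counting heuristic quoted at the end of the abstract ("a 17-word sentence spans about 1.5 text lines") translates into a simple copy-editing check. The sketch below is our own illustration of that rule of thumb; the function name and the words-per-line constant derived from the quote are assumptions, not part of the study's materials.

```python
def flag_long_sentences(sentences, words_per_line=17 / 1.5, max_lines=1.5):
    """Flag sentences likely to exceed ~17 words.

    Mirrors the heuristic quoted in the abstract: a 17-word sentence spans
    roughly 1.5 text lines, i.e. about 11 words per full-width line, so a
    writer can skim line counts instead of counting words."""
    flagged = []
    for s in sentences:
        n_words = len(s.split())
        est_lines = n_words / words_per_line
        if est_lines > max_lines:  # longer than ~1.5 lines -> likely over 17 words
            flagged.append((s, n_words, round(est_lines, 1)))
    return flagged

print(flag_long_sentences([
    "Short sentences improve readability.",
    "This deliberately overlong example sentence keeps adding clauses, "
    "qualifications, and asides until it clearly spans well over one and "
    "a half printed lines of text on a typical page.",
]))
```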
5

MƏMMƏDOVA, G. T., and S. Ş. XANKİŞİYEVA. "MÜASİR İNGİLİS VƏ AZƏRBAYCAN DİLLƏRİNDƏ VOKATİV CÜMLƏLƏR." Actual Problems of study of humanities 1, no. 2024 (April 15, 2024): 85–89. http://dx.doi.org/10.62021/0026-0028.2024.1.085.

Full text
Abstract:
Vocative Sentences in Modern English and Azerbaijani Languages. Summary: Sentence-words and vocative sentences are kinds of sentences that are widely used in both the English and Azerbaijani languages. The term 'vocative sentence' comes from the Latin word “vocativus”. Such sentences express appeal, challenge and pampering. Many vocative words and sentences are used in both the English and Azerbaijani languages. There are some similarities and differences between vocative sentences and direct address, interjections and nominal sentences. This article deals with the allomorphic and isomorphic features of vocative sentences as compared with direct address, interjections and nominal sentences used in both languages. Key words: sentence-words, a vocative sentence, a direct address, interjection, a nominal sentence, appeal
APA, Harvard, Vancouver, ISO, and other styles
6

Sher, G. Y. "Did Tarski commit “Tarski's fallacy”?" Journal of Symbolic Logic 61, no. 2 (June 1996): 653–86. http://dx.doi.org/10.2307/2275681.

Full text
Abstract:
In his 1936 paper, On the Concept of Logical Consequence, Tarski introduced the celebrated definition of logical consequence: “The sentence σ follows logically from the sentences of the class Γ if and only if every model of the class Γ is also a model of the sentence σ.” [55, p. 417] This definition, Tarski said, is based on two very basic intuitions, “essential for the proper concept of consequence” [55, p. 415] and reflecting common linguistic usage: “Consider any class Γ of sentences and a sentence which follows from the sentences of this class. From an intuitive standpoint it can never happen that both the class Γ consists only of true sentences and the sentence σ is false. Moreover, … we are concerned here with the concept of logical, i.e., formal, consequence.” [55, p. 414] Tarski believed his definition of logical consequence captured the intuitive notion: “It seems to me that everyone who understands the content of the above definition must admit that it agrees quite well with common usage. … In particular, it can be proved, on the basis of this definition, that every consequence of true sentences must be true.” [55, p. 417] The formality of Tarskian consequences can also be proven. Tarski's definition of logical consequence had a key role in the development of the model-theoretic semantics of modern logic and has stayed at its center ever since.
APA, Harvard, Vancouver, ISO, and other styles
7

Saragih, Dhea, Tiara Indah Sari Simangunsong, Desy Natalia Simanjuntak, Roma Rezeki Nami Saragih, and Kristiani Siagian. "Analysis of Types of English Sentences in English Folklore “Jack and the Beanstalk” from American Literature Website." International Journal Corner of Educational Research 2, no. 2 (September 5, 2023): 57–63. http://dx.doi.org/10.54012/ijcer.v2i2.205.

Full text
Abstract:
The purpose of this research is to examine the types of English sentences based on function, consisting of declarative sentences, interrogative sentences, exclamatory sentences, and imperative sentences, found in the folklore “Jack and The Beanstalk”. The method in this research was descriptive qualitative, with the research source obtained from the American Literature website. The results of this research showed that in the folklore “Jack and The Beanstalk” there were four types of English sentences based on function. The type of sentence that appeared most frequently in the folklore “Jack and The Beanstalk” was the declarative sentence (46 times), followed by exclamatory sentences (13 times). Meanwhile, the types of sentences that appeared the least were interrogative sentences (2 times) and imperative sentences (2 times). In percentage form, 73% of the sentences in the folklore “Jack and The Beanstalk” were declarative sentences, 21% were exclamatory sentences, 3% were interrogative sentences, and 3% were imperative sentences.
APA, Harvard, Vancouver, ISO, and other styles
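The percentages reported above follow directly from the raw counts (46 + 13 + 2 + 2 = 63 sentences in total). A quick check in Python:

```python
counts = {"declarative": 46, "exclamatory": 13, "interrogative": 2, "imperative": 2}
total = sum(counts.values())                  # 63
for label, n in counts.items():
    print(f"{label}: {n} ({n / total:.0%})")  # 73%, 21%, 3%, 3%
```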
8

Tsukagoshi, Hayato, Ryohei Sasano, and Koichi Takeda. "Sentence Embeddings using Definition Sentences." Journal of Natural Language Processing 30, no. 1 (2023): 125–55. http://dx.doi.org/10.5715/jnlp.30.125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Abdullah, Moch Zawaruddin, and Chastine Fatichah. "Feature-based POS tagging and sentence relevance for news multi-document summarization in Bahasa Indonesia." Bulletin of Electrical Engineering and Informatics 11, no. 1 (February 1, 2022): 541–49. http://dx.doi.org/10.11591/eei.v11i1.3275.

Full text
Abstract:
Sentence extraction in news document summarization determines representative sentences primarily by employing the news feature score (NeFS). NeFS can select meaningful sentences by analyzing the frequency and similarity of phrases, but it neglects grammatical information and sentence relevance to the title. The presence of informative content is indicated by the grammatical information carried by part of speech (POS). POS tagging is the process of assigning a meaningful tag to each term based on its definition and the surrounding words. Sentence relevance to the title is intended to determine the sentence's level of connectivity to the title in terms of both word-based and meaning-based similarity, primarily for news documents in Bahasa Indonesia. In this study, we present an alternative sentence weighting method that incorporates news features, POS tagging, and sentence relevance to the title, and we use it to extract representative sentences. The experimental results on 11 groups of Indonesian news documents are compared with the news feature score with grammatical information approach (NeFGIS). The proposed method achieved better results, with f-score increases for ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-SU4 of 1.84%, 3.03%, 3.85%, and 2.08%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
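The abstract does not give the exact weighting formula, but the idea of scoring each sentence by combining a news feature score, POS-based grammatical information, and relevance to the title can be sketched as a weighted sum. Everything below (the weights, the helper names, and the assumption that each signal is normalized to [0, 1]) is illustrative, not the authors' method.

```python
def sentence_score(nefs, pos_informativeness, title_relevance,
                   w=(0.5, 0.25, 0.25)):
    """Illustrative combination of three per-sentence signals: a news-feature
    score, a POS-based informativeness score, and the sentence's relevance
    to the title (all assumed normalized to [0, 1])."""
    return w[0] * nefs + w[1] * pos_informativeness + w[2] * title_relevance

# Rank sentences by the combined score and keep the top-k as the summary.
scored = sorted(
    [("s1", sentence_score(0.8, 0.6, 0.9)),
     ("s2", sentence_score(0.4, 0.7, 0.2))],
    key=lambda x: x[1], reverse=True)
print(scored)
```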
10

Azkiyyah, Hany Nur, and Yessy Purnamasari. "INVESTIGATING TYPES OF SENTENCES ON SHORT STORIES FROM THE STORY WEAVER WEBSITE." Jurnal JOEPALLT (Journal of English Pedagogy, Linguistics, Literature, and Teaching) 11, no. 2 (September 25, 2023): 207. http://dx.doi.org/10.35194/jj.v11i2.3627.

Full text
Abstract:
Most English students are still confused about the types of sentences and how to identify them, and many are not aware of the different types. This study identifies what types of sentences are found in selected short stories on the Story Weaver website and which type of sentence is used most. A qualitative method is applied, and a simple calculation is used to identify the frequency of each type of sentence. Twelve stories at different levels were chosen based on how widely they were read. The data were taken from the Story Weaver website and divided into four groups of short stories at levels 1, 2, 3, and 4, and the sentences were classified into four types: simple, compound, complex, and compound-complex. The sentences were identified, selected according to their types, and analyzed by structure and type following the theory of Loberger and Shoup (2009). The results show that the stories consist of 563 sentences: 374 simple sentences, 35 compound sentences, 137 complex sentences, and 17 compound-complex sentences. It can be concluded that the most frequently used type of sentence is the simple sentence. The reason underlying this finding is that simple sentences are suitable for beginners, using easy words that help the reader quickly grasp the main point and conveying a clear idea concisely and straightforwardly. In addition, simple sentences are commonly used because they are easier to read and understand, particularly for non-native speakers or readers with limited language skills.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Sentences"

1

Lai, Siu-ming, and 黎少銘. "A study of compound sentences, complex sentences and sentence groups of modern Chinese language." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B44570028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Seal, Amy. "Scoring sentences developmentally: An analog of developmental sentence scoring." Diss., 2001. http://contentdm.lib.byu.edu/ETD/image/etd12.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Seal, Amy. "Scoring Sentences Developmentally: An Analog of Developmental Sentence Scoring." BYU ScholarsArchive, 2002. https://scholarsarchive.byu.edu/etd/1141.

Full text
Abstract:
A variety of tools have been developed to assist in the quantification and analysis of naturalistic language samples. In recent years, computer technology has been employed in language sample analysis. This study compares a new automated index, Scoring Sentences Developmentally (SSD), to two existing measures. Eighty samples from three corpora were manually analyzed using DSS and MLU and then processed by the automated software. Results show all three indices to be highly correlated, with correlations ranging from .62 to .98. The high correlations among scores support further investigation of the psychometric characteristics of the SSD software to determine its clinical validity and reliability. Results of this study suggest that SSD has the potential to complement other analysis procedures in assessing the language development of young children.
APA, Harvard, Vancouver, ISO, and other styles
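For readers unfamiliar with the indices being compared: mean length of utterance (MLU) is the average number of morphemes per utterance (often approximated with words), and the agreement between two indices over a set of samples is typically summarized with a correlation coefficient, as in the .62–.98 range reported above. A minimal word-based sketch (Python 3.10+ for statistics.correlation); the sample values are invented for illustration.

```python
from statistics import correlation, mean  # correlation requires Python 3.10+

def mlu(utterances):
    """Mean length of utterance, approximated here as words per utterance."""
    return mean(len(u.split()) for u in utterances)

sample = ["doggie go", "want more juice", "mommy read book now"]
print(mlu(sample))                       # (2 + 3 + 4) / 3 = 3.0

# Correlating two indices across a set of language samples (toy numbers):
dss_scores = [4.1, 5.0, 6.2, 7.3]
mlu_scores = [2.8, 3.4, 4.0, 4.9]
print(round(correlation(dss_scores, mlu_scores), 2))
```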
4

Robb, Simon. "Fictocritical sentences." 2001. http://web4.library.adelaide.edu.au/theses/09PH/09phr631.pdf.

Full text
Abstract:
Includes bibliographical references (leaves 166–168). CD-ROMs comprise Appendix A, "Family values: fictocritical sentences", and Appendix C, "Reforming the boy: fictocritical sentences". The thesis primarily enacts a fictocritical mapping of local cultural events essentially concerned with crime and trauma in Adelaide. The fictocritical treatment of these events simulates their unresolved or traumatised condition. A secondary concern is the relationship between electronic writing (hypertext) and fictocriticism.
APA, Harvard, Vancouver, ISO, and other styles
5

Thesen, Jo-Ann. "Between sentences." Thesis, Rhodes University, 2016. http://hdl.handle.net/10962/1021215.

Full text
Abstract:
My stories explore different forms, including flash fiction. Some use the fairy tale form to combine fiction and non-fiction in order to reach the essence of the story. In this I am influenced by Kate Bernheimer, who speaks of the “flatness, abstraction, intuitive logic and normalized magic” of traditional fairy tales. A number of stories are set in the places I worked as a newspaper reporter. Here I use my old press reports as starting points for the real or imagined story behind the news – often involving miscommunication, dominance, exploitation, the tension between isolation and belonging, and the nuances of family relationships.
APA, Harvard, Vancouver, ISO, and other styles
6

Souldatos, Ioannis Athanasios. "Infinitary logic, cardinals characterizable by Scott Sentences, and Independent sets of sentences." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1581476171&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bassaganyas-Bars, Toni. "Have-sentences in discourse." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/462970.

Full text
Abstract:
This dissertation looks into the interpretation of have-sentences in English. The verb have has given rise to a great amount of literature in all the subfields of linguistics; no consensus, however, has emerged on how it should be analyzed. Two of the reasons explaining this situation are the difficulty of determining what meaning have contributes to a sentence across its uses, and the definiteness effect it shows when its object contains a relational noun. In this thesis I analyze how these two problems have been tackled in the semantic literature, and I propose a new analysis that calls into question some of the assumptions this literature is built on: the transitive view of relational nouns, the nature and the scope of the definiteness effect, and a simple opposition between 'weak' and 'strong' NPs. Furthermore, I point at a possible way to integrate some of the functional uses of have into this analysis.
APA, Harvard, Vancouver, ISO, and other styles
8

Féry, Caroline, and Heiner Drenhaus. "Single prosodic phrase sentences." Universität Potsdam, 2008. http://opus.kobv.de/ubp/volltexte/2008/1937/.

Full text
Abstract:
A series of production and perception experiments investigating the prosody and well-formedness of special sentences, called Wide Focus Partial Fronting (WFPF), which consist of only one prosodic phrase and a unique initial accented argument, are reported on here. The results help us to decide between different models of German prosody. The absence of pitch height difference on the accent of the sentence speaks in favor of a relative model of prosody, in which accents are scaled relative to each other, and against models in which pitch accents are scaled in an absolute way. The results also speak for a model in which syntax, but not information structure, influences the prosodic phrasing. Finally, perception experiments show that the prosodic structure of sentences with a marked word order needs to be presented for grammaticality judgments. Presentation of written material only is not enough, and falsifies the results.
APA, Harvard, Vancouver, ISO, and other styles
9

Swampillai, Kumutha. "Information extraction across sentences." Thesis, University of Sheffield, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.575468.

Full text
Abstract:
Most relation extraction systems identify relations by searching within sentences (within-sentence relations). Such an approach excludes finding any relations that cross sentence boundaries (cross-sentence relations). This thesis quantifies the cross-sentence relations in two major information extraction corpora: ACE03 (9.4%) and MUC6 (27.4%), revealing the extent of this limitation. In response, a composite kernel approach to cross-sentence relation extraction is proposed which models relations using parse tree and flat surface features. Support vector machine classifiers are trained using cross-sentential relations from the MUC6 corpus to determine the effectiveness of this approach. It was shown that composite kernels are able to extract cross-sentential relations with f-measure scores of 0.512, 0.116 and 0.633 for PerOrg, PerPost and PostOrg models, respectively. Moreover, combining within-sentence and cross-sentence extraction models increases the number of relations correctly identified by 24% over within-sentence relation extraction alone.
APA, Harvard, Vancouver, ISO, and other styles
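The composite kernel described above combines a parse-tree kernel with a kernel over flat surface features; since a weighted sum of kernels is itself a valid kernel, the combination can be plugged directly into an SVM. The sketch below only illustrates that combination: both component kernels are stand-ins (a real tree kernel would count shared subtrees between parse trees), so none of this is the thesis's actual implementation.

```python
import numpy as np

def flat_feature_kernel(x, y):
    """Kernel over flat surface features (here: a plain dot product)."""
    return float(np.dot(x, y))

def tree_kernel(x, y):
    """Stand-in for a convolution parse-tree kernel; a real implementation
    would count shared subtrees between the two parse trees."""
    return float(np.dot(x, y))          # placeholder only

def composite_kernel(x1, t1, x2, t2, alpha=0.5):
    """Weighted sum of the two kernels; a convex combination of kernels
    is again a valid kernel, so it can be used inside an SVM directly."""
    return alpha * tree_kernel(t1, t2) + (1 - alpha) * flat_feature_kernel(x1, x2)

print(composite_kernel(np.ones(3), np.ones(4), np.ones(3), np.ones(4)))  # 3.5
```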
10

Bousbouras, Spiros. "Finite spectra of sentences." Thesis, University of Leeds, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.435789.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Sentences"

1

Hartman, Charles O. Sentences. Los Angeles: Sun & Moon Press, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pascale, Derron, ed. Sentences. Paris: Les Belles lettres, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Great Britain. Department for Education and Skills, ed. Sentences. [London]: Department for Education and Skills, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Porphyre. Sentences. Paris: Vrin, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Trussell, Philip. Sentences. [Austin, TX]: Cuneiform Press, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Unite, Jeannette. Sentences. Cape Town: Bell-Roberts Contemporary, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Flamerie de Lachapelle, Guillaume, 1978-, ed. Sentences. Paris: Les Belles Lettres, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mackie, Benita. Building sentences. Englewood Cliffs, N.J: Prentice-Hall, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lippman, Laura. Life sentences. New York: William Morrow, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Adolphe, Le Vaillant Barthélemy. Sentences: Poèmes. Longueuil, Brossard, Québec: Humanitas, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Sentences"

1

Burton-Roberts, Noel. "Sentences within sentences." In Analysing Sentences, 166–89. 5th ed. London: Routledge, 2021. http://dx.doi.org/10.4324/9781003118916-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nair, P. K. Ramachandran, and Vimala D. Nair. "Sentences." In Scientific Writing and Communication in Agriculture and Natural Resources, 67–85. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-03101-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Day, Adrian. "Sentences." In The Structure of Scientific Examination Questions, 45–69. Dordrecht: Springer Netherlands, 2013. http://dx.doi.org/10.1007/978-94-007-7488-9_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Graham, Robert. "Sentences." In How To Write Fiction (And Think About It), 180–89. London: Macmillan Education UK, 2007. http://dx.doi.org/10.1007/978-0-230-20789-9_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ballard, Kim. "Sentences." In The Frameworks of English, 144–80. London: Macmillan Education UK, 2013. http://dx.doi.org/10.1007/978-1-137-06833-0_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Freeborn, Dennis. "Sentences." In A Course Book in English Grammar, 280–93. London: Macmillan Education UK, 1995. http://dx.doi.org/10.1007/978-1-349-24079-1_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ngo, Binh. "Sentences." In Vietnamese, 152–228. Routledge Essential Grammars. Abingdon, Oxon; New York, NY: Routledge, 2020. http://dx.doi.org/10.4324/9781315454610-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jianming, Lu. "Sentences." In Singapore Mandarin Grammar I, 76–93. London: Routledge, 2022. http://dx.doi.org/10.4324/b23129-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lavender, Susan, and Stavroula Varella. "Sentences." In Grammar in Literature, 57–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98893-7_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tan, Zhongchao. "Sentences." In Academic Writing for Engineering Publications, 105–24. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99364-1_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Sentences"

1

Wang, Zhiguo, Wael Hamza, and Radu Florian. "Bilateral Multi-Perspective Matching for Natural Language Sentences." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/579.

Full text
Abstract:
Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (word-by-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions, P against Q and Q against P. In each matching direction, each time step of one sentence is matched against all time-steps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.
APA, Harvard, Vancouver, ISO, and other styles
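The "multiple perspectives" in this model come from a matching operation that re-weights two vectors with a learned weight vector per perspective and then takes their cosine similarity, producing one score per perspective. A minimal NumPy sketch of that operation (the weights here are random rather than learned, and the full bilateral matching strategies are omitted):

```python
import numpy as np

def multi_perspective_match(v1, v2, W):
    """For each perspective k (a row of W), element-wise re-weight v1 and v2
    and return their cosine similarity, giving one score per perspective."""
    scores = []
    for w in W:                      # W has shape (num_perspectives, dim)
        a, b = w * v1, w * v2
        scores.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return np.array(scores)

rng = np.random.default_rng(0)
dim, perspectives = 8, 4
print(multi_perspective_match(rng.normal(size=dim),
                              rng.normal(size=dim),
                              rng.normal(size=(perspectives, dim))))
```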
2

Yin, Yongjing, Linfeng Song, Jinsong Su, Jiali Zeng, Chulun Zhou, and Jiebo Luo. "Graph-based Neural Sentence Ordering." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/748.

Full text
Abstract:
Sentence ordering is to restore the original paragraph from a set of sentences. It involves capturing global dependencies among sentences regardless of their input order. In this paper, we propose a novel and flexible graph-based neural sentence ordering model, which adopts a graph recurrent network (Zhang et al., 2018) to accurately learn semantic representations of the sentences. Instead of assuming connections between all pairs of input sentences, we use entities that are shared among multiple sentences to make more expressive graph representations with less noise. Experimental results show that our proposed model outperforms the existing state-of-the-art systems on several benchmark datasets, demonstrating the effectiveness of our model. We also conduct a thorough analysis on how entities help the performance. Our code is available at https://github.com/DeepLearnXMU/NSEG.git.
APA, Harvard, Vancouver, ISO, and other styles
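The entity-based graph construction described above — adding an edge only between sentences that share an entity, rather than connecting every pair — can be sketched in a few lines. The capitalized-token rule below is a toy stand-in for real entity recognition, and the function is ours, not the released code.

```python
from itertools import combinations

def entity_graph(sentences):
    """Build an undirected graph over sentences, adding an edge between two
    sentences whenever they share at least one 'entity' (toy rule:
    capitalized tokens stand in for real entity mentions)."""
    entities = [{tok.strip(".,") for tok in s.split() if tok[0].isupper()}
                for s in sentences]
    edges = set()
    for i, j in combinations(range(len(sentences)), 2):
        if entities[i] & entities[j]:
            edges.add((i, j))
    return edges

sents = ["Mary founded Acme in 2001.",
         "The company grew quickly.",
         "Acme was later sold by Mary."]
print(entity_graph(sents))   # {(0, 2)}: only the sentences sharing entities are linked
```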
3

Tsukagoshi, Hayato, Ryohei Sasano, and Koichi Takeda. "DefSent: Sentence Embeddings using Definition Sentences." In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.acl-short.52.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tan, Jiwei, Xiaojun Wan, and Jianguo Xiao. "From Neural Sentence Summarization to Headline Generation: A Coarse-to-Fine Approach." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/574.

Full text
Abstract:
Headline generation is a task of abstractive text summarization, and it previously suffered from the immaturity of natural language generation techniques. Recent success of neural sentence summarization models shows the capacity of generating informative, fluent headlines conditioned on selected recapitulative sentences. In this paper, we investigate the extension of sentence summarization models to the document headline generation task. The challenge is that extending the sentence summarization model to consider more document information will mostly confuse the model and hurt the performance. We therefore propose a coarse-to-fine approach, which first identifies the important sentences of a document using document summarization techniques, and then exploits a multi-sentence summarization model with hierarchical attention to leverage the important sentences for headline generation. Experimental results on a large real dataset demonstrate that the proposed approach significantly improves the performance of neural sentence summarization models on the headline generation task.
APA, Harvard, Vancouver, ISO, and other styles
5

Mao, Yuzhao, Chang Zhou, Xiaojie Wang, and Ruifan Li. "Show and Tell More: Topic-Oriented Multi-Sentence Image Captioning." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/592.

Full text
Abstract:
Image captioning aims to generate textual descriptions for images. Most previous work generates a single-sentence description for each image. However, a picture is worth a thousand words, and a single sentence can hardly give a complete view of an image even by humans. In this paper, we propose a novel Topic-Oriented Multi-Sentence (TOMS) captioning model, which can generate multiple topic-oriented sentences to describe an image. Different from object instances or attributes, topics mined by latent Dirichlet allocation reflect hidden thematic structures in reference sentences of an image. In our model, each topic is integrated into a caption generator with a Fusion Gate Unit (FGU) to guide the generation of a sentence towards a certain topic perspective. With multiple sentences from different topics, our TOMS model provides a complete description of an image. Experimental results on both sentence and paragraph datasets demonstrate the effectiveness of our TOMS model in terms of topical consistency and descriptive completeness.
APA, Harvard, Vancouver, ISO, and other styles
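The Fusion Gate Unit is, in spirit, a learned gate that decides how much topic information to mix into the caption generator's state at each step. The form below (a sigmoid gate interpolating element-wise between the hidden state and the topic vector) is a common gating construction offered only as an illustrative guess at the mechanism; the paper's exact formulation may differ.

```python
import numpy as np

def fusion_gate(hidden, topic, W, b):
    """Sigmoid gate over [hidden; topic]; the output interpolates element-wise
    between the generator's hidden state and the topic vector."""
    g = 1.0 / (1.0 + np.exp(-(W @ np.concatenate([hidden, topic]) + b)))
    return g * hidden + (1.0 - g) * topic

rng = np.random.default_rng(0)
d = 5
print(fusion_gate(rng.normal(size=d), rng.normal(size=d),
                  rng.normal(size=(d, 2 * d)), rng.normal(size=d)))
```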
6

Xie, Zhongbin, and Shuai Ma. "Dual-View Variational Autoencoders for Semi-Supervised Text Matching." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/737.

Full text
Abstract:
Semantically matching two text sequences (usually two sentences) is a fundamental problem in NLP. Most previous methods either encode each of the two sentences into a vector representation (sentence-level embedding) or leverage word-level interaction features between the two sentences. In this study, we propose to take the sentence-level embedding features and the word-level interaction features as two distinct views of a sentence pair, and unify them with a framework of Variational Autoencoders such that the sentence pair is matched in a semi-supervised manner. The proposed model is referred to as Dual-View Variational AutoEncoder (DV-VAE), where the optimization of the variational lower bound can be interpreted as an implicit Co-Training mechanism for two matching models over distinct views. Experiments on SNLI, Quora and a Community Question Answering dataset demonstrate the superiority of our DV-VAE over several strong semi-supervised and supervised text matching models.
APA, Harvard, Vancouver, ISO, and other styles
7

Zheng, Zaixiang, Xiang Yue, Shujian Huang, Jiajun Chen, and Alexandra Birch. "Towards Making the Most of Context in Neural Machine Translation." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/551.

Full text
Abstract:
Document-level machine translation manages to outperform sentence-level models by a small margin, but has failed to be widely adopted. We argue that previous research did not make clear use of the global context, and propose a new document-level NMT framework that deliberately models the local context of each sentence with awareness of the global context of the document in both source and target languages. We specifically design the model to be able to deal with documents containing any number of sentences, including single sentences. This unified approach allows our model to be trained elegantly on standard datasets without needing to train on sentence-level and document-level data separately. Experimental results demonstrate that our model outperforms Transformer baselines and previous document-level NMT models with substantial margins of up to 2.1 BLEU on state-of-the-art baselines. We also provide analyses which show the benefit of context far beyond the neighboring two or three sentences, which previous studies have typically incorporated.
APA, Harvard, Vancouver, ISO, and other styles
8

Tan, Chuanqi, Furu Wei, Wenhui Wang, Weifeng Lv, and Ming Zhou. "Multiway Attention Networks for Modeling Sentence Pairs." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/613.

Full text
Abstract:
Modeling sentence pairs plays a vital role in judging the relationship between two sentences, as in paraphrase identification, natural language inference, and answer sentence selection. Previous work achieves very promising results using neural networks with an attention mechanism. In this paper, we propose multiway attention networks, which employ multiple attention functions to match sentence pairs under the matching-aggregation framework. Specifically, we design four attention functions to match words in corresponding sentences. Then, we aggregate the matching information from each function, and combine the information from all functions to obtain the final representation. Experimental results demonstrate that the proposed multiway attention networks improve the results on the Quora Question Pairs, SNLI, and MultiNLI datasets, and on the answer sentence selection task on the SQuAD dataset.
APA, Harvard, Vancouver, ISO, and other styles
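The four attention functions are not spelled out in the abstract; the sketch below simply illustrates the common families of scoring functions such a model can mix (dot-product, bilinear, difference-based, and concatenation-based), with randomly initialized parameters standing in for learned ones. It should be read as a generic illustration, not as the paper's definitions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
W_b = rng.normal(size=(d, d))        # bilinear parameter (randomly initialized here)

def dot_score(q, k):      return q @ k
def bilinear_score(q, k): return q @ W_b @ k
def minus_score(q, k):    return -np.linalg.norm(q - k)     # difference-based
def concat_score(q, k, w=rng.normal(size=2 * d)):
    return w @ np.concatenate([q, k])                        # concatenation-based

q, k = rng.normal(size=d), rng.normal(size=d)
for f in (dot_score, bilinear_score, minus_score, concat_score):
    print(f.__name__, round(float(f(q, k)), 3))
```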
9

Tomita, Masaru. ""Linguistic" sentences and "real" sentences." In the 12th conference. Morristown, NJ, USA: Association for Computational Linguistics, 1988. http://dx.doi.org/10.3115/991719.991732.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Shaonan, Jiajun Zhang, and Chengqing Zong. "Learning Sentence Representation with Guidance of Human Attention." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/578.

Full text
Abstract:
Recently, much progress has been made in learning general-purpose sentence representations that can be used across domains. However, most of the existing models typically treat each word in a sentence equally. In contrast, extensive studies have shown that humans read sentences efficiently by making a sequence of fixations and saccades. This motivates us to improve sentence representations by assigning different weights to the vectors of the component words, which can be treated as an attention mechanism on single sentences. To that end, we propose two novel attention models, in which the attention weights are derived using significant predictors of human reading time, i.e., Surprisal, POS tags and CCG supertags. The extensive experiments demonstrate that the proposed methods significantly improve upon the state-of-the-art sentence representation models.
APA, Harvard, Vancouver, ISO, and other styles
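The weighting idea above — building a sentence vector as a non-uniform average of its word vectors, with weights driven by reading-time predictors such as surprisal — can be sketched as follows. The softmax-over-surprisal weighting is an illustrative assumption, not the authors' exact scheme.

```python
import numpy as np

def weighted_sentence_vector(word_vectors, surprisals):
    """Combine word vectors into a sentence vector, giving more weight to
    words with higher surprisal (weights normalized with a softmax)."""
    s = np.asarray(surprisals, dtype=float)
    weights = np.exp(s - s.max())
    weights /= weights.sum()
    return weights @ np.asarray(word_vectors)

vecs = np.eye(3)                      # three toy 3-dimensional word vectors
print(weighted_sentence_vector(vecs, [1.0, 4.0, 2.0]))
```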

Reports on the topic "Sentences"

1

Ramos, Octavio Jr. Words, Sentences, and Ideas. Office of Scientific and Technical Information (OSTI), December 2019. http://dx.doi.org/10.2172/1581261.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jones, Cat, and Clare Lally. Prison population growth: drivers, implications and policy considerations. Parliamentary Office of Science and Technology, January 2024. http://dx.doi.org/10.58248/pb58.

Full text
Abstract:
England and Wales have the highest per capita prison population in Western Europe. In October 2023, over 88,000 people were imprisoned, in an estate with a maximum capacity of 88,890. This was the highest number recorded. 94% of people in prison are adult men and the adult male prison estate is almost full. The prison estate is operating at 99% of its usable operational capacity and over 60% of prisons are overcrowded. Drivers of the current prison population growth include changes in sentencing policy (including increased sentence lengths). Other factors include remand, recall, reoffending and policing. The number of people given immediate custodial sentences has fallen from 98,044 in 2012, to 67,812 in 2022. This suggests that the prison population increase is not driven by more convictions. Nearing capacity can have negative implications for the safe operation of prisons, and for the health, wellbeing and rehabilitation of people in prison. Government action to avoid exceeding capacity includes expanding the prison estate and releasing some prisoners up to 18 days early. As of December 2023, three relevant bills are progressing through Parliament: the Sentencing Bill 2023, the Criminal Justice Bill 2023, and the Victims and Prisoners Bill 2023. Each contains a range of measures, with some likely to reduce the prison population and others likely to increase it. Various stakeholders have proposed additional policy options, such as the greater use of non-custodial sentences, and interventions to reduce the remand and recall populations. Some experts in this field have highlighted the role of public opinion in relation to sentencing policy and the relationship between prisons and the wider justice system. Evidence suggests that the public generally overestimate crime rates and underestimate sentence lengths, and that better-informed members of the public are less likely to view sentences as lenient. More high-quality research is needed to better understand the drivers of increased sentence length and to evaluate health and rehabilitation programmes in the prison context.
APA, Harvard, Vancouver, ISO, and other styles
3

Bond, Z. S., Thomas J. Moore, and Kate McCreight. Acoustic Characteristics of Sentences Produced in Noise. Fort Belvoir, VA: Defense Technical Information Center, September 1989. http://dx.doi.org/10.21236/ada235344.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Polinsky, A. Mitchell, and Steven Shavell. Deterrence and the Adjustment of Sentences During Imprisonment. Cambridge, MA: National Bureau of Economic Research, July 2019. http://dx.doi.org/10.3386/w26083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gates, Allison, Michelle Gates, Shannon Sim, Sarah A. Elliott, Jennifer Pillay, and Lisa Hartling. Creating Efficiencies in the Extraction of Data From Randomized Trials: A Prospective Evaluation of a Machine Learning and Text Mining Tool. Agency for Healthcare Research and Quality (AHRQ), August 2021. http://dx.doi.org/10.23970/ahrqepcmethodscreatingefficiencies.

Full text
Abstract:
Background. Machine learning tools that semi-automate data extraction may create efficiencies in systematic review production. We prospectively evaluated an online machine learning and text mining tool’s ability to (a) automatically extract data elements from randomized trials, and (b) save time compared with manual extraction and verification. Methods. For 75 randomized trials published in 2017, we manually extracted and verified data for 21 unique data elements. We uploaded the randomized trials to ExaCT, an online machine learning and text mining tool, and quantified performance by evaluating the tool’s ability to identify the reporting of data elements (reported or not reported), and the relevance of the extracted sentences, fragments, and overall solutions. For each randomized trial, we measured the time to complete manual extraction and verification, and to review and amend the data extracted by ExaCT (simulating semi-automated data extraction). We summarized the relevance of the extractions for each data element using counts and proportions, and calculated the median and interquartile range (IQR) across data elements. We calculated the median (IQR) time for manual and semiautomated data extraction, and overall time savings. Results. The tool identified the reporting (reported or not reported) of data elements with median (IQR) 91 percent (75% to 99%) accuracy. Performance was perfect for four data elements: eligibility criteria, enrolment end date, control arm, and primary outcome(s). Among the top five sentences for each data element at least one sentence was relevant in a median (IQR) 88 percent (83% to 99%) of cases. Performance was perfect for four data elements: funding number, registration number, enrolment start date, and route of administration. Among a median (IQR) 90 percent (86% to 96%) of relevant sentences, pertinent fragments had been highlighted by the system; exact matches were unreliable (median (IQR) 52 percent [32% to 73%]). A median 48 percent of solutions were fully correct, but performance varied greatly across data elements (IQR 21% to 71%). Using ExaCT to assist the first reviewer resulted in a modest time savings compared with manual extraction by a single reviewer (17.9 vs. 21.6 hours total extraction time across 75 randomized trials). Conclusions. Using ExaCT to assist with data extraction resulted in modest gains in efficiency compared with manual extraction. The tool was reliable for identifying the reporting of most data elements. The tool’s ability to identify at least one relevant sentence and highlight pertinent fragments was generally good, but changes to sentence selection and/or highlighting were often required.
APA, Harvard, Vancouver, ISO, and other styles
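Two of the figures above are easy to reconstruct: the "median (IQR)" summaries are percentile statistics over the per-element results, and the reported 17.9 vs. 21.6 hours corresponds to a time saving of roughly 17%. A small illustration, using made-up per-element accuracies:

```python
import numpy as np

accuracies = [0.75, 0.91, 0.99, 1.00, 0.83, 0.96]   # per-element accuracies (toy values)
q1, median, q3 = np.percentile(accuracies, [25, 50, 75])
print(f"median {median:.2f} (IQR {q1:.2f} to {q3:.2f})")

manual_hours, assisted_hours = 21.6, 17.9            # totals reported in the abstract
print(f"time saving: {(manual_hours - assisted_hours) / manual_hours:.0%}")  # ~17%
```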
6

Brown, Peter F., Stephen A. Della Pietra, Vincent J. Della Pietra, Robert L. Mercer, and Surya Mohanty. Dividing and Conquering Long Sentences in a Translation System. Fort Belvoir, VA: Defense Technical Information Center, January 1992. http://dx.doi.org/10.21236/ada460274.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Surendranath, Anup. Indian court fighting against a tide of death sentences. Edited by Reece Hooker. Monash University, August 2022. http://dx.doi.org/10.54377/37ad-0804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mack, M. A., and B. Gold. The Intelligibility of Non-Vocoded and Vocoded Semantically Anomalous Sentences. Fort Belvoir, VA: Defense Technical Information Center, July 1985. http://dx.doi.org/10.21236/ada160401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mutebi, Natasha. The use of short prison sentences in England and Wales. Parliamentary Office of Science and Technology, UK Parliament, July 2023. http://dx.doi.org/10.58248/pb52.

Full text
Abstract:
This POSTbrief summarises the most recent and relevant evidence on the use and effectiveness of short prison sentences. It outlines the key trends, alternative sentencing options and wider considerations.
APA, Harvard, Vancouver, ISO, and other styles
10

Baker, James, Janet Baker, Paul Bamberg, Kathleen Bishop, Larry Gillick, Vera Helman, Zezhen Huang, et al. Large Vocabulary Recognition of Wall Street Journal Sentences at Dragon Systems. Fort Belvoir, VA: Defense Technical Information Center, January 1992. http://dx.doi.org/10.21236/ada460288.

Full text
APA, Harvard, Vancouver, ISO, and other styles