Academic literature on the topic 'Short text understanding'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Short text understanding.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Short text understanding"

1

Umanandhini, D., and S. Manimegalai. "Fuzzy Score Based Short Text Understanding from Corpus Data Using Semantic Discovery." International Journal of Engineering Sciences & Research Technology 6, no. 12 (2017): 268–73. https://doi.org/10.5281/zenodo.1116682.

Full text
Abstract:
Short texts are inherently ambiguous. They are produced as search queries, tags, keywords, conversational messages, and social posts, and carry limited context. In general, short texts do not contain a sufficient amount of data to support many state-of-the-art text-mining approaches such as topic modelling. This paper presents a comprehensive overview of short text understanding and proposes a novel framework consisting of a Text Feature Extraction algorithm and a Fuzzy Weighted Vote algorithm. First, text classification is based on semantic feature extraction, with the goal of using semantic features to improve classifier performance. Second, the Fuzzy Weighted Vote algorithm combines fuzzy logic with weighted voting: it generates a fuzzy score, and a weight is then calculated from this score while the text is shortened. In the experimental results, the proposed feature extraction and voting scheme outperforms previous classification algorithms, providing high accuracy and good accessibility while reducing computation time compared with previous processes and schemes. The authors show that, in problems where the weighted vote distinguishes among alternatives, it finds the best alternative.
APA, Harvard, Vancouver, ISO, and other styles
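The fuzzy-weighted voting idea described in the abstract above can be sketched in a few lines. This is only an illustrative reading of the approach, not the paper's actual algorithm: the triangular membership function and all names below are assumptions introduced for the example.

```python
def fuzzy_weight(score: float) -> float:
    """Map a raw classifier confidence in [0, 1] to a fuzzy weight via a
    simple triangular membership peaking at full confidence (assumed form)."""
    return max(0.0, 1.0 - abs(score - 1.0))

def fuzzy_weighted_vote(votes):
    """votes: list of (label, confidence) pairs from individual classifiers.
    Returns the label with the highest sum of fuzzy-weighted confidences."""
    totals = {}
    for label, conf in votes:
        totals[label] = totals.get(label, 0.0) + fuzzy_weight(conf) * conf
    return max(totals, key=totals.get)

# Two confident "sports" votes outweigh one weak "politics" vote.
print(fuzzy_weighted_vote([("sports", 0.9), ("politics", 0.4), ("sports", 0.7)]))
```

Because each vote is scaled by a membership-derived weight rather than counted equally, low-confidence classifiers contribute little to the final decision.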
2

Ivgi, Maor, Uri Shaham, and Jonathan Berant. "Efficient Long-Text Understanding with Short-Text Models." Transactions of the Association for Computational Linguistics 11 (2023): 284–99. http://dx.doi.org/10.1162/tacl_a_00547.

Full text
Abstract:
Transformer-based pretrained language models (LMs) are ubiquitous across natural language understanding, but cannot be applied to long sequences such as stories, scientific articles, and long documents due to their quadratic complexity. While a myriad of efficient transformer variants have been proposed, they are typically based on custom implementations that require expensive pretraining from scratch. In this work, we propose SLED: SLiding-Encoder and Decoder, a simple approach for processing long sequences that re-uses and leverages battle-tested short-text pretrained LMs. Specifically, we partition the input into overlapping chunks, encode each with a short-text LM encoder and use the pretrained decoder to fuse information across chunks (fusion-in-decoder). We illustrate through controlled experiments that SLED offers a viable strategy for long text understanding and evaluate our approach on SCROLLS, a benchmark with seven datasets across a wide range of language understanding tasks. We find that SLED is competitive with specialized models that are up to 50x larger and require a dedicated and expensive pretraining step.
APA, Harvard, Vancouver, ISO, and other styles
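The chunking step SLED's abstract describes, partitioning a long input into overlapping windows so each fits a short-text encoder, can be sketched as follows. The helper name, chunk size, and overlap are illustrative defaults, not SLED's actual implementation details.

```python
def overlapping_chunks(tokens, chunk_size=256, overlap=32):
    """Partition a token sequence into overlapping chunks so that each
    chunk can be encoded independently by a short-text encoder, while the
    overlap preserves context across chunk boundaries."""
    step = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(tokens) - overlap, 1), step):
        chunks.append(tokens[start:start + chunk_size])
    return chunks

toks = list(range(600))
chunks = overlapping_chunks(toks)
print(len(chunks))       # three chunks cover 600 tokens at size 256, overlap 32
print(chunks[1][:2])     # second chunk starts 224 tokens in (256 - 32 overlap)
```

In the full SLED pipeline, each chunk's encoder output would then be concatenated and fed to a single pretrained decoder, which fuses information across chunks.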
3

Katekar, Aparna M., and Antara Bhattacharya. "A Survey on Short Text Understanding." International Journal of Engineering Trends and Technology 42, no. 6 (2016): 291–92. http://dx.doi.org/10.14445/22315381/ijett-v42p253.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Jun, Guimin Huang, Jianheng Chen, and Yabing Wang. "Short Text Understanding Combining Text Conceptualization and Transformer Embedding." IEEE Access 7 (2019): 122183–91. http://dx.doi.org/10.1109/access.2019.2938303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shobana, J., S. Amutha, and M. Murali. "Understanding Short Text Through Lexical Semantic Analysis." IOP Conference Series: Materials Science and Engineering 1130, no. 1 (2021): 012038. http://dx.doi.org/10.1088/1757-899x/1130/1/012038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sun, Yaru, Ying Yang, and Dawei Yang. "Informed Graph Convolution Networks for Multilingual Short Text Understanding." Procedia Computer Science 207 (2022): 90–99. http://dx.doi.org/10.1016/j.procs.2022.09.041.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Banegas, Darío Luis. "Understanding a reader's attraction to a literary short text." Colombian Applied Linguistics Journal 16, no. 1 (2014): 105. http://dx.doi.org/10.14483/udistrital.jour.calj.2014.1.a09.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ji, Lei, Yujing Wang, Botian Shi, Dawei Zhang, Zhongyuan Wang, and Jun Yan. "Microsoft Concept Graph: Mining Semantic Concepts for Short Text Understanding." Data Intelligence 1, no. 3 (2019): 238–70. http://dx.doi.org/10.1162/dint_a_00013.

Full text
Abstract:
Knowledge is important for text-related applications. In this paper, we introduce Microsoft Concept Graph, a knowledge graph engine that provides concept tagging APIs to facilitate the understanding of human languages. Microsoft Concept Graph is built upon Probase, a universal probabilistic taxonomy consisting of instances and concepts mined from the Web. We start by introducing the construction of the knowledge graph through iterative semantic extraction and taxonomy construction procedures, which extract 2.7 million concepts from 1.68 billion Web pages. We then use conceptualization models to represent text in the concept space to empower text-related applications, such as topic search, query recommendation, Web table understanding and Ads relevance. Since the release in 2016, Microsoft Concept Graph has received more than 100,000 pageviews, 2 million API calls and 3,000 registered downloads from 50,000 visitors over 64 countries.
APA, Harvard, Vancouver, ISO, and other styles
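The conceptualization models mentioned in the abstract map terms into a concept space using probabilistic isA knowledge. A toy sketch of that idea follows; the tiny isA table and probabilities are invented for illustration (the real Probase-backed graph holds millions of mined instance-concept pairs and a richer scoring model).

```python
# Invented toy isA table: P(concept | instance). Illustrative only.
ISA = {
    "python": {"programming language": 0.7, "snake": 0.3},
    "java":   {"programming language": 0.8, "island": 0.2},
}

def conceptualize(terms):
    """Combine P(concept | instance) across co-occurring terms so that a
    shared concept dominates, disambiguating each term by its neighbours."""
    scores = {}
    for term in terms:
        for concept, p in ISA.get(term, {}).items():
            scores[concept] = scores.get(concept, 0.0) + p
    return max(scores, key=scores.get) if scores else None

# "python" alone is ambiguous, but next to "java" the shared concept wins.
print(conceptualize(["python", "java"]))
```

This is the core trick behind short text conceptualization: even with almost no context, one neighbouring term is often enough to pick the right concept.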
9

Valikova, Olga A., and Alena S. Demchenko. "Translingual Literary Text: on Problem of Understanding." Polylinguality and Transcultural Practices 17, no. 3 (2020): 352–62. http://dx.doi.org/10.22363/2618-897x-2020-17-3-352-362.

Full text
Abstract:
The given study covers a topical interdisciplinary issue: Russian-language, post-Soviet Russian literature that includes the otherness of multiple ethnic cultures and creates unique images of the world. In the modern conventional sense, culture is replaced by transculture: a space of interaction and mutual repulsion, intertwinement, constellation, overlapping, and flowing of cultures into one another. These processes have no, and cannot have, any solidified, final forms determined once and for all. Therefore, works created in the aesthetics of transculturation are always unique, be it a literary text, a musical message, or a silent arthouse short film speaking the language of negative space. We believe that a transcultural episteme should be used in the formation of new thinking. A person without a developed pragmatic presupposition is deprived of explanatory knowledge and becomes a victim of information manipulation that embeds a model of confrontational perception of the Other into the collective consciousness. In this work, we demonstrate a method of working with higher-school students that we call 'Immersion Reading.' Using works by Russian Germans (in particular, E. Seifert and G. Belger), we describe step by step the stages of a reader's immersion into a literary work (context verticalization, creation of a hermeneutic commentary, stages of typification and differentiation of texts within a chosen paradigm, and synthesis), and then present for discussion the results of our work with students and doctoral candidates in Russia and Kazakhstan within the lecture courses 'Literature and Globalization' and 'Intercultural Communication in the Art Dimension' (author of the courses: U. M. Bakhtikireeva).
APA, Harvard, Vancouver, ISO, and other styles
10

Abdalgader, Khaled, Atheer A. Matroud, and Ghaleb Al-Doboni. "Temporal Dynamics in Short Text Classification: Enhancing Semantic Understanding Through Time-Aware Model." Information 16, no. 3 (2025): 214. https://doi.org/10.3390/info16030214.

Full text
Abstract:
Traditional text classification models predominantly rely on static text representations, failing to capture temporal variations in language usage and evolving semantic meanings. This limitation reduces their ability to accurately classify time-sensitive texts, where understanding context, detecting trends, and addressing semantic shifts over time are critical. This paper introduces a novel time-aware short text classification model incorporating temporal information, enabling tracking of and adaptation to evolving language semantics. The proposed model enhances contextual understanding by leveraging timestamps and significantly improves classification accuracy, particularly for time-sensitive applications such as News topic classification. The model employs a hybrid architecture combining Convolutional Neural Networks (CNNs) and Bidirectional Long Short-Term Memory (BiLSTM) networks, enriched with attention mechanisms to capture both local and global dependencies. To further refine semantic representation and mitigate the effects of semantic drift, the model fine-tunes GloVe embeddings and employs synonym-based data augmentation. The proposed approach is evaluated on three benchmark dynamic datasets, achieving superior performance with classification accuracy reaching 92% for the first two datasets and 85% for the third dataset. Furthermore, the model is applied to a different-fields categorization and trend analysis task, demonstrating its capability to capture temporal patterns and perform detailed trend analysis of domain-agnostic textual content. These results underscore the potential of the proposed framework to provide deeper insights into the evolving nature of language and its impact on short-text classification. This work advances natural language processing by offering a comprehensive time-aware classification framework, addressing the challenges of temporal dynamics in language semantics.
APA, Harvard, Vancouver, ISO, and other styles
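The time-aware idea above, letting a classifier condition on when a text was written, can be sketched by concatenating a text embedding with a cyclic encoding of the timestamp. This is a minimal illustration, not the paper's CNN-BiLSTM architecture; the `embed` stub stands in for fine-tuned GloVe vectors and all names are assumptions.

```python
import math

def embed(text, dim=4):
    """Placeholder embedding: a deterministic hash-based vector standing in
    for real word embeddings (e.g., fine-tuned GloVe)."""
    words = text.split() + ["<pad>"] * dim
    return [((hash(w) % 1000) / 1000.0) for w in words][:dim]

def time_features(day_of_year):
    """Encode the timestamp cyclically so Dec 31 sits close to Jan 1,
    rather than at the opposite end of a linear scale."""
    angle = 2 * math.pi * day_of_year / 365.0
    return [math.sin(angle), math.cos(angle)]

def time_aware_vector(text, day_of_year):
    """Concatenate text and time features for a downstream classifier."""
    return embed(text) + time_features(day_of_year)

vec = time_aware_vector("stocks rally after earnings", 45)
print(len(vec))  # 4 text dimensions plus 2 time dimensions
```

A downstream classifier trained on such vectors can, in principle, learn that the same wording signals different topics in different periods, which is the semantic-drift problem the paper targets.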
More sources

Dissertations / Theses on the topic "Short text understanding"

1

Andersson, Anna. "Understanding the 'mess' in text messages : An analysis of humorous text message exchanges shared in social media platforms." Thesis, Karlstads universitet, Institutionen för språk, litteratur och interkultur, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-40203.

Full text
Abstract:
The concept 'mess-understanding' has circulated in online media and is so prevalent that it is now included in the Urban Dictionary. The folk concept of mess-understandings is a pun on misunderstandings arising in an online media context. Posting one's own or others' miscommunication and/or typographical errors has grown to be a popular way of sharing humor via cross-platform sharing on the Internet. The aim of this paper is to analyze short message service (SMS) dialogues shared in social media, with a special emphasis on those with the highest degree of 'shareability' and/or popularity. The study specifically focuses on understanding linguistic and communicative reasons behind these dialogues being treated as humorous by users. As such, the study aims to shed light upon current cultural conceptions of communication and humor. Data was collected from the photo sharing website Pinterest from users who had posted or reposted 'screen shots' from their own or others' SMS conversations. In order to collect as much valuable data as possible, a manual search strategy was developed with three different word strings which resulted in a corpus of 160 dialogues. Content analysis of the data revealed certain recurrent humor themes, such as allusions to sexual conduct or bodily functions, generation gaps, technology difficulties, and lexical ambiguity.
APA, Harvard, Vancouver, ISO, and other styles
2

Cunningham-Nelson, Samuel Kayne. "Enhancing student conceptual understanding and learning experience through automated textual analysis." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/134145/1/Samuel_Cunningham-Nelson_Thesis.pdf.

Full text
Abstract:
Supporting students to develop a strong foundation for thorough understanding, and assisting educators in teaching effectively, both require the utilization of meaningful feedback. The contributions presented in this thesis aimed to provide instantaneous, and individualised feedback for both students and educators through the use of text analysis. The methodologies and models described are all automated, therefore once implemented can provide feedback routinely and recurrently. These solutions facilitate both learning and teaching for students and educators, respectively, helping to close the quality assurance loop.
APA, Harvard, Vancouver, ISO, and other styles
3

Reboud, Alison. "Towards automatic understanding of narrative audiovisual content." Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS398.pdf.

Full text
Abstract:
Modern storytelling is digital and video-based. Understanding the stories contained in videos remains a challenge for automatic systems. Having multimodality as a transversal theme, this research thesis breaks down the "understanding" task into the following challenges: predicting memorability, summarising, and modelling stories from audiovisual content.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Short text understanding"

1

van den Dool, Huug. Empirical Methods in Short-Term Climate Prediction. Oxford University Press, 2006. http://dx.doi.org/10.1093/oso/9780199202782.001.0001.

Full text
Abstract:
This clear and accessible text describes the methods underlying short-term climate prediction at time scales of 2 weeks to a year. Although a difficult range to forecast accurately, there have been several important advances in the last ten years, most notably in understanding ocean-atmosphere interaction (El Nino for example), the release of global coverage data sets, and in prediction methods themselves. With an emphasis on the empirical approach, the text covers in detail empirical wave propagation, teleconnections, empirical orthogonal functions, and constructed analogue. It also provides a detailed description of nearly all methods used operationally in long-lead seasonal forecasts, with new examples and illustrations. The challenges of making a real time forecast are discussed, including protocol, format, and perceptions about users. Based where possible on global data sets, illustrations are not limited to the Northern Hemisphere, but include several examples from the Southern Hemisphere.
APA, Harvard, Vancouver, ISO, and other styles
2

Rex, Richard. A Short History of the Tudors. Bloomsbury Publishing Plc, 2023. http://dx.doi.org/10.5040/9781350414389.

Full text
Abstract:
Combining an expertise on the Tudor dynasty with an authoritative understanding of its religious and political make-up, A Short History of the Tudors provides a fresh and accessible perspective on one of the most formative periods of British history. Rex considers the ways in which the Tudors shaped the beginnings of modern England through the momentous break with Rome in a comprehensive yet balanced way. Close attention is also paid to the dismantling of the baronial system and the centralisation of secular power, as well as an exploration of the break with Rome, the two pillars on which the author's argument rests. The book is organised chronologically and divided into time periods, making it the ultimate companion for anyone keen to delve into the history of Britain's most notorious dynasty. The famous and infamous key players in the Tudor age have long endured in textbooks and are brought to life here by Rex. Lively portraits of John Fisher, Thomas More, Thomas Wolsey, and Mary Queen of Scots are painted, as well as of lesser-known players like the flamboyant Robert Devereux. A leading authority on the Tudors and British religious history, Richard Rex brings to life a dynasty which continues to engage and fascinate readers.
APA, Harvard, Vancouver, ISO, and other styles
3

Ballamingie, Patricia. Showing Theory to Know Theory: Understanding social science concepts through illustrative vignettes. Showing Theory Press, 2022. http://dx.doi.org/10.22215/stkt.

Full text
Abstract:
This collaborative, open educational resource brings together a collection of short pedagogical texts that help new learners understand complex theoretical concepts and disciplinary jargon from the critical social sciences. Each entry "shows" an element of theory using an "illustrative vignette": a short, evocative story, visual or infographic, poem, described photograph, or other audio-visual material. Of use across disciplines and community contexts, Showing Theory aims to democratize theory while linking it to practical, grounded experience.
APA, Harvard, Vancouver, ISO, and other styles
4

Yarbrough, Robert W. The Letters to Timothy and Titus. Wm. B. Eerdmans Publishing Co., 2018. http://dx.doi.org/10.5040/bci-0010.

Full text
Abstract:
The Pastoral Letters—1 Timothy, 2 Timothy, and Titus—have made an enduring contribution to understanding the role of pastors in the church. With a spirited devotion to the text, Robert Yarbrough helps unlock the meaning of these short but rich letters in this commentary. In keeping with the character of Pillar New Testament Commentary volumes, The Letters to Timothy and Titus offers a straightforward reading of these texts. Their primary concerns—God, salvation, and the pastoral task—remain central to Yarbrough’s thorough and comprehensive exegesis. Engaging with the best scholarship and resources, Yarbrough shows how these letters are as relevant today as they were to the early Christians.
APA, Harvard, Vancouver, ISO, and other styles
5

Bodenhamer, David J. The U.S. Constitution: A Very Short Introduction. Oxford University Press, 2018. http://dx.doi.org/10.1093/actrade/9780195378320.001.0001.

Full text
Abstract:
The U.S. Constitution: A Very Short Introduction explores the major themes of American constitutional history—federalism, the balance of powers, property, representation, equality, rights, and security. Informed by the latest scholarship, each theme illustrates how the Constitution has served as a dynamic framework for legitimating power and advancing liberty. Today, we face serious challenges to the nation’s constitutional legacy. Endless wars, a sharply divided electorate and deadlocked government, economic inequality, immigration, cybersecurity and privacy, and foreign interference in the nation’s democratic processes have placed demands on government and on society that test our constitutional values. Understanding how the Constitution has evolved will help us adapt its principles to the challenges of our age.
APA, Harvard, Vancouver, ISO, and other styles
6

Lim, Timothy H. The Dead Sea Scrolls: A Very Short Introduction. Oxford University Press, 2017. http://dx.doi.org/10.1093/actrade/9780198779520.001.0001.

Full text
Abstract:
The Dead Sea Scrolls: A Very Short Introduction discusses the cultural significance of the discovery of the Dead Sea Scrolls and the religious, political, and legal controversies during the seventy years of study since they were found. It looks at the contribution the scrolls have made to our understanding of the Old Testament or Hebrew Bible, and the origins of early Christianity. Exploring the most recent scholarly discussions on the archaeology of Khirbet Qumran, and the study of the biblical texts, the canon, and the history of the Second Temple Period, it considers what the scrolls reveal about the communities closely associated with the scrolls and sectarianism in early Judaism.
APA, Harvard, Vancouver, ISO, and other styles
7

Tiwari, Sandip. Semiconductor Physics. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198759867.001.0001.

Full text
Abstract:
A graduate-level text, Semiconductor physics: Principles, theory and nanoscale covers the central topics of the field, together with advanced topics related to the nanoscale and to quantum confinement, and integrates the understanding of important attributes that go beyond the conventional solid-state and statistical expositions. Topics include the behavior of electrons, phonons and photons; the energy and entropic foundations; bandstructures and their calculation; the behavior at surfaces and interfaces, including those of heterostructures and their heterojunctions; deep and shallow point perturbations; scattering and transport, including mesoscale behavior, using the evolution and dynamics of classical and quantum ensembles from a probabilistic viewpoint; energy transformations; light-matter interactions; the role of causality; the connections between the quantum and the macroscale that lead to linear responses and Onsager relationships; fluctuations and their connections to dissipation, noise and other attributes; stress and strain effects in semiconductors; properties of high permittivity dielectrics; and remote interaction processes. The final chapter discusses the special consequences of the principles to the variety of properties (consequences of selection rules, for example) under quantum-confined conditions and in monolayer semiconductor systems. The text also brings together short appendices discussing transform theorems integral to this study, the nature of random processes, oscillator strength, A and B coefficients and other topics important for understanding semiconductor behavior. The text brings the study of semiconductor physics to the same level as that of the advanced texts of solid state by focusing exclusively on the equilibrium and off-equilibrium behaviors important in semiconductors.
APA, Harvard, Vancouver, ISO, and other styles
8

Bruun, Christer. Roman Government and Administration. Edited by Christer Bruun and Jonathan Edmondson. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780195336467.013.014.

Full text
Abstract:
This chapter outlines how critical inscriptions are for our understanding of the functioning of the administrative structures of Roman government. The author discusses the best methodology for using epigraphic texts to reconstruct Roman administration, showing how even short texts can provide critical pieces of evidence, especially during the imperial period. Knowing how to use arguments from silence is shown to be a crucial element in the modern study of Roman government.
APA, Harvard, Vancouver, ISO, and other styles
9

Ward, Graeme. History, Scripture, and Authority in the Carolingian Empire. British Academy, 2022. http://dx.doi.org/10.5871/bacad/9780197267288.001.0001.

Full text
Abstract:
This book offers a detailed analysis of the work of the ninth-century historian Frechulf of Lisieux. Completed c. 830, Frechulf’s Histories comprise a vast account of the world from its creation through to the seventh century. Despite the richness of the source, it has long been overlooked by modern scholars. Two factors account for this neglect: Frechulf’s narrative stops over two centuries short of his time of writing, and was largely a compilation of earlier, late antique histories and chronicles. It is, however, the lack of ostensibly ‘contemporary’ or ‘original’ material that makes the text so typical, not only of Carolingian historiography but also of ninth-century theological literature more broadly. In examining Frechulf's historiographical compendium, this book challenges a dominant paradigm within medieval studies of understanding history-writing primarily as an extension of politics and power. By focusing instead on the transmission and reception of patristic knowledge, the compilation of authoritative texts, and the relationship between the study of history and scriptural exegesis, it reveals Frechulf's Histories to be an unexpectedly rich artefact of Carolingian intellectual culture.
APA, Harvard, Vancouver, ISO, and other styles
10

Erickson, Amy. Jonah. Wm. B. Eerdmans Publishing Co., 2021. http://dx.doi.org/10.5040/bci-0090.

Full text
Abstract:
The dominant reading of the book of Jonah—that the hapless prophet Jonah is a lesson in not trying to run away from God—oversimplifies a profound biblical text, argues Amy Erickson. Likewise, the more recent understanding of Jonah as satire is problematic in its own right, laden as it is with anti-Jewish undertones and the superimposition of a Christian worldview onto a Jewish text. How can we move away from these stale interpretations to recover the richness of meaning that belongs to this short but noteworthy book of the Bible? This Illuminations commentary delves into Jonah’s reception history in Christian, Jewish, and Islamic contexts while also exploring its representations in visual arts, music, literature, and pop culture. After this thorough contextualization, Erickson provides a fresh translation and exegesis, paving the way for pastors and scholars to read and utilize the book of Jonah as the provocative, richly allusive, and theologically robust text that it is.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Short text understanding"

1

Miall, David. "Text and Affect: A Model of Story Understanding." In Re-reading the Short Story. Palgrave Macmillan UK, 1989. http://dx.doi.org/10.1007/978-1-349-10313-3_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shi, Qiuyan, Yongli Wang, Jianhong Sun, and Anmin Fu. "Short Text Understanding Based on Conceptual and Semantic Enrichment." In Advanced Data Mining and Applications. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-05090-0_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hu, Fei, Xiaofei Xu, Jingyuan Wang, Zhanbo Yang, and Li Li. "Memory-Enhanced Latent Semantic Model: Short Text Understanding for Sentiment Analysis." In Database Systems for Advanced Applications. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-55753-3_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Haixun. "Understanding Short Texts." In Web Technologies and Applications. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37401-2_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Schmucker, Robin, Meng Xia, Amos Azaria, and Tom Mitchell. "Ruffle & Riley: Insights from Designing and Evaluating a Large Language Model-Based Conversational Tutoring System." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-64302-6_6.

Full text
Abstract:
Conversational tutoring systems (CTSs) offer learning experiences through interactions based on natural language. They are recognized for promoting cognitive engagement and improving learning outcomes, especially in reasoning tasks. Nonetheless, the cost associated with authoring CTS content is a major obstacle to widespread adoption and to research on effective instructional design. In this paper, we discuss and evaluate a novel type of CTS that leverages recent advances in large language models (LLMs) in two ways: First, the system enables AI-assisted content authoring by inducing an easily editable tutoring script automatically from a lesson text. Second, the system automates the script orchestration in a learning-by-teaching format via two LLM-based agents (Ruffle & Riley) acting as a student and a professor. The system allows for free-form conversations that follow the ITS-typical inner and outer loop structure. We evaluate Ruffle & Riley's ability to support biology lessons in two between-subject online user studies (N = 200) comparing the system to simpler QA chatbots and reading activity. Analyzing system usage patterns, pre/post-test scores and user experience surveys, we find that Ruffle & Riley users report high levels of engagement, understanding and perceive the offered support as helpful. Even though Ruffle & Riley users require more time to complete the activity, we did not find significant differences in short-term learning gains over the reading activity. Our system architecture and user study provide various insights for designers of future CTSs. We further open-source our system to support ongoing research on effective instructional design of LLM-based learning technologies.
APA, Harvard, Vancouver, ISO, and other styles
6

Kononenko, Irina, and Serge Sharoff. "Understanding short texts with integration of knowledge representation methods." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-62064-8_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mazloum, Sherine F. "Reconsidering Cultural Identity in Zeinab Alkordy’s "Zahrat al-Janūb"." In Voices from Nubia. punctum books, 2024. http://dx.doi.org/10.53288/0476.1.05.

Full text
Abstract:
Research on Nubia has repeatedly referred to the issue of identity, especially the tendency of some Nubian writers to essentialize Nubian identity. Recent interest in minority studies contests postmodernist rejection of identity politics but warns against the reductionism of the essentialists who emphasize an intrinsic value to ethnic identity. Hence, the need to provide alternatives to the conflicting definitions of identity provided by both the post-modernists and the essentialists alike, which in turn resulted in the rise of a post-positivist realist theory of identity, which recognizes that identities are socially significant and context specific ideological constructs that have referential relationships to the world. This theory reconsiders the links between identity and experience offering a more theoretically productive position away from the two extremes proposed by the aforementioned approaches. This study analyzes Zeinab Alkordy’s collection of short stories entitled Zahrat Al-Janūb (Flower of the south, 1988) from the perspective of a post-positivist realist theory of identity with special reference to Satya Mohanty’s epistemic status of cultural identity. This theory enables a reading of the categories of ethnicity, sex, and socio-economic status in Alkordy’s text while taking into consideration how her characters adapt to changing circumstances. Examining Alkordy’s text from the perspective of Satya Mohanty’s “epistemic status” of cultural identity reveals that Alkordy focuses on her Nubian characters’ lives in the “real” world stressing the relations among “personal experiences, social meanings and cultural identities” (Mohanty 2000). Informed by Mohanty’s argument that personal experience is constructed but still yields knowledge, this chapter foregrounds how Zeinab Alkordy weaves the epistemic status of Nubian cultural identities, showing the closely interwoven mediation of the personal and the political/public. 
Indeed, Alkordy’s characters are all individuals who struggle with the challenges they face in their everyday lives. In a post-positivist context, the oppressed articulate their raced, gendered, classed, and nationalized experiences, mediating their identities as both constructed and real in a way that communicates epistemic privilege. Hence, a revised reading of Nubian experiences as depicted in Alkordy’s text allows an understanding of Nubian cultural identities beyond ahistorical essentialism or radical skepticism.
APA, Harvard, Vancouver, ISO, and other styles
8

Schoenmaker, Dirk, and Willem Schramade. "Reporting and Investor Relations." In Springer Texts in Business and Economics. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-35009-2_17.

Full text
Abstract:
Financial reporting and investor relations serve important roles as a means of communication between corporate management and the company’s stakeholders, including investors. This chapter outlines why reporting matters, and how it falls short. It also shows how integrated reporting (combining financial, social and environmental value) might be an improvement. Integrated reporting is about understanding how an organisation creates integrated value and how its activities affect the capitals (human, social and natural capitals, next to financial capital) it relies upon for this. Emerging international sustainability reporting standards will spur integrated reporting. Ultimately, integrated reporting is related to integrated thinking, which takes into account the connectivity and interdependencies between the financial, social, human and environmental capitals that affect an organisation’s ability to create integrated value over time. But investors are slow to ask questions about this new information, as their main focus is still on the financials.
APA, Harvard, Vancouver, ISO, and other styles
9

Stenner, A. Jackson, and Malbert Smith. "Testing Construct Theories." In Explanatory Models, Unit Standards, and Personalized Learning in Educational Measurement. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-3747-7_3.

Full text
Abstract:
This paper presents and illustrates a novel methodology, construct-specification equations, for examining the construct validity of a psychological instrument. Whereas traditional approaches have focused on the study of between-person variation on the construct, the suggested methodology emphasizes study of the relationships between item characteristics and item scores. The major thesis of the construct-specification-equation approach is that until developers of a psychological instrument understand what item characteristics are determining the item difficulties, the understanding of what is being measured is unsatisfyingly primitive. This method is illustrated with data from the Knox Cube Test which purports to be a measure of visual attention and short-term memory.
APA, Harvard, Vancouver, ISO, and other styles
10

Stenner, A. Jackson, Mark Stone, and Donald Burdick. "How to Model and Test for the Mechanisms That Make Measurement Systems Tick." In Explanatory Models, Unit Standards, and Personalized Learning in Educational Measurement. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-3747-7_15.

Full text
Abstract:
One must provide information about the conditions under which [the measurement outcome] would change or be different. It follows that the generalizations that figure in explanations [of measurement outcomes] must be change-relating… Both explainers [e.g., person parameters and item parameters] and what is explained [measurement outcomes] must be capable of change, and such changes must be connected in the right way (Woodward, 2003). Rasch’s unidimensional models for measurement tell us how to connect object measures, instrument calibrations, and measurement outcomes. Substantive theory tells us what interventions or changes to the instrument must offset a change to the measure for an object of measurement to hold the measurement outcome constant. Integrating a Rasch model with a substantive theory dictates the form and substance of permissible conjoint interventions. Rasch analysis absent construct theory and an associated specification equation is a black box in which understanding may be more illusory than not. The mere availability of numbers to analyze and statistics to report is often accepted as methodologically satisfactory in the social sciences, but falls far short of what is needed for a science.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Short text understanding"

1

Gao, Yanfang, and Fuxiang Gao. "Research on Short Text Speech Understanding Methods for Factory Applications." In 2024 4th International Conference on Electronic Information Engineering and Computer Science (EIECS). IEEE, 2024. https://doi.org/10.1109/eiecs63941.2024.10800630.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wretblad, Niklas, Fredrik Riseby, Rahul Biswas, Amin Ahmadi, and Oskar Holmström. "Understanding the Effects of Noise in Text-to-SQL: An Examination of the BIRD-Bench Benchmark." In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.acl-short.34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kamaya, Masayuki, Shunji Sakai, Nobuo Totsuka, and Nobuo Nakajima. "Estimation of Short Crack Growth Rate on PWSCC of Mill Annealed Alloy 600." In CORROSION 2000. NACE International, 2000. https://doi.org/10.5006/c2000-00213.

Full text
Abstract:
Abstract Understanding short crack behavior is important for constructing lifetime prediction models for light water reactor components. There are, however, limited methods of directly measuring short crack behavior, so some interpolation or extrapolation technique is needed to precisely evaluate the short crack growth rate. In this study, constant load tests were conducted to investigate the primary water stress corrosion cracking (PWSCC) short crack growth rate for different mill annealed alloy 600 samples at 350°C. Maximum crack length was measured for each sample and then divided by the relevant test duration to evaluate its crack growth rate. To determine more accurate crack growth rates, a crack growth simulation model was developed that allowed for the mechanical effects of grain boundaries on a crack kinked at a grain boundary triple point. With this simulation model, the crack growth rate when a crack grows along grain boundaries and is affected by them was evaluated.
APA, Harvard, Vancouver, ISO, and other styles
4

Brooks, Johnathon, Miriam Barber, and Haiping Lu. "Kinetic Turbidity Test Method for Scale Inhibitor Evaluation on Multifunctional Scales." In CORROSION 2021. AMPP, 2021. https://doi.org/10.5006/c2021-16959.

Full text
Abstract:
Abstract One of the critical approaches for scale control is the proper selection and use of scale inhibitors. Laboratory tests help to select the appropriate scale inhibitor; the most common testing methods include the static bottle test and the dynamic scaling loop test. Recently, the Kinetic Turbidity Test (KTT) has gained increased recognition as a new testing method for scale inhibitor evaluation due to its short testing time, simple sample preparation, and good reproducibility. There has been a good deal of research on KTT as a technique for multifunctional scales, including calcium carbonate and barium sulfate (especially low scaling brines), halite and silicates. In addition, KTT can examine the dispersant effects of polymers and surfactants on scale and other solids and can work under anaerobic conditions to give mechanistic understanding in the presence of iron. This paper discusses an alternative and efficient scale inhibitor testing method and gives insight into scale treatment chemistry and dosage by comparing and contrasting the different evaluation methods for scale inhibitors.
APA, Harvard, Vancouver, ISO, and other styles
5

Zhai, Ziqing, Mychailo Toloczko, and Stephen Bruemmer. "Crack Initiation for Alloy 600 and Alloy 690 During Constant Load Tests in Simulated Pressurized Water Reactor Primary Water." In CORROSION 2019. NACE International, 2019. https://doi.org/10.5006/c2019-13383.

Full text
Abstract:
Abstract Understanding stress corrosion crack (SCC) initiation of UNS N06600 (Alloy 600) and UNS N06690 (Alloy 690) is of critical importance for material degradation prediction and plant life management for existing pressurized water reactors. To address this issue, long-term SCC initiation testing has been conducted on both cold-worked (CW) Alloy 600 and Alloy 690 in 360°C simulated PWR primary water with crack initiation determined in-situ by direct current potential drop. SCC initiation testing has been completed on 7 heats of non-CW and CW Alloy 600 (33 specimens) in the mill-annealed (MA) or solution-annealed (SA) condition with ongoing tests on 7 heats of CW Alloy 690 (59 specimens) in the MA, SA or thermally-treated condition. Post-test examinations revealed that SCC initiation in Alloy 600 takes place through the growth and coalescence of short cracks from stress-assisted intergranular attack, while grain boundary creep cavities act as the key precursor to crack initiation in certain highly CW alloy 690 materials. Data to date shows that Alloy 600 SCC initiation susceptibility increased sharply with cold work level and applied stress for all CW heats exhibiting initiation times from 100-2000 hours. By comparison, highly CW Alloy 690 materials continue to show resistance to practical crack initiation as specimens reach exposure times of ~22,000 hours.
APA, Harvard, Vancouver, ISO, and other styles
6

Ashida, Yugo. "Phenomenal Pitting Corrosion on Coating Damaged Surface of Automotive Suspension Coil Springs." In CORROSION 2016. NACE International, 2016. https://doi.org/10.5006/c2016-07287.

Full text
Abstract:
Abstract This study focuses on a better understanding of the significant pitting corrosion observed on carbon steels, or low alloy steels, with damaged coating surfaces during salt spray testing for automotive applications. An anodic cyclic polarization test was used to evaluate the severity of pitting corrosion, and to estimate the corrosion rate of the raw materials. Corrosion potential Ecorr, pitting potential Epit, and pitting protection potential Epp were measured by conducting anodic cyclic potentiodynamic polarization (CPP) on 3 carbon steels. The scattering of the three potentials was observed to be within 35 mV on a 1 cm2 exposure surface of the 3 steels. To simulate the surface damage on automobile suspension coil springs and stabilizer bars, gravel shocking and a hardness indenter were used to generate damage on ZnP-pretreated and coated shot-peened surfaces. The corrosion potential evolution of the steel substrate, ZnP-pretreated surface, and damaged coating surface was monitored in 5% NaCl solution for 72 hours. The timing of pretreatment exposure and metal substrate exposure to the NaCl solution was noticeable. A pitting model and a correlation between pitting occurrence and fatigue cracking are further discussed.
APA, Harvard, Vancouver, ISO, and other styles
7

Hua, Wen, Zhongyuan Wang, Haixun Wang, Kai Zheng, and Xiaofang Zhou. "Short text understanding through lexical-semantic analysis." In 2015 IEEE 31st International Conference on Data Engineering (ICDE). IEEE, 2015. http://dx.doi.org/10.1109/icde.2015.7113309.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yang, Shansong, Weiming Lu, Dezhi Yang, Liang Yao, and Baogang Wei. "Short Text Understanding by Leveraging Knowledge into Topic Model." In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, 2015. http://dx.doi.org/10.3115/v1/n15-1131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

He, Yayun, Zuheng Kang, Jianzong Wang, Junqing Peng, and Jing Xiao. "Voiceextender: Short-Utterance Text-Independent Speaker Verification With Guided Diffusion Model." In 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2023. http://dx.doi.org/10.1109/asru57964.2023.10389784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Nandy, Abhilash, Yash Kulkarni, Pawan Goyal, and Niloy Ganguly. "Order-Based Pre-training Strategies for Procedural Text Understanding." In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.naacl-short.74.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Short text understanding"

1

Hammad, Ali, and Mohamed Moustafa. Seismic Behavior of Special Concentric Braced Frames under Short- and Long-Duration Ground Motions. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, 2019. http://dx.doi.org/10.55461/zont9308.

Full text
Abstract:
Over the past decade, several long-duration subduction earthquakes took place in different locations around the world, e.g., Chile in 2010, Japan in 2011, China in 2008, and Indonesia in 2004. Recent research has revealed that long-duration, large-magnitude earthquakes may occur along the Cascadia subduction zone of the Pacific Northwest Coast of the U.S. The duration of an earthquake often affects the response of structures. Current seismic design specifications mostly use response spectra to identify the hazard and do not consider duration effects. Thus, a comprehensive understanding of the effect of the duration of the ground motion on structural performance and its design implications is an important issue. The goal of this study was to investigate how the duration of an earthquake affects the structural response of special concentric braced frames (SCBFs). A comprehensive experimental program and detailed analytical investigations were conducted to understand and quantify the effect of duration on collapse capacity of SCBFs, with the goal of improving seismic design provisions by incorporating these effects. The experimental program included large-scale shake table tests, and the analytical program consisted of pre-test and post-test phases. The pre-test analysis phase performed a sensitivity analysis that used OpenSees models preliminarily calibrated against previous experimental results for different configuration of SCBFs. A tornado-diagram framework was used to rank the influence of the different modeling parameters, e.g., low-cycle fatigue, on the seismic response of SCBFs under short- and long-duration ground motions. Based on the results obtained from the experimental program, these models were revisited for further calibration and validation in the post-test analysis. The experimental program included three large-scale shake-table tests of identical single-story single-bay SCBF with a chevron-brace configuration tested under different ground motions. 
Two specimens were tested under a set of spectrally-matched short and long-duration ground motions. The third specimen was tested under another long-duration ground motion. All tests started with a 100% scale of the selected ground motions; testing continued with an ever-increasing ground-motion scale until failure occurred, e.g., until both braces ruptured. The shake table tests showed that the duration of the earthquake may lead to premature seismic failure or lower capacities, supporting the initiative to consider duration effects as part of the seismic design provisions. Identical frames failed at different displacement demands because of the damage accumulation associated with the earthquake duration, with about 40% reduction in the displacement capacity of the two specimens tested under long-duration earthquakes versus the short-duration one. Post-test analysis focused first on calibrating an OpenSees model to capture the experimental behavior of the test specimens. The calibration started by matching the initial stiffness and overall global response. Next, the low-cycle fatigue parameters were fine-tuned to properly capture the experimental local behavior, i.e., brace buckling and rupture. The post-test analysis showed that the input for the low-cycle fatigue models currently available in the literature does not reflect the observed experimental results. New values for the fatigue parameters are suggested herein based on the results of the three shake-table tests. The calibrated model was then used to conduct incremental dynamic analysis (IDA) using 44 pairs of spectrally-matched short- and long-duration ground motions. To compare the effect of the duration of ground motion, this analysis aimed at incorporating ground-motion variability for more generalized observations and developing collapse fragility curves using different intensity measures (IMs).
The difference in the median fragility was found to be 45% in the drift capacity at failure and about 10% in the spectral acceleration (Sa). Using regression analysis, the obtained drift capacity from analysis was found to be reduced by about 8% on average for every additional 10 sec in the duration of the ground motion. The last stage of this study extended the calibrated model to SCBF archetype buildings to study the effect of the duration of ground motion on full-sized structures. Two buildings were studied: a three-story and a nine-story building that resembled the original SAC buildings but were modified with SCBFs as the lateral support system instead of moment resisting frames. Two planar frames were adopted from the two buildings and used for the analysis. The same 44 spectrally-matched pairs previously used in post-test analysis were used to conduct nonlinear time history analysis and study the effect of duration. All the ground motions were scaled to two hazard levels for the deterministic time history analysis: 10% exceedance in 50 years and 2% exceedance in 50 years. All analysis results were interpreted in a comparative way to isolate the effect of duration, which was the main variable in the ground-motion pairs. In general, the results showed that the analyzed SCBFs experienced higher drift values under the long-duration suite of ground motions, and, in turn, a larger percentage of fractured braces under long-duration cases. The archetype SCBFs analysis provided similar conclusions on duration effects as the experimental and numerical results on the single-story single-bay frame.
APA, Harvard, Vancouver, ISO, and other styles
2

Vano, Julie, Tanya Petach, Jeffrey Deems, et al. A Collaborative, In Situ Mountain Hydrology NASA Test Bed. Aspen Global Change Institute, 2024. http://dx.doi.org/10.69925/vcbq9771.

Full text
Abstract:
Beginning primarily as snowmelt from the Rocky Mountains, the Colorado River supplies water to over 40 million people in seven U.S. states and Mexico. As demand for water grows and climate-driven drought threatens supply, there is an urgent need to advance decision-relevant hydrologic research in this region, which serves as an example for similarly positioned mountain headwaters around the world. Within this report we share the design for a collaborative process for testing innovative approaches to doing research—a test bed for short—that leverages existing research efforts and articulates strategies for accelerating the science resource managers are seeking to address this need. We designed this test bed by 1) engaging researchers and those who forecast, operate, and manage resources and 2) employing collaborative science expertise and network analysis. Our activities involved investigations into areas of untapped potential (including 15 events on a listening tour), the research landscape, and the user needs landscape, which we drew upon to design our proposed test bed. This test bed is built from a suite of recommendations (listed below) based on those explorations. The proposed test bed supports an approach to conducting mountain hydrology research that complements NASA science goals and that is centered on collaborations and strategic monitoring, modeling, and data science enhanced by local partners to:
- Accelerate understanding of mountain water cycles and improve forecasts in a rapidly changing world;
- Use long-term monitoring to calibrate, validate, complement, and enhance satellite data and land surface models; and
- Cultivate learning and community building among scientists, within and across institutions, and in collaboration with research users.
In general, we focus on systematic ways to build on what already exists (vs. creating something entirely new).
Through our work in designing the test bed, we utilize network analysis, user needs synthesis, and collaboration management (bringing people together in ways that support collaborative science)—tools that will also help to further refine and sustain the effort. This report develops a suite of broadly applicable recommendations for future work (summarized below), as well as action items more specific to the NASA Terrestrial Hydrology program.
APA, Harvard, Vancouver, ISO, and other styles
3

Golovko, Khrystyna. TRAVEL REPORT BY ALEKSANDER JANTA-POŁCZYNSKI «INTO THE USSR» (1932): FROG PERSPECTIVE. Ivan Franko National University of Lviv, 2021. http://dx.doi.org/10.30970/vjo.2021.50.11091.

Full text
Abstract:
The article analyzes a series of materials by Aleksander Janta-Polczynski, «Into the USSR», written in Soviet Russia in 1932 and published in «Wiadomości Literackiе». The purpose of this article is to explain the uniqueness of the reporter’s style and personality. We want to emphasize the role of Janta-Polczynski as a pioneer of reportage journalism: he was the first to work professionally in this role in the full sense of the word. Analyzing Janta-Polczynski’s cycle from Russia, we can emphasize the scale of the reporter’s trip: in 1932 the journalist made the largest journalistic trip to the USSR. Janta visited the Eastern republics, which differed from the popular Moscow and Leningrad. He also saw the largest construction projects in the USSR at that time, which Russian newspapers bragged about: Magnitogorsk and Dneprostroy. For better understanding, visual examples from the reportage texts are given. It should be noted that for Janta the reporter’s main task is to show what is seen and recorded: only facts and personal experience in communication. This cycle can safely be called both a journey and a social expedition. Janta’s main task at the scene where the reportage takes place is to find the proper characters and convince them of the importance of their story. These are the materials of a reporter who is an eyewitness, not a researcher: a report from the scene that pushes the reader to an independent conclusion. We show that all of Janta-Polczynski’s texts are inextricably linked by looking into the «middle» of the process: the diversity of what is seen allows the journalist to look for differences and similarities, compare, look at the fundamental components, track changes and distinguish them. Special attention was paid to the low-angle perspective in his materials. He describes how Soviet society lives, how factories work, and how the system of educating the Soviet person operates; he goes to the movies and exhibitions and communicates with ordinary citizens.
Undoubtedly, all this is successfully complemented by the factual detail and uniqueness of the author’s style.
APA, Harvard, Vancouver, ISO, and other styles
4

Ohad, Nir, and Robert Fischer. Regulation of Fertilization-Independent Endosperm Development by Polycomb Proteins. United States Department of Agriculture, 2004. http://dx.doi.org/10.32747/2004.7695869.bard.

Full text
Abstract:
The Arabidopsis mutants that we have isolated, in the fertilization-independent endosperm (fie), fertilization-independent seed2 (fis2) and medea (mea) genes, act in the female gametophyte and allow endosperm to develop without fertilization. We cloned the FIE and MEA genes and showed that they encode WD and SET domain polycomb (PcG) proteins, respectively. Homologous proteins of FIE and MEA in other organisms are known to regulate gene transcription by modulating chromatin structure. Based on our results, we proposed a model whereby both FIE and MEA interact to suppress transcription of regulatory genes. These genes are transcribed only at proper developmental stages, as in the central cell of the female gametophyte after fertilization, thus activating endosperm development. To test our model, the following questions were addressed: What is the Composition and Function of the Polycomb Complex? Molecular, biochemical, genetic and genomic approaches were proposed to identify members of the complex, analyze their interactions, and understand their function. What is the Temporal and Spatial Pattern of Polycomb Protein Accumulation? The use of transgenic plants expressing tagged FIE and MEA polypeptides as well as specific antibodies was proposed to localize the endogenous polycomb complex. How is Polycomb Protein Activity Controlled? To understand the molecular mechanism controlling the accumulation of FIE protein, transgenic plants as well as molecular approaches were proposed to determine whether FIE is regulated at the translational or posttranslational levels. The objectives of our research program have been accomplished and the results obtained exceeded our expectations. Our results reveal that fie and mea mutations cause parent-of-origin effects on seed development by distinct mechanisms (Publication 1). Moreover, our data show that FIE has additional functions besides controlling the development of the female gametophyte.
Using transgenic lines in which FIE was not expressed or the protein level was reduced during different developmental stages enabled us for the first time to explore FIE function during sporophyte development (Publications 2 and 3). Our results are consistent with the hypothesis that FIE, a single copy gene in the Arabidopsis genome, represses multiple developmental pathways (i.e., endosperm, embryogenesis, shoot formation and flowering). Furthermore, we identified FIE target genes, including key transcription factors known to promote flowering (AG and LFY) as well as shoot and leaf formation (KNAT1) (Publications 2 and 3), thus demonstrating that in plants, as in mammals and insects, PcG proteins control expression of homeobox genes. Using the yeast two-hybrid system and pull-down assays we demonstrated that FIE protein interacts with MEA via the N-terminal region (Publication 1). Moreover, CURLY LEAF protein, an additional member of the SET domain family, interacts with FIE as well. The overlapping expression patterns of FIE with either MEA or CLF, and their common mutant phenotypes, demonstrate the versatility of FIE function. FIE association with different SET domain polycomb proteins results in differential regulation of gene expression throughout the plant life cycle (Publication 3). In vitro interaction assays that we recently performed demonstrated that FIE interacts with the cell cycle regulatory component Retinoblastoma protein (pRb) (Publication 4). These results illuminate the potential mechanism by which FIE may restrain embryo sac central cell division, at least partly, through interaction with, and suppression of, pRb-regulated genes. The results of this program generated new information about the initiation of reproductive development and expanded our understanding of how PcG proteins regulate developmental programs along the plant life cycle.
The tools and information obtained in this program will lead to novel strategies which will allow to mange crop plants and to increase crop production.
APA, Harvard, Vancouver, ISO, and other styles
5

Shpigel, Nahum, Raul Barletta, Ilan Rosenshine, and Marcelo Chaffer. Identification and characterization of Mycobacterium paratuberculosis virulence genes expressed in vivo by negative selection. United States Department of Agriculture, 2004. http://dx.doi.org/10.32747/2004.7696510.bard.

Full text
Abstract:
Mycobacterium avium subsp. paratuberculosis (MAP) is the etiological agent of a severe inflammatory bowel disease (IBD) in ruminants, known as Johne’s disease or paratuberculosis. Johne’s disease is considered to be one of the most serious diseases affecting dairy cattle both in Israel and worldwide. Heavy economic losses are incurred by dairy farmers due to the severe effect of subclinical infection on milk production, fertility, lower disease resistance and early culling. Its influence in the United States alone is staggering, causing an estimated loss of $1.5 billion to the agriculture industry every year. Isolation of MAP from intestinal tissue and blood of Crohn's patients has led to concern that it plays a potential pathogenic role in promoting human IBD, including Crohn’s disease. There is great concern following the identification of the organism in animal products and shedding of the organism to the environment by subclinically infected animals. Little is known about the molecular basis for MAP virulence. The goal of the original proposed research was to identify MAP genes that are required for the critical stage of initial infection and colonization of ruminants’ intestine by MAP. We proposed to develop and use a signature-tagged mutagenesis (STM) screen to find MAP genes that are specifically required for survival in ruminants upon experimental infection. This research project was approved as a one-year feasibility study to prove the ability of the research team to establish the animal model for mutant screening and alternative in-vitro cell systems. In Israel, neonatal goat kids were repeatedly inoculated with one of the following organisms: the MAP K-10 strain and three transposon mutants of K-10 which were produced and screened by the US PI. Six months after the commencement of inoculation we necropsied the goats and took multiple tissue samples from the jejunum, ileum and mesenteric lymph nodes.
Both PCR and histopathology analysis indicated efficient MAP colonization of all the inoculated animals. We have established several systems in the Israeli PI’s laboratory; these include using IS900 PCR for the identification of MAP and using HSP65-based PCR for the differentiation between MAV and MAP. We used Southern blot analysis for the differentiation among transposon mutants of K-10. In addition, the Israeli PI has set up a panel of in-vitro screening systems for MAP mutants. These include assays to test adhesion, phagocytosis and survival of MAP to/within macrophages, assays that determine the rate of MAP-induced apoptosis of macrophages and MAP-induced NO production by macrophages, and assays testing the interference with T cell γ-interferon production and T cell proliferation by MAP-infected macrophages (macrophage studies were done in BoMac and RAW cell lines, mouse peritoneal macrophages and bovine peripheral blood monocyte-derived macrophages, respectively). All partners involved in this project feel that we are currently on track with this novel, highly challenging and ambitious research project. We have managed to establish the above-described research systems that will clearly enable us to achieve the originally proposed scientific objectives. We have proven ourselves as excellent collaborative groups with very high levels of complementary expertise. The Israeli groups were very fortunate to work with the US group and in a very short time period to master numerous techniques in the field of Mycobacterium research. The Israeli group has proven its ability to run this complicated animal model. This research, if continued, may elucidate new and basic aspects related to the pathogenesis of MAP. In addition, the work may identify new targets for vaccine and drug development.
Considering the possibility that MAP might be a cause of human Crohn’s disease, a better understanding of the virulence mechanisms of this organism might also be of public health interest.
APA, Harvard, Vancouver, ISO, and other styles
6

Lunn, Pete, Marek Bohacek, Jason Somerville, Áine Ní Choisdealbha, and Féidhlim McGowan. PRICE Lab: An Investigation of Consumers’ Capabilities with Complex Products. ESRI, 2016. https://doi.org/10.26504/bkmnext306.

Full text
Abstract:
Executive Summary This report describes a series of experiments carried out by PRICE Lab, a research programme at the Economic and Social Research Institute (ESRI) jointly funded by the Central Bank of Ireland, the Commission for Energy Regulation, the Competition and Consumer Protection Commission and the Commission for Communications Regulation. The experiments were conducted with samples of Irish consumers aged 18-70 years and were designed to answer the following general research question: At what point do products become too complex for consumers to choose accurately between the good ones and the bad ones? BACKGROUND AND METHODS PRICE Lab represents a departure from traditional methods employed for economic research in Ireland. It belongs to the rapidly expanding area of ‘behavioural economics’, which is the application of psychological insights to economic analysis. In recent years, behavioural economics has developed novel methods and generated many new findings, especially in relation to the choices made by consumers. These scientific advances have implications both for economics and for policy. They suggest that consumers often do not make decisions in the way that economists have traditionally assumed. The findings show that consumers have limited capacity for attending to and processing information and that they are prone to systematic biases, all of which may lead to disadvantageous choices. In short, consumers may make costly mistakes. Research has indeed documented that in several key consumer markets, including financial services, utilities and telecommunications, many consumers struggle to choose the best products for themselves. It is often argued that these markets involve ‘complex’ products. The obvious question that arises is whether consumer policy can be used to help them to make better choices when faced with complex products. 
Policies are more likely to be successful where they are informed by an accurate understanding of how real consumers make decisions between products. To provide evidence for consumer policy, PRICE Lab has developed a method for measuring the accuracy with which consumers make choices, using techniques adapted from the scientific study of human perception. The method allows researchers to measure how reliably consumers can distinguish a good deal from a bad one. A good deal is defined here as one where the product is more valuable than the price paid. In other words, it offers good value for money or, in the jargon of economics, offers the consumer a ‘surplus’. Conversely, a bad deal offers poor value for money, providing no (or a negative) surplus. PRICE Lab’s main experimental method, which we call the ‘Surplus Identification’ (S-ID) task, allows researchers to measure how accurately consumers can spot a surplus and whether they are prone to systematic biases. Most importantly, the S-ID task can be used to study how the accuracy of consumers’ decisions changes as the type of product changes. For the experiments we report here, samples of consumers arrived at the ESRI one at a time and spent approximately one hour doing the S-ID task with different kinds of products, which were displayed on a computer screen. They had to learn to judge the value of one or more products against prices and were then tested for accuracy. As well as people’s intrinsic motivation to do well when their performance on a task like this is tested, we provided an incentive: one in every ten consumers who attended PRICE Lab won a prize, based on their performance. Across a series of these experiments, we were able to test how the accuracy of consumers’ decisions was affected by the number and nature of the product’s characteristics, or ‘attributes’, which they had to take into account in order to distinguish good deals from bad ones. 
In other words, we were able to study what exactly makes for a ‘complex’ product, in the sense that consumers find it difficult to choose good deals. FINDINGS Overall, across all ten experiments described in this report, we found that consumers’ judgements of the value of products against prices were surprisingly inaccurate. Even when the product was simple, meaning that it consisted of just one clearly perceptible attribute (e.g. the product was worth more when it was larger), consumers required a surplus of around 16-26 per cent of the total price range in order to be able to judge accurately that a deal was a good one rather than a bad one. Put another way, when most people have to map a characteristic of a product onto a range of prices, they are able to distinguish at best between five and seven levels of value (e.g. five levels might be thought of as equivalent to ‘very bad’, ‘bad’, ‘average’, ‘good’, ‘very good’). Furthermore, we found that judgements of products against prices were not only imprecise, but systematically biased. Consumers generally overestimated what products at the top end of the range were worth and underestimated what products at the bottom end of the range were worth, typically by as much as 10-15 per cent and sometimes more. We then systematically increased the complexity of the products, first by adding more attributes, so that the consumers had to take into account two, three, then four different characteristics of the product simultaneously. One product might be good on attribute A, not so good on attribute B and available at just above the average price; another might be very good on A, middling on B, but relatively expensive. Each time the consumer’s task was to judge whether the deal was good or bad. We would then add complexity by introducing attribute C, then attribute D, and so on. Thus, consumers had to negotiate multiple trade-offs.
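The structure of a Surplus Identification (S-ID) trial as described above can be sketched in a few lines of code. This is only an illustrative simulation, not the PRICE Lab implementation: the function name `make_trial`, the unit values and the price range are all assumptions chosen for the sketch. It mirrors the report's setup in which each attribute contributes linearly to monetary value and a 'good deal' is one whose value exceeds its price (a positive surplus).

```python
import random

def make_trial(n_attributes, unit_value=25.0):
    """Generate one hypothetical S-ID trial (illustrative sketch only)."""
    # Each attribute level in [0, 1] adds value linearly -- a one-to-one
    # mapping, as in most of the experiments described in the report.
    attributes = [random.random() for _ in range(n_attributes)]
    value = sum(unit_value * a for a in attributes)
    # Draw the asking price from the same range the product's value can span.
    price = random.uniform(0.0, n_attributes * unit_value)
    surplus = value - price  # positive surplus => good deal
    return attributes, price, surplus > 0
```

Increasing `n_attributes` from one to four reproduces the key manipulation of the experiments: the participant must integrate more simultaneous contributions to value before comparing the total against the price.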
Performance deteriorated quite rapidly once multiple attributes were in play. Even the best performers could not integrate all of the product information efficiently – they became substantially more likely to make mistakes. Once people had to consider four product characteristics simultaneously, all of which contributed equally to the monetary value of the product, a surplus of more than half the price range was required for them to identify a good deal reliably. This was a fundamental finding of the present experiments: once consumers had to take into account more than two or three different factors simultaneously their ability to distinguish good and bad deals became strikingly imprecise. This finding therefore offered a clear answer to our primary research question: a product might be considered ‘complex’ once consumers must take into account more than two or three factors simultaneously in order to judge whether a deal is good or bad. Most of the experiments conducted after we obtained these strong initial findings were designed to test whether consumers could improve on this level of performance, perhaps for certain types of products or with sufficient practice, or whether the performance limits uncovered were likely to apply across many different types of product. An examination of individual differences revealed that some people were significantly better than others at judging good deals from bad ones. However, the differences were not large in comparison to the overall effects recorded; everyone tested struggled once there were more than two or three product attributes to contend with. People with high levels of numeracy and educational attainment performed slightly better than those without, but the improvement was small. We also found that neither the high level of imprecision nor the systematic bias was reduced substantially by giving people extended practice and opportunities to learn – any improvements were slow and incremental.
A series of experiments was also designed to test whether consumers’ capability was different depending on the type of product attribute. In our initial experiments the characteristics of the products were all visual (e.g., size, fineness of texture, etc.). We then performed similar experiments where the relevant product information was supplied as numbers (e.g., percentages, amounts) or in categories (e.g., Type A, Rating D, Brand X), to see whether performance might improve. This question is important, as most financial and contractual information is supplied to consumers in a numeric or categorical form. The results showed clearly that the type of product information did not matter for the level of imprecision and bias in consumers’ decisions – the results were essentially the same whether the product attributes were visual, numeric or categorical. What continued to drive performance was how many characteristics the consumer had to judge simultaneously. Thus, our findings were not the result of people failing to perceive or take in information accurately. Rather, the limiting factor in consumers’ capability was how many different factors they had to weigh against each other at the same time. In most of our experiments the characteristics of the product and its monetary value were related by a one-to-one mapping; each extra unit of an attribute added the same amount of monetary value. In other words, the relationships were all linear. Because other findings in behavioural economics suggest that consumers might struggle more with non-linear relationships, we designed experiments to test them. For example, the monetary value of a product might increase more when the amount of one attribute moves from very low to low, than when it moves from high to very high. We found that this made no difference to either the imprecision or bias in consumers’ decisions provided that the relationship was monotonic (i.e. 
the direction of the relationship was consistent, so that more or less of the attribute always meant more or less monetary value respectively). When the relationship involved a turning point (i.e. more of the attribute meant higher monetary value but only up to a certain point, after which more of the attribute meant less value) consumers’ judgements were more imprecise still. Finally, we tested whether familiarity with the type of product improved performance. In most of the experiments we intentionally used products that were new to the experimental participants. This was done to ensure experimental control and so that we could monitor learning. In the final experiment reported here, we used two familiar products (Dublin houses and residential broadband packages) and tested whether consumers could distinguish good deals from bad deals any better among these familiar products than they could among products that they had never seen before, but which had the same number and type of attributes and price range. We found that consumers’ performance was the same for these familiar products as for unfamiliar ones. Again, what primarily determined the amount of imprecision and bias in consumers’ judgments was the number of attributes that they had to balance against each other, regardless of whether these were familiar or novel. POLICY IMPLICATIONS There is a menu of consumer polices designed to assist consumers in negotiating complex products. A review, including international examples, is given in the main body of the report. The primary aim is often to simplify the consumer’s task. Potential policies, versions of which already exist in various forms and which cover a spectrum of interventionist strength, might include: the provision and endorsement of independent, transparent price comparison websites and other choice engines (e.g. 
mobile applications, decision software); the provision of high quality independent consumer advice; ‘mandated simplification’, whereby regulations stipulate that providers must present product information in a simplified and standardised format specifically determined by regulation; and more strident interventions such as devising and enforcing prescriptive rules and regulations in relation to permissible product descriptions, product features or price structures. The present findings have implications for such policies. However, while the experimental findings have implications for policy, it needs to be borne in mind that the evidence supplied here is only one factor in determining whether any given intervention in markets is likely to be beneficial. The findings imply that consumers are likely to struggle to choose well in markets with products consisting of multiple important attributes that must all be factored in when making a choice. Interventions that reduce this kind of complexity for consumers may therefore be beneficial, but nothing in the present research addresses the potential costs of such interventions, or how providers are likely to respond to them. The findings are also general in nature and are intended to give insights into consumer choices across markets. There are likely to be additional factors specific to certain markets that need to be considered in any analysis of the costs and benefits of a potential policy change. Most importantly, the policy implications discussed here are not specific to Ireland or to any particular product market. Furthermore, they should not be read as criticisms of existing regulatory regimes, which already go to some lengths in assisting consumers to deal with complex products. Ireland currently has extensive regulations designed to protect consumers, both in general and in specific markets, descriptions of which can be found in Section 9.1 of the main report. 
Nevertheless, the experiments described here do offer relevant guidance for future policy designs. For instance, they imply that while policies that make it easier for consumers to switch providers may be necessary to encourage active consumers, they may not be sufficient, especially in markets where products are complex. In order for consumers to benefit, policies that help them to identify better deals reliably may also be required, given the scale of inaccuracy in consumers’ decisions that we record in this report when products have multiple important attributes. Where policies are designed to assist consumer decisions, the present findings imply quite severe limits in relation to the volume of information consumers can simultaneously take into account. Good impartial consumer advice may limit the volume of information and focus on ensuring that the most important product attributes are recognised by consumers. The findings also have implications for the role of competition. While consumers may obtain substantial potential benefits from competition, their capabilities when faced with more complex products are likely to reduce such benefits. Pressure from competition requires sufficient numbers of consumers to spot and exploit better value offerings. Given our results, providers with larger market shares may face incentives to increase the complexity of products in an effort to dampen competitive pressure and generate more market power. Where marketing or pricing practices result in prices or attributes with multiple components, our findings imply that consumer choices are likely to become less accurate. Policymakers must of course be careful in determining whether such practices amount to legitimate innovations with potential consumer benefit. Yet there is a genuine danger that spurious complexity can be generated that confuses consumers and protects market power.
The results described here provide backing for the promotion and/or provision by policymakers of high-quality independent choice engines, including but not limited to price comparison sites, especially in circumstances where the number of relevant product attributes is high. A longer discussion of the potential benefits and caveats associated with such policies is contained in the main body of the report. Mandated simplification policies are gaining in popularity internationally. Examples include limiting the number of tariffs a single energy company can offer or standardising health insurance products, both of which are designed to simplify the comparisons between prices and/or product attributes. The present research has some implications for what might make a good mandate. Consumer decisions are likely to be improved where a mandate brings to the consumer’s attention the most important product attributes at the point of decision. The present results offer guidance with respect to how many key attributes consumers are able to trade off simultaneously, with implications for the design of standardised disclosures. While bearing in mind the potential for imposing costs, the results also suggest benefits to compulsory ‘meta-attributes’ (such as APRs, energy ratings, total costs, etc.), which may help consumers to integrate otherwise separate sources of information. FUTURE RESEARCH The experiments described here were designed to produce findings that generalise across multiple product markets. However, in addition to the results outlined in this report, the work has resulted in new experimental methods that can be applied to more specific consumer policy issues. This is possible because the methods generate experimental measures of the accuracy of consumers’ decision-making. As such, they can be adapted to assess the quality of consumers’ decisions in relation to specific products, pricing and marketing practices.
Work is underway in PRICE Lab that applies these methods to issues in specific markets, including those for personal loans, energy and mobile phones.
APA, Harvard, Vancouver, ISO, and other styles
7

Georgian Election Observatory (#GEObservatory24): Debunking pre-election propaganda narratives. FOJO media institute, Linnaeus University, 2024. https://doi.org/10.15626/fojo.i.202403.

Full text
Abstract:
The “Georgian Elections Observatory (#GEObservatory24)” was a short-term initiative aimed at fact-checking pre-election narratives leading up to the parliamentary elections on October 26, 2024, together with a few immediate post-election analyses. Unlike traditional fact-checking platforms, this project analyzed entire narratives, combining political analysis with media scrutiny to provide a comprehensive understanding of the pre-election discourse. The project was supported by the Swedish Fojo Media Institute, the Georgian Investigative Media Lab (IML), and the University of Georgia (UG) Security, Policy, and Nationalism Research Center (UGSPN). Dr. Michel Vincent Anderlini, an expert on Georgia’s EU politics, kindly agreed to write a foreword and to add some comments to the texts below (marked with MA in the footnotes).
APA, Harvard, Vancouver, ISO, and other styles
