Journal articles on the topic 'Coarse language'

Consult the top 50 journal articles for your research on the topic 'Coarse language.'

Browse the journal articles below on a wide variety of disciplines and organise your bibliography accordingly.
1

Diósi, Lajos. "Coarse graining and decoherence translated into von Neumann language." Physics Letters B 280, no. 1-2 (April 1992): 71–74. http://dx.doi.org/10.1016/0370-2693(92)90774-x.

2

Christiansen, Morten H., Pablo Contreras Kallens, and Fabio Trecca. "Toward a Comparative Approach to Language Acquisition." Current Directions in Psychological Science 31, no. 2 (February 23, 2022): 131–38. http://dx.doi.org/10.1177/09637214211049229.

Abstract:
The world’s languages vary in almost every conceivable way, yet children readily learn their native language. Understanding how children can acquire such a diversity of different languages has been a long-standing goal for psychological science, yet current acquisition research is dominated by studies of children learning one particular language: English. In this article, we argue that progress toward this goal will require systematic comparisons between different languages. We propose three levels of comparison: coarse-grained comparisons contrasting unrelated languages to confirm or refute broad theoretical claims, fine-grained comparisons between closely related languages to investigate the impact of specific factors on acquisition outcomes, and within-language comparisons targeting the impact of socio-communicative differences on learning. This three-pronged comparative approach to language acquisition promises to provide new insights into the mechanisms and processes by which children acquire their native tongue under such varied linguistic and socio-communicative conditions.
3

Landau, Barbara, and Ray Jackendoff. "“What” and “where” in spatial language and spatial cognition." Behavioral and Brain Sciences 16, no. 2 (June 1993): 217–38. http://dx.doi.org/10.1017/s0140525x00029733.

Abstract:
Fundamental to spatial knowledge in all species are the representations underlying object recognition, object search, and navigation through space. But what sets humans apart from other species is our ability to express spatial experience through language. This target article explores the language of objects and places, asking what geometric properties are preserved in the representations underlying object nouns and spatial prepositions in English. Evidence from these two aspects of language suggests there are significant differences in the geometric richness with which objects and places are encoded. When an object is named (i.e., with count nouns), detailed geometric properties – principally the object's shape (axes, solid and hollow volumes, surfaces, and parts) – are represented. In contrast, when an object plays the role of either "figure" (located object) or "ground" (reference object) in a locational expression, only very coarse geometric object properties are represented, primarily the main axes. In addition, the spatial functions encoded by spatial prepositions tend to be nonmetric and relatively coarse, for example, "containment," "contact," "relative distance," and "relative direction." These properties are representative of other languages as well. The striking differences in the way language encodes objects versus places lead us to suggest two explanations: First, there is a tendency for languages to level out geometric detail from both object and place representations. Second, a nonlinguistic disparity between the representations of "what" and "where" underlies how language represents objects and places. The language of objects and places converges with and enriches our understanding of corresponding spatial representations.
4

Filho, J. O., S. Masekowsky, T. Schweizer, and W. Rosenstiel. "CGADL: An Architecture Description Language for Coarse-Grained Reconfigurable Arrays." IEEE Transactions on Very Large Scale Integration (VLSI) Systems 17, no. 9 (September 2009): 1247–59. http://dx.doi.org/10.1109/tvlsi.2008.2002429.

5

Warnia Nengsih, M. Mahrus Zein, and Nazifa Hayati. "Coarse-Grained Sentiment Analysis Berbasis Natural Language Processing – Ulasan Hotel." Jurnal Nasional Teknik Elektro dan Teknologi Informasi 10, no. 1 (February 25, 2021): 41–48. http://dx.doi.org/10.22146/jnteti.v10i1.548.

Abstract:
Sentiment analysis is a method for obtaining data from the various platforms available on the internet. Advances in technology allow machines to recognise whether a given term expresses a positive or a negative opinion. These data and opinions play an important role as feedback on products, services, and other topics: without having to gather opinions directly from the public, providers obtain evaluations that are important for their own development. The hotel business is a service industry, and its viability likewise depends on customer feedback, which serves as a reference for strategic decision-making. Sentiment analysis techniques based on Natural Language Processing can address this problem. In this paper, prediction is performed with a Random Forest (RF) classifier, and the classifier's quality is summarised with a Receiver Operating Characteristic (ROC) curve: the higher the curve lies above the diagonal, the better the prediction. The ROC value obtained is 0.90. The reviews show more positive than negative customer opinions of the services provided by the hotel, with 68% of customer reviews falling in the positive area and 32% in the negative area.
6

Bessière, Christian, Jean-Charles Régin, Roland H. C. Yap, and Yuanlin Zhang. "An optimal coarse-grained arc consistency algorithm." Artificial Intelligence 165, no. 2 (July 2005): 165–85. http://dx.doi.org/10.1016/j.artint.2005.02.004.

7

Pierrehumbert, Janet. "Why phonological constraints are so coarse-grained." Language and Cognitive Processes 16, no. 5-6 (October 2001): 691–98. http://dx.doi.org/10.1080/01690960143000218.

8

Beeman, Mark, Rhonda B. Friedman, Jordan Grafman, Enrique Perez, Sherri Diamond, and Miriam Beadle Lindsay. "Summation Priming and Coarse Semantic Coding in the Right Hemisphere." Journal of Cognitive Neuroscience 6, no. 1 (January 1994): 26–45. http://dx.doi.org/10.1162/jocn.1994.6.1.26.

Abstract:
There are now numerous observations of subtle right hemisphere (RH) contributions to language comprehension. It has been suggested that these contributions reflect coarse semantic coding in the RH. That is, the RH weakly activates large semantic fields—including concepts distantly related to the input word—whereas the left hemisphere (LH) strongly activates small semantic fields—limited to concepts closely related to the input (Beeman, 1993a,b). This makes the RH less effective at interpreting single words, but more sensitive to semantic overlap of multiple words. To test this theory, subjects read target words preceded by either "Summation" primes (three words each weakly related to the target) or Unrelated primes (three unrelated words), and target exposure duration was manipulated so that subjects correctly named about half the target words in each hemifield. In Experiment 1, subjects benefited more from Summation primes when naming target words presented to the left visual field-RH (lvf-RH) than when naming target words presented to the right visual field-LH (rvf-LH), suggesting a RH advantage in coarse semantic coding. In Experiment 2, with a low proportion of related prime-target trials, subjects benefited more from "Direct" primes (one strong associate flanked by two unrelated words) than from Summation primes for rvf-LH target words, indicating that the LH activates closely related information much more strongly than distantly related information. Subjects benefited equally from both prime types for lvf-RH target words, indicating that the RH activates closely related information only slightly more strongly, at best, than distantly related information. This suggests that the RH processes words with relatively coarser coding than the LH, a conclusion consistent with a recent suggestion that the RH coarsely codes visual input (Kosslyn, Chabris, Marsolek, & Koenig, 1992).
9

Wasserscheidt, Philipp. "Explaining Code-Switching. Matrix Language Models vs. Bilingual Construction Grammar." Književni jezik, no. 31 (December 2020): 57–87. http://dx.doi.org/10.33669/kj2020-31-04.

Abstract:
This paper challenges the concept of a matrix, base or basic language used in many descriptions and models of insertional code-switching. It proposes an account based on Construction Grammar and usage-based principles. At the heart of the paper is a discussion of four problematic issues with matrix-language approaches: the unitary conception of the notion of language, the generalization that syntactic frames mirror languages, the missing independent evidence for a matrix language, and the narrow scope of the models that employ this term. The proposed approach of Bilingual Construction Grammar instead operates with a more complex, usage-based concept of language affiliation and places constructions at the centre of speech production. It thus avoids overly coarse global predictions in favour of construction-specific predictions. In this way, the matrix-language effect can be reinterpreted as a by-product of constructional processing. Instead of using the term matrix language, it is thus more appropriate to speak of matrix constructions.
10

Jamatia, Anupam, Amitava Das, and Björn Gambäck. "Deep Learning-Based Language Identification in English-Hindi-Bengali Code-Mixed Social Media Corpora." Journal of Intelligent Systems 28, no. 3 (July 26, 2019): 399–408. http://dx.doi.org/10.1515/jisys-2017-0440.

Abstract:
This article addresses language identification at the word level in Indian social media corpora taken from Facebook, Twitter and WhatsApp posts that exhibit code-mixing between English-Hindi, English-Bengali, as well as a blend of both language pairs. Code-mixing is a fusion of multiple languages previously mainly associated with spoken language, but which social media users also deploy when communicating in ways that tend to be rather casual. The coarse nature of code-mixed social media text makes language identification challenging. Here, the performance of deep learning on this task is compared to feature-based learning, with two Recurrent Neural Network techniques, Long Short-Term Memory (LSTM) and bidirectional LSTM, being contrasted to a Conditional Random Fields (CRF) classifier. The results show the deep learners outscoring the CRF, with the bidirectional LSTM demonstrating the best language identification performance.
11

Metuki, Nili, Shani Sinkevich, and Michal Lavidor. "Lateralization of semantic processing is shaped by exposure to specific mother tongues: The case of insight problem solving by bilingual and monolingual native Hebrew speakers." Bilingualism: Language and Cognition 16, no. 4 (February 15, 2013): 900–913. http://dx.doi.org/10.1017/s1366728913000023.

Abstract:
Solving insight problems is a complex task found to involve coarse semantic processing in the right hemisphere when tested in English. In Hebrew, the left hemisphere (LH) may be more active in this task, due to the inter-hemispheric interaction between semantic, phonological and orthographic processing. In two Hebrew insight-problem experiments, we revealed a performance advantage in the LH, in contrast to the patterns previously observed in English. A third experiment, conducted in English with early Hebrew–English bilinguals, confirmed that the LH advantage found with Hebrew speakers does not depend on specific task requirements in Hebrew. We suggest that Hebrew speakers show redundancy between the hemispheres in coarse semantic processing in handling frequent lexical ambiguities stemming from the orthographic structure of Hebrew. We further suggest that inter-hemispheric interactions between linguistic and non-linguistic processes may determine the hemisphere in which coarse coding will take place. These findings highlight the possible effect of exposure to a specific mother tongue on the lateralization of processes in the brain, and carry theoretical and methodological implications for cross-language studies.
12

Prabowo, Dimas Setiaji, and Mulyana Mulyana. "Bahasa kasar dialek Banyumasan." LingTera 5, no. 2 (October 31, 2018): 99–111. http://dx.doi.org/10.21831/lt.v5i2.17819.

Abstract:
The aim of this research was to explain the coarse language of the Banyumasan dialect in Kedungreja village, Cilacap Regency. The study describes the form, referents, and functions of the coarse language of the Banyumasan dialect used there, employing a descriptive research design. The data were the coarse words of the Banyumasan dialect used by the people of Kedungreja village, and the data source was the villagers' everyday speech in places with frequent interaction among speakers, such as the market, youth-association gatherings, and the neighbourhood watch post (pos kamling). Data were gathered through observation, tapping, recording, and note-taking techniques, and analysed with a socio-pragmatic analysis technique. Validity was established through theoretical triangulation, semantic validity, and expert judgement, and reliability through stability reliability. The results describe the form, referents, and functions of the coarse language of the Banyumasan dialect in Kedungreja village. Its forms are base words, affixed words, and phrases; its types are noun phrases, adjective phrases, and verb phrases. Its referents include animal names, body parts, kinds of food, objects, filth, a person's condition, particular states, and particular activities. Its functions are to express anger, irritation, disappointment, regret, and astonishment, and to insult others.
13

Fabry, Jan, and Wilken Engelbrecht. "Vandaag heeft hij weer een pesthumeur, wat een klerelijer is dat toch! Over de historische invloed van pandemieën op Nederlandse verwensingen." Roczniki Humanistyczne 69, no. 5 Zeszyt specjalny (December 30, 2021): 43–53. http://dx.doi.org/10.18290/rh21696s-3.

Abstract:
Recent literature on politeness in language has observed that people nowadays swear more and use coarse language in which disease terms clearly prevail. This article aims to investigate the influence of past pandemics on Dutch. After a short historical introduction, the lexical traces of the various pandemics are discussed.
14

Palmer, Martha, Hoa Trang Dang, and Christiane Fellbaum. "Making fine-grained and coarse-grained sense distinctions, both manually and automatically." Natural Language Engineering 13, no. 2 (July 12, 2006): 137–63. http://dx.doi.org/10.1017/s135132490500402x.

Abstract:
In this paper we discuss a persistent problem arising from polysemy: namely the difficulty of finding consistent criteria for making fine-grained sense distinctions, either manually or automatically. We investigate sources of human annotator disagreements stemming from the tagging for the English Verb Lexical Sample Task in the SENSEVAL-2 exercise in automatic Word Sense Disambiguation. We also examine errors made by a high-performing maximum entropy Word Sense Disambiguation system we developed. Both sets of errors are at least partially reconciled by a more coarse-grained view of the senses, and we present the groupings we use for quantitative coarse-grained evaluation as well as the process by which they were created. We compare the system's performance with our human annotator performance in light of both fine-grained and coarse-grained sense distinctions and show that well-defined sense groups can be of value in improving word sense disambiguation by both humans and machines.
15

Olejarczuk, Paul, and Vsevolod Kapatsinski. "The metrical parse is guided by gradient phonotactics." Phonology 35, no. 3 (August 2018): 367–405. http://dx.doi.org/10.1017/s0952675718000106.

Abstract:
Phonotactic generalisations can be computed at different levels of granularity, from a coarse-grained legal/illegal dichotomy (blick, dwick ≻ *bnick, *lbick) to a fine-grained gradient of acceptability (blick ≻ dwick ≻ bnick ≻ lbick). This article investigates the sensitivity of the English metrical parse to the granularity of medial onset phonotactics. We present two experiments that feature pseudo-words with medial consonants and CC clusters varying in word-edge frequency and sonority (e.g. vatablick, vatadwick, vatabnick, vatalbick). The metrical parse is inferred from a hyphenation experiment and an online stress-assignment experiment. The results of both studies indicate that the parse is stochastic, and guided by relatively fine-grained phonotactic dependencies. Vocabulary simulations suggest that this level of granularity may arise because the gradient parser consistently outperforms the coarse-grained alternative across the developing lexicon.
16

Lagerkvist, Victor, and Magnus Wahlström. "The (Coarse) Fine-Grained Structure of NP-Hard SAT and CSP Problems." ACM Transactions on Computation Theory 14, no. 1 (March 31, 2022): 1–54. http://dx.doi.org/10.1145/3492336.

Abstract:
We study the fine-grained complexity of NP-complete satisfiability (SAT) problems and constraint satisfaction problems (CSPs) in the context of the strong exponential-time hypothesis (SETH), showing non-trivial lower and upper bounds on the running time. Here, by a non-trivial lower bound for a problem SAT(Γ) (respectively CSP(Γ)) with constraint language Γ, we mean a value c_0 > 1 such that the problem cannot be solved in time O(c^n) for any c < c_0 unless SETH is false, while a non-trivial upper bound is simply an algorithm for the problem running in time O(c^n) for some c < 2. Such lower bounds have proven extremely elusive, and except for cases where c_0 = 2 effectively no such previous bound was known. We achieve this by employing an algebraic framework, studying constraint languages Γ in terms of their algebraic properties. We uncover a powerful algebraic framework where a mild restriction on the allowed constraints offers a concise algebraic characterization. On the relational side we restrict ourselves to Boolean languages closed under variable negation and partial assignment, called sign-symmetric languages. On the algebraic side this results in a description via partial operations arising from systems of identities, with a close connection to operations resulting in tractable CSPs, such as near-unanimity operations and edge operations. Using this connection we construct improved algorithms for several interesting classes of sign-symmetric languages, and prove explicit lower bounds under SETH. Thus, we find the first example of an NP-complete SAT problem with a non-trivial algorithm which also admits a non-trivial lower bound under SETH. This suggests a dichotomy conjecture with a close connection to the CSP dichotomy theorem: an NP-complete SAT problem admits an improved algorithm if and only if it admits a non-trivial partial invariant of the above form.
17

Nowakowski, Karol, Michal Ptaszynski, and Fumito Masui. "MiNgMatch—A Fast N-gram Model for Word Segmentation of the Ainu Language." Information 10, no. 10 (October 16, 2019): 317. http://dx.doi.org/10.3390/info10100317.

Abstract:
Word segmentation is an essential task in automatic language processing for languages where there are no explicit word boundary markers, or where space-delimited orthographic words are too coarse-grained. In this paper we introduce the MiNgMatch Segmenter—a fast word segmentation algorithm, which reduces the problem of identifying word boundaries to finding the shortest sequence of lexical n-grams matching the input text. In order to validate our method in a low-resource scenario involving extremely sparse data, we tested it with a small corpus of text in the critically endangered language of the Ainu people living in northern parts of Japan. Furthermore, we performed a series of experiments comparing our algorithm with systems utilizing state-of-the-art lexical n-gram-based language modelling techniques (namely, Stupid Backoff model and a model with modified Kneser-Ney smoothing), as well as a neural model performing word segmentation as character sequence labelling. The experimental results we obtained demonstrate the high performance of our algorithm, comparable with the other best-performing models. Given its low computational cost and competitive results, we believe that the proposed approach could be extended to other languages, and possibly also to other Natural Language Processing tasks, such as speech recognition.
18

Beck, Daniel, Trevor Cohn, Christian Hardmeier, and Lucia Specia. "Learning Structural Kernels for Natural Language Processing." Transactions of the Association for Computational Linguistics 3 (December 2015): 461–73. http://dx.doi.org/10.1162/tacl_a_00151.

Abstract:
Structural kernels are a flexible learning paradigm that has been widely used in Natural Language Processing. However, the problem of model selection in kernel-based methods is usually overlooked. Previous approaches mostly rely on setting default values for kernel hyperparameters or using grid search, which is slow and coarse-grained. In contrast, Bayesian methods allow efficient model selection by maximizing the evidence on the training data through gradient-based methods. In this paper we show how to perform this in the context of structural kernels by using Gaussian Processes. Experimental results on tree kernels show that this procedure results in better prediction performance compared to hyperparameter optimization via grid search. The framework proposed in this paper can be adapted to other structures besides trees, e.g., strings and graphs, thereby extending the utility of kernel-based methods.
19

Jamatia, Anupam, Steve Durairaj Swamy, Björn Gambäck, Amitava Das, and Swapan Debbarma. "Deep Learning Based Sentiment Analysis in a Code-Mixed English-Hindi and English-Bengali Social Media Corpus." International Journal on Artificial Intelligence Tools 29, no. 05 (August 2020): 2050014. http://dx.doi.org/10.1142/s0218213020500141.

Abstract:
Sentiment analysis is a circumstantial analysis of text, identifying the social sentiment to better understand the source material. The article addresses sentiment analysis of an English-Hindi and English-Bengali code-mixed textual corpus collected from social media. Code-mixing is an amalgamation of multiple languages, previously mainly associated with spoken language. However, social media users also deploy it to communicate in ways that tend to be somewhat casual. The coarse nature of social media text poses challenges for many language processing applications. Here, the focus is on the low predictive performance of traditional machine learners when compared to Deep Learning counterparts, including the contextual language representation model BERT (Bidirectional Encoder Representations from Transformers), on the task of extracting user sentiment from code-mixed texts. Three deep learners (a BiLSTM CNN, a Double BiLSTM and an Attention-based model) attained accuracies 20–60% greater than traditional approaches on code-mixed data, and for comparison were also tested on monolingual English data.
20

Arnon, Inbal. "The Starting Big approach to language learning." Journal of Child Language 48, no. 5 (July 5, 2021): 937–58. http://dx.doi.org/10.1017/s0305000921000386.

Abstract:
The study of language acquisition has a long and contentious history: researchers disagree on what drives this process, the relevant data, and the interesting questions. Here, I outline the Starting Big approach to language learning, which emphasizes the role of multiword units in language, and of coarse-to-fine processes in learning. I outline core predictions and supporting evidence. In short, the approach argues that multiword units are integral building blocks in language; that such units can facilitate mastery of semantically opaque relations between words; and that adults rely on them less than children, which can explain (some of) their difficulty in learning a second language. The Starting Big approach is a theory of how children learn language, how language is represented, and how to explain differences between first and second language learning. I discuss the learning and processing models at the heart of the approach and their cross-linguistic implications.
21

Tomlin, Russell S., and Victor Villa. "Attention in Cognitive Science and Second Language Acquisition." Studies in Second Language Acquisition 16, no. 2 (June 1994): 183–203. http://dx.doi.org/10.1017/s0272263100012870.

Abstract:
This paper examines how the cognitive notion of attention has been employed in SLA and how it is understood in cognitive science. It summarizes recent research on attention from cognitive and neuroscience approaches. Some reformulations of problems raised in SLA research related to attention are proposed. Current research offers detailed ideas about attention and its component processes. These ideas, elaborated theoretically and empirically in cognitive neuroscience, may help untangle some important but difficult issues in SLA. Early, coarse-grained conceptions of attention, such as the limited-capacity metaphor or the automatic versus controlled processing dichotomy, are recast into an integrated human attention system with three separate yet interrelated networks: alertness, orientation, and detection. This finer grained analysis of attention is employed in a model of the role of attention in SLA.
22

Simončič, Samo, Melita Kompolšek, and Primož Podržaj. "AN ADVANCED COARSE-FINE SEARCH APPROACH FOR DIGITAL IMAGE CORRELATION APPLICATIONS." Facta Universitatis, Series: Mechanical Engineering 14, no. 1 (April 1, 2016): 63. http://dx.doi.org/10.22190/fume1601063s.

Abstract:
The paper presents a newly developed fine search algorithm for digital image correlation applications. In order to evaluate its performance, a special-purpose application was developed using the C# programming language. The algorithm was then tested on a pre-prepared set of computer-generated speckle images. It turned out to be much faster than the conventional fine search algorithm, and is consequently a major step forward in the never-ending quest for fast digital image correlation execution with sub-pixel accuracy.
23

Jeník, Jan. "Oronyms of Central European Mountains Divided by national boundaries." Geografie 103, no. 2 (1998): 101–7. http://dx.doi.org/10.37040/geografie1998103020101.

Abstract:
Much confusion is encountered in coarse-scale maps in atlases from English-speaking countries as regards oronyms of the mountains situated at the edge of the Bohemian Massif. Political and administrative boundaries often cut across these regions. There is no general rule: either only transboundary oronyms spelled in a randomly chosen language are shown, or national oronyms along the political boundaries are used. The oronyms belonging to the "Bohemian Forest" take a number of different forms; in the "Ore Mountains" mostly the German name "Erzgebirge" is used. In the region along the Czech/German/Polish boundary the transboundary oronym "Sudetes", in any of its four language forms, is used.
24

Qi, Wu, Sun Suyu, Gao Guangliang, Fang Yi, and Chen Guoxing. "3D Morphology Distribution Characteristics and Discrete Element Simulation of Sand-Gravel Mixtures." Geofluids 2021 (November 22, 2021): 1–10. http://dx.doi.org/10.1155/2021/7101900.

Full text
Abstract:
Sand-gravel mixtures are typical binary materials, exhibiting high heterogeneity, discontinuity, and significant structural effects. The contact state between sand and gravel particles has a significant influence on the mechanical properties of the mixtures. This article focuses on the complex internal structure and mesostructural behavior of the mixtures, and a systematic statistical analysis was carried out to study the shape, size, and angularity of the coarse particles. The three-dimensional (3D) shapes of coarse aggregates were approximated as hexahedra, pentahedra, and tetrahedra. An indicator called the angularity and surface texture (AT) index was developed to characterize the combined effect of coarse aggregate angularity and surface texture. Based on screening tests and digital image processing, the particle size and AT index of the aggregates were extracted, and their means, standard deviations, and statistical distributions were studied. An algorithm for generating 3D aggregates was developed based on the statistical results of the coarse aggregate 3D morphology. The coarse aggregate generation code was written in the FISH language in PFC3D. The numerical model was then applied to conduct three typical monotonic or cyclic triaxial test simulations. Retrospective simulation of the laboratory tests using the proposed model showed good agreement, effectively verifying the reliability of the model. The results explain the mechanism of particle motion and the distribution of interparticle contact force during shearing at the mesoscale of the mixtures, which supports better understanding and modeling of the nonlinear behavior of sand-gravel mixtures.
APA, Harvard, Vancouver, ISO, and other styles
25

Jim, Kam-Chuen, and C. Lee Giles. "Talking Helps: Evolving Communicating Agents for the Predator-Prey Pursuit Problem." Artificial Life 6, no. 3 (July 2000): 237–54. http://dx.doi.org/10.1162/106454600568861.

Full text
Abstract:
We analyze a general model of multi-agent communication in which all agents communicate simultaneously to a message board. A genetic algorithm is used to evolve multi-agent languages for the predator agents in a version of the predator-prey pursuit problem. We show that the resulting behavior of the communicating multi-agent system is equivalent to that of a Mealy finite state machine whose states are determined by the agents' usage of the evolved language. Simulations show that the evolution of a communication language improves the performance of the predators. Increasing the language size (and thus increasing the number of possible states in the Mealy machine) improves the performance even further. Furthermore, the evolved communicating predators perform significantly better than all previous work on similar prey. We introduce a method for incrementally increasing the language size, which results in an effective coarse-to-fine search that significantly reduces the evolution time required to find a solution. We present some observations on the effects of language size, experimental setup, and prey difficulty on the evolved Mealy machines. In particular, we observe that the start state is often revisited, and incrementally increasing the language size results in smaller Mealy machines. Finally, a simple rule is derived that provides a pessimistic estimate on the minimum language size that should be used for any multi-agent problem.
APA, Harvard, Vancouver, ISO, and other styles
26

Koby, Geoffrey S. "Revising Biblical Translation: Luther's Lexical Choices in Matthew between 1522 (Septembertestament) and 1545, Compared with the Greek Source Text." American Journal of Germanic Linguistics and Literatures 7, no. 2 (1995): 207–46. http://dx.doi.org/10.1017/s1040820700001608.

Full text
Abstract:
After Martin Luther first translated and published the New Testament in 1522, he immediately began the work of revision—work that would last through his lifetime and beyond. Working with a group of biblical scholars, he made thousands of changes to the text, continuing until his death in 1546. Although some critics have seen Luther's earlier language as vulgar and coarse—particularly in the Gospels— and have suggested that he refined his language over time, others suggest that a more differentiated view is necessary. This article examines the lexical differences in the Gospel of Matthew between the Septembertestament of 1522 and the last Bible published during Luther's lifetime, in 1545. Major lexical changes are compared with the Greek source text, and assigned to three major classes: (I) changes that bring the translation closer to the original Greek meaning; (II) changes that diverge from a close rendering of the source text, for comprehension or esthetic reasons; and (III) changes that are neutral with regard to the source, originating from target language (German) considerations. Most major changes arise from either the source text or understandability considerations. The original lexical choices in the 1522 version are not as coarse or extreme as some have suggested.
APA, Harvard, Vancouver, ISO, and other styles
27

Fang, Kuncheng, Lian Zhou, Cheng Jin, Yuejie Zhang, Kangnian Weng, Tao Zhang, and Weiguo Fan. "Fully Convolutional Video Captioning with Coarse-to-Fine and Inherited Attention." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8271–78. http://dx.doi.org/10.1609/aaai.v33i01.33018271.

Full text
Abstract:
Automatically generating natural language descriptions for video is an extremely complicated and challenging task. To tackle the obstacles of traditional LSTM-based models for video captioning, we propose a novel architecture to generate optimal descriptions for videos, which focuses on constructing a new network structure that can generate sentences superior to the basic LSTM model, and on establishing special attention mechanisms that provide more useful visual information for caption generation. This scheme discards the traditional LSTM and exploits a fully convolutional network with coarse-to-fine and inherited attention designed according to the characteristics of the fully convolutional structure. Our model not only outperforms the basic LSTM-based model but also achieves performance comparable to state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
28

Mairesse, François, and Steve Young. "Stochastic Language Generation in Dialogue using Factored Language Models." Computational Linguistics 40, no. 4 (December 2014): 763–99. http://dx.doi.org/10.1162/coli_a_00199.

Full text
Abstract:
Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank candidate utterances produced by an existing generator, or (b) using a statistical model to drive the generation decisions of an existing generator. Both approaches rely on the existence of a handcrafted generation component, which is likely to limit their scalability to new domains. The first contribution of this article is to present Bagel, a fully data-driven generation method that treats the language generation task as a search for the most likely sequence of semantic concepts and realization phrases, according to Factored Language Models (FLMs). As domain utterances are not readily available for most natural language generation tasks, a large creative effort is required to produce the data necessary to represent human linguistic variation for nontrivial domains. This article is based on the assumption that learning to produce paraphrases can be facilitated by collecting data from a large sample of untrained annotators using crowdsourcing—rather than a few domain experts—by relying on a coarse meaning representation. A second contribution of this article is to use crowdsourced data to show how dialogue naturalness can be improved by learning to vary the output utterances generated for a given semantic input. Two data-driven methods for generating paraphrases in dialogue are presented: (a) by sampling from the n-best list of realizations produced by Bagel's FLM reranker; and (b) by learning a structured perceptron predicting whether candidate realizations are valid paraphrases. We train Bagel on a set of 1,956 utterances produced by 137 annotators, which covers 10 types of dialogue acts and 128 semantic concepts in a tourist information system for Cambridge. An automated evaluation shows that Bagel outperforms utterance class LM baselines on this domain. A human evaluation of 600 resynthesized dialogue extracts shows that Bagel's FLM output produces utterances comparable to a handcrafted baseline, whereas the perceptron classifier performs worse. Interestingly, human judges find the system sampling from the n-best list to be more natural than a system always returning the first-best utterance. The judges are also more willing to interact with the n-best system in the future. These results suggest that capturing the large variation found in human language using data-driven methods is beneficial for dialogue interaction.
APA, Harvard, Vancouver, ISO, and other styles
29

Perisic, Branko, Gordana Milosavljevic, Igor Dejanovic, and Branko Milosavljevic. "UML profile for specifying user interfaces of business applications." Computer Science and Information Systems 8, no. 2 (2011): 405–26. http://dx.doi.org/10.2298/csis110112010p.

Full text
Abstract:
This paper presents an approach to automatic user interface code generation that is based on an internal HCI standard that defines layout and behaviour of coarse-grained objects for enterprise business applications. A domain-specific language (in the form of a UML profile) based on the concepts introduced by the HCI standard facilitates efficient modeling and generation of fully-functional UIs. Being a regular UML extension, this language can be used in any general-purpose UML modeling tool and can easily be integrated with other UML-based models of the application.
APA, Harvard, Vancouver, ISO, and other styles
30

LAURE, ERWIN, PIYUSH MEHROTRA, and HANS ZIMA. "OPUS: HETEROGENEOUS COMPUTING WITH DATA PARALLEL TASKS." Parallel Processing Letters 09, no. 02 (June 1999): 275–89. http://dx.doi.org/10.1142/s0129626499000256.

Full text
Abstract:
The coordination language Opus is an object-based extension of High Performance Fortran (HPF) that supports the integration of coarse-grain task parallelism with HPF-style data parallelism. In this paper we discuss Opus in the context of multidisciplinary applications (MDAs) which execute in a heterogeneous environment. After outlining the major properties of such applications and a number of different approaches towards providing language and tool support for MDAs, we describe the salient features of Opus and its implementation, emphasizing the issues related to the coordination of data-parallel HPF programs in a heterogeneous environment.
APA, Harvard, Vancouver, ISO, and other styles
31

ZRIBI BEN OTHMANE, CHIRAZ, FERIEL BEN FRAJ, and ICHRAF LIMAM. "POS-tagging arabic texts: A novel approach based on ant colony." Natural Language Engineering 23, no. 3 (February 11, 2016): 419–39. http://dx.doi.org/10.1017/s1351324915000480.

Full text
Abstract:
The specificities of the Arabic language, mainly agglutination and vocalization, make the task of POS-tagging more difficult than for Indo-European languages. Consequently, POS-tagging texts with good accuracy remains a challenging problem for Arabic language processing applications. In this work, we consider the task of POS-tagging as an optimization problem modeled as a graph whose nodes correspond to all possible grammatical tags given by a morphological analyzer for the words in a sentence, and the goal is to find the best path (sequence of tags) in this graph. To resolve this problem, we propose a novel approach based on ant colonies. Ant colony-based algorithms are among the most efficient methods for resolving optimization problems modeled as graphs. The collaboration of ants having various knowledge creates a collective intelligence and increases efficiency. We have performed experiments on both vocalized and non-vocalized texts and tested two different tagsets containing fine- and coarse-grained composite tags. The obtained results showed good accuracy rates and hence the benefits of swarm intelligence for the POS-tagging problem.
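The graph formulation in this abstract — candidate tags per word as nodes, with the goal of finding the best-scoring path — can be sketched with a toy ant-colony search. This is an illustrative sketch only; the function names, parameters, and the tiny tag lattice are assumptions, not details from the paper:

```python
import random

def aco_best_path(lattice, score, n_ants=50, n_iters=30, evap=0.1, seed=0):
    """Pick the best tag sequence through a lattice of candidate tags.

    lattice: list of lists -- candidate tags for each word in the sentence
    score(prev_tag, tag): higher is better (e.g. a bigram log-probability)
    All names and parameters here are illustrative, not from the paper.
    """
    rng = random.Random(seed)
    pher = {}  # pheromone on (position, prev_tag, tag) transitions

    def tau(i, prev, tag):
        return pher.get((i, prev, tag), 1.0)

    best_path, best_val = None, float("-inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            path, val, prev = [], 0.0, None
            for i, tags in enumerate(lattice):
                # sample proportional to pheromone * heuristic desirability
                weights = [tau(i, prev, t) * (2.0 ** score(prev, t)) for t in tags]
                tag = rng.choices(tags, weights=weights)[0]
                val += score(prev, tag)
                path.append(tag)
                prev = tag
            if val > best_val:
                best_path, best_val = path, val
        # evaporate, then reinforce the best path found so far
        pher = {k: (1 - evap) * v for k, v in pher.items()}
        prev = None
        for i, tag in enumerate(best_path):
            pher[(i, prev, tag)] = tau(i, prev, tag) + 1.0
            prev = tag
    return best_path, best_val
```

Each ant samples a path with probability proportional to pheromone times a heuristic desirability; evaporation plus reinforcement of the global best gradually concentrates the colony on high-scoring tag sequences.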
APA, Harvard, Vancouver, ISO, and other styles
32

Felice, Giulio, Franco Orsucci, Andrea Scozzari, Omar Gelo, Gabriele Serafini, Silvia Andreassi, Nicoletta Vegni, et al. "What Differentiates Poor and Good Outcome Psychotherapy? A Statistical-Mechanics-Inspired Approach to Psychotherapy Research." Systems 7, no. 2 (April 16, 2019): 22. http://dx.doi.org/10.3390/systems7020022.

Full text
Abstract:
Statistical mechanics investigates how emergent properties of macroscopic systems (such as temperature and pressure) relate to microscopic state fluctuations. The underlying idea is that global statistical descriptors of order and variability can monitor the relevant dynamics of the whole system at hand. Here we test the possibility of extending such an approach to psychotherapy research, investigating whether the outcome of psychotherapy can be predicted on the sole basis of coarse-grained empirical macro-parameters. Four good-outcome and four poor-outcome brief psychotherapies were recorded, and their transcripts coded in terms of standard psychological categories (abstract, positive emotional, and negative emotional language pertaining to patient and therapist). Each patient-therapist interaction is considered as a discrete multivariate time series made of subsequent word-blocks of 150-word length, defined in terms of the above categories. “Static analyses” (Principal Component Analysis) highlighted a substantial difference between good-outcome and poor-outcome cases in terms of mutual correlations among those descriptors. In the former, the patient’s use of abstract language correlated with the therapist’s emotional negative language, while in the latter it co-varied with the therapist’s emotional positive language, thus showing the different judgment of the therapists regarding the same variable (abstract language) in poor and good outcome cases. On the other hand, the “dynamic analyses”, based on five coarse-grained descriptors related to variability, the degree of order, and complexity of the series, demonstrated a relevant case-specific effect, pointing to the possibility of deriving a consistent picture of any single psychotherapeutic process. Overall, the results showed that the systemic approach to psychotherapy (an old tenet of psychology) is mature enough to shift from a metaphorical to a fully quantitative status.
APA, Harvard, Vancouver, ISO, and other styles
33

Yu, Xiang, Yu Qiao, Qingpeng Li, Gang Xu, Chuanxiong Kang, Claudio Estevez, Chengzhi Deng, and Shengqian Wang. "Parallelizing Comprehensive Learning Particle Swarm Optimization by Open Computing Language on an Integrated Graphical Processing Unit." Complexity 2020 (July 31, 2020): 1–17. http://dx.doi.org/10.1155/2020/6589658.

Full text
Abstract:
Comprehensive learning particle swarm optimization (CLPSO) is a powerful metaheuristic for global optimization. This paper studies parallelizing CLPSO with the Open Computing Language (OpenCL) on the integrated Intel HD Graphics 520 (IHDG520) graphics processing unit (GPU) with a low clock rate. We implement a coarse-grained all-GPU model that maps each particle to a separate work item. Two enhancement strategies, namely, generating and transferring random numbers from the central processor to the GPU as well as reducing the number of instructions in the kernel, are proposed to shorten the model’s execution time. This paper further investigates parallelizing deterministic optimization for implicit stochastic optimization of China’s Xiaowan Reservoir. The deterministic optimization is performed on an ensemble of 62 years’ historical inflow records with monthly time steps, is solved by CLPSO, and is parallelized by a coarse-grained multipopulation model extended from the all-GPU model. The multipopulation model involves a large number of work items. Because of the capacity limit for a buffer transferring data from the central processor to the GPU and the size of the global memory region, the random number generation strategy is modified to generate a small number of random numbers that can be flexibly exploited by the large number of work items. Experiments conducted on various benchmark functions and the case study demonstrate that our proposed all-GPU and multipopulation parallelization models are appropriate, and that the multipopulation model consumes significantly less execution time than the corresponding sequential model.
APA, Harvard, Vancouver, ISO, and other styles
34

Carpentier, Sarah M., Sylvain Moreno, and Anthony R. McIntosh. "Short-term Music Training Enhances Complex, Distributed Neural Communication during Music and Linguistic Tasks." Journal of Cognitive Neuroscience 28, no. 10 (October 2016): 1603–12. http://dx.doi.org/10.1162/jocn_a_00988.

Full text
Abstract:
Musical training is frequently associated with benefits to linguistic abilities, and recent focus has been placed on possible benefits of bilingualism to lifelong executive functions; however, the neural mechanisms for such effects are unclear. The aim of this study was to gain better understanding of the whole-brain functional effects of music and second-language training that could support such previously observed cognitive transfer effects. We conducted a 28-day longitudinal study of monolingual English-speaking 4- to 6-year-old children randomly selected to receive daily music or French language training, excluding weekends. Children completed passive EEG music note and French vowel auditory oddball detection tasks before and after training. Brain signal complexity was measured on source waveforms at multiple temporal scales as an index of neural information processing and network communication load. Comparing pretraining with posttraining, musical training was associated with increased EEG complexity at coarse temporal scales during the music and French vowel tasks in widely distributed cortical regions. Conversely, very minimal decreases in complexity at fine scales and trends toward coarse-scale increases were displayed after French training during the tasks. Spectral analysis failed to distinguish between training types and found overall theta (3.5–7.5 Hz) power increases after all training forms, with spatially fewer decreases in power at higher frequencies (>10 Hz). These findings demonstrate that musical training increased diversity of brain network states to support domain-specific music skill acquisition and music-to-language transfer effects.
APA, Harvard, Vancouver, ISO, and other styles
35

Faust, Miriam, Elisheva Ben-Artzi, and Nili Vardi. "Semantic processing in native and second language: Evidence from hemispheric differences in fine and coarse semantic coding." Brain and Language 123, no. 3 (December 2012): 228–33. http://dx.doi.org/10.1016/j.bandl.2012.09.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Wu, Jie, Guanbin Li, Si Liu, and Liang Lin. "Tree-Structured Policy Based Progressive Reinforcement Learning for Temporally Language Grounding in Video." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12386–93. http://dx.doi.org/10.1609/aaai.v34i07.6924.

Full text
Abstract:
Temporal language grounding in untrimmed videos is a newly emerging task in video understanding. Most of the existing methods suffer from inferior efficiency, lack interpretability, and deviate from the human perception mechanism. Inspired by humans' coarse-to-fine decision-making paradigm, we formulate a novel Tree-Structured Policy based Progressive Reinforcement Learning (TSP-PRL) framework to sequentially regulate the temporal boundary by an iterative refinement process. The semantic concepts are explicitly represented as the branches in the policy, which contributes to efficiently decomposing complex policies into interpretable primitive actions. Progressive reinforcement learning provides correct credit assignment via two task-oriented rewards that encourage mutual promotion within the tree-structured policy. We extensively evaluate TSP-PRL on the Charades-STA and ActivityNet datasets, and experimental results show that TSP-PRL achieves competitive performance over existing state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
37

Sridharan, Mohan, Michael Gelfond, Shiqi Zhang, and Jeremy Wyatt. "REBA: A Refinement-Based Architecture for Knowledge Representation and Reasoning in Robotics." Journal of Artificial Intelligence Research 65 (June 17, 2019): 87–180. http://dx.doi.org/10.1613/jair.1.11524.

Full text
Abstract:
This article describes REBA, a knowledge representation and reasoning architecture for robots that is based on tightly-coupled transition diagrams of the domain at two different levels of granularity. An action language is extended to support non-boolean fluents and non-deterministic causal laws, and used to describe the domain's transition diagrams, with the fine-resolution transition diagram being defined as a refinement of the coarse-resolution transition diagram. The coarse-resolution system description, and a history that includes prioritized defaults, are translated into an Answer Set Prolog (ASP) program. For any given goal, inference in the ASP program provides a plan of abstract actions. To implement each such abstract action, the robot automatically zooms to the part of the fine-resolution transition diagram relevant to this action. The zoomed fine-resolution system description, and a probabilistic representation of the uncertainty in sensing and actuation, are used to construct a partially observable Markov decision process (POMDP). The policy obtained by solving the POMDP is invoked repeatedly to implement the abstract action as a sequence of concrete actions. The fine-resolution outcomes of executing these concrete actions are used to infer coarse-resolution outcomes that are added to the coarse-resolution history and used for subsequent coarse-resolution reasoning. The architecture thus combines the complementary strengths of declarative programming and probabilistic graphical models to represent and reason with non-monotonic logic-based and probabilistic descriptions of uncertainty and incomplete domain knowledge. In addition, we describe a general methodology for the design of software components of a robot based on these knowledge representation and reasoning tools, and provide a path for proving the correctness of these components. 
The architecture is evaluated in simulation and on a mobile robot finding and moving target objects to desired locations in indoor domains, to show that the architecture supports reliable and efficient reasoning with violation of defaults, noisy observations and unreliable actions, in complex domains.
APA, Harvard, Vancouver, ISO, and other styles
38

Whitehorne, Lee. "The Sweet Sounds of Syntax: Music, Language, and the Investigation of Hierarchical Processing." Arbutus Review 10, no. 1 (October 4, 2019): 36–51. http://dx.doi.org/10.18357/tar101201918926.

Full text
Abstract:
Language and music are uniquely human faculties, defined by a level of sophistication found only in our species. The ability to productively combine contrastive units of sound, namely words in language and notes in music, underlies much of the vast communicative and expressive capacities of these systems. Though the intrinsic rules of syntax in language and music differ in many regards, they both lead to the construction of complex hierarchies of interconnected, functional units. Much research has examined the overlap, distinction, and general neuropsychological nature of syntax in language and music but, in comparison to the psycholinguistic study of sentence processing, musical structure has been regarded at a coarse level of detail, especially in terms of hierarchical dependencies. The current research synthesizes recent ideas from the fields of generative music theory, linguistic syntax, and neurolinguistics to outline a more detailed, hierarchy-based methodology for investigating the brain’s processing of structures in music.
APA, Harvard, Vancouver, ISO, and other styles
39

Fauqueur, Julien, and Nozha Boujemaa. "Region-based image retrieval: fast coarse segmentation and fine color description." Journal of Visual Languages & Computing 15, no. 1 (February 2004): 69–95. http://dx.doi.org/10.1016/j.jvlc.2003.08.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Park, Hee-Seon, Hee-Heon Song, and Seong-Whan Lee. "A Self-Organizing Hierarchical Classifier for Multi-Lingual Large-Set Oriental Character Recognition." International Journal of Pattern Recognition and Artificial Intelligence 12, no. 02 (March 1998): 191–208. http://dx.doi.org/10.1142/s0218001498000130.

Full text
Abstract:
In this paper, we propose a practical scheme for multi-lingual, multi-font, and multi-size large-set Oriental character recognition using a self-organizing hierarchical neural network classifier. In order to absorb the variation of character shapes across multiple fonts and sizes, a modified nonlinear shape normalization method based on dot density was introduced, and to represent the different topological structures of multi-lingual characters effectively, a hierarchical feature extraction method was adopted. For coarse classification, a tree classifier and an SOFM/LVQ-based classifier composed of an adaptive SOFM coarse-classifier and an LVQ4 language-classifier were considered. For fine classification, a classifier based on the LVQ4 learning algorithm was developed. The experimental results revealed that the proposed scheme achieves the highest recognition rate of 98.27% on test data with 7,320 multi-lingual classes, at a speed of more than 40 characters per second on a 486DX2 66 MHz PC.
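The LVQ stage of such a classifier can be illustrated with the standard LVQ1 update rule (the paper uses an LVQ4 variant whose exact update is not reproduced here; this minimal sketch, with made-up 2D data, only shows the prototype-pulling idea):

```python
import random

def train_lvq(samples, labels, prototypes, proto_labels, lr=0.1, epochs=20, seed=0):
    """Basic LVQ1 training: for each sample, move the nearest prototype
    toward the sample if their classes match, away from it otherwise."""
    rng = random.Random(seed)
    data = list(zip(samples, labels))
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            # nearest prototype by squared Euclidean distance
            i = min(range(len(prototypes)),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(x, prototypes[j])))
            sign = 1.0 if proto_labels[i] == y else -1.0
            prototypes[i] = [p + sign * lr * (a - p)
                             for p, a in zip(prototypes[i], x)]
    return prototypes

def classify(x, prototypes, proto_labels):
    """Assign the label of the nearest prototype."""
    i = min(range(len(prototypes)),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(x, prototypes[j])))
    return proto_labels[i]
```

In a hierarchical scheme like the one described, a coarse classifier first narrows the candidate set, and prototype-based fine classification of this kind then decides among the remaining classes.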
APA, Harvard, Vancouver, ISO, and other styles
41

Konstantinidis, George, Adriane Chapman, Mark J. Weal, Ahmed Alzubaidi, Lisa M. Ballard, and Anneke M. Lucassen. "The Need for Machine-Processable Agreements in Health Data Management." Algorithms 13, no. 4 (April 7, 2020): 87. http://dx.doi.org/10.3390/a13040087.

Full text
Abstract:
Data processing agreements in health data management are laid out by organisations in monolithic “Terms and Conditions” documents written in natural legal language. These top-down policies usually protect the interest of the service providers, rather than the data owners. They are coarse-grained and do not allow for more than a few opt-in or opt-out options for individuals to express their consent on personal data processing, and these options often do not transfer to software as they were intended to. In this paper, we study the problem of health data sharing and we advocate the need for individuals to describe their personal contract of data usage in a formal, machine-processable language. We develop an application for sharing patient genomic information and test results, and use interactions with patients and clinicians in order to identify the particular peculiarities a privacy/policy/consent language should offer in this complicated domain. We present how Semantic Web technologies can have a central role in this approach by providing the formal tools and features required in such a language. We present our ongoing approach to construct an ontology-based framework and a policy language that allows patients and clinicians to express fine-grained consent, preferences or suggestions on sharing medical information. Our language offers unique features such as multi-party ownership of data or data sharing dependencies. We evaluate the landscape of policy languages from different areas, and show how they are lacking major requirements needed in health data management. In addition to enabling patients, our approach helps organisations increase technological capabilities, abide by legal requirements, and save resources.
APA, Harvard, Vancouver, ISO, and other styles
42

Zhu, Ping Hua, Jin Cai Feng, and Xin Jie Wang. "Fuzzy Synthesis Method for Evaluating Quality Class of Coarse Recycled Concrete Aggregate." Advanced Materials Research 250-253 (May 2011): 783–87. http://dx.doi.org/10.4028/www.scientific.net/amr.250-253.783.

Full text
Abstract:
The quality of coarse recycled concrete aggregate (CRA) must be evaluated before it is used as a feasible alternative to natural coarse aggregate (NCA). Because the factors affecting the quality of CRA are numerous and complicated, and associated with its regional characteristics, research was conducted on evaluating the quality of CRA from China. First, CRA was partitioned into three fuzzy quality classes corresponding to high, middle, and poor, and the applicable environmental action grades and projects for each quality class were suggested. Second, a fuzzy synthesis method for evaluating quality classes was proposed. The present method uses six indices, i.e., attached mortar content, bulk specific density, water absorption, Los Angeles abrasion loss, chloride content, and sulfate content. The weights for the six indices were obtained based on a credibility matrix and frequency statistics, and the memberships based on an optimum-interval-style membership function. Using a fuzzy weighted average operator, the judgment matrix was calculated, and the synthetic membership score was taken as the quality class judgment standard. Last, a large number of engineering examples were computed using a program written in the MATLAB language. The computing results show that the present method is objective and reliable.
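The fuzzy weighted-average step can be sketched as follows; the six index weights and membership degrees below are invented for illustration and are not the paper's calibrated values:

```python
def fuzzy_quality_class(memberships, weights):
    """Fuzzy synthesis by weighted average.

    memberships: one (high, middle, poor) membership tuple per index
    weights: importance weight per index, summing to 1
    Returns the synthetic membership vector and the judged class.
    """
    classes = ("high", "middle", "poor")
    synth = [sum(w * m[c] for w, m in zip(weights, memberships))
             for c in range(len(classes))]
    return synth, classes[synth.index(max(synth))]

# Hypothetical memberships for the six indices: attached mortar content,
# bulk specific density, water absorption, Los Angeles abrasion loss,
# chloride content, sulfate content.
memberships = [(0.7, 0.3, 0.0), (0.6, 0.4, 0.0), (0.2, 0.6, 0.2),
               (0.5, 0.5, 0.0), (0.9, 0.1, 0.0), (0.8, 0.2, 0.0)]
weights = [0.25, 0.20, 0.20, 0.15, 0.10, 0.10]
synth, cls = fuzzy_quality_class(memberships, weights)
```

The synthetic membership score aggregates each index's class memberships according to its weight, and the class with the largest score is the judged quality class.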
APA, Harvard, Vancouver, ISO, and other styles
43

Fu, Yongjian, Zongchun Li, Wenqi Wang, Hua He, Feng Xiong, and Yong Deng. "Robust Coarse-to-Fine Registration Scheme for Mobile Laser Scanner Point Clouds Using Multiscale Eigenvalue Statistic-Based Descriptor." Sensors 21, no. 7 (April 1, 2021): 2431. http://dx.doi.org/10.3390/s21072431.

Full text
Abstract:
To overcome the drawbacks of pairwise registration for mobile laser scanner (MLS) point clouds, such as difficulty in searching for corresponding points and inaccurate registration matrices, a robust coarse-to-fine registration method is proposed to align different frames of MLS point clouds into a common coordinate system. The method identifies the correct corresponding point pairs between the source and target point clouds and then calculates the transformation matrix. First, the performance of a multiscale eigenvalue statistic-based descriptor with different combinations of parameters is evaluated to identify the optimal combination. Second, based on the geometric distribution of points in the neighborhood of each keypoint, a weighted covariance matrix is constructed, from which the multiscale eigenvalues are calculated as the feature description. Third, the corresponding points between the source and target point clouds are estimated in the feature space, and incorrect ones are eliminated via a geometric consistency constraint. Finally, the estimated corresponding point pairs are used for coarse registration, and the coarse registration result is taken as the initial value for the iterative closest point algorithm, which yields the final fine registration result. The results of registration experiments with the Autonomous Systems Lab (ASL) datasets show that the proposed method can accurately align MLS point clouds in different frames and outperforms the comparative methods.
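The core of the descriptor — eigenvalues of a neighborhood covariance matrix collected at several radii — can be sketched in 2D (the paper works in 3D with a distance-weighted covariance; the function name and radii here are illustrative):

```python
import math

def eigen_descriptor(points, center, radii):
    """Concatenate the covariance eigenvalues of the neighborhood of
    `center` at several radii (the multiscale idea, shown unweighted
    and in 2D for brevity)."""
    desc = []
    for r in radii:
        nb = [p for p in points if math.dist(p, center) <= r]
        mx = sum(p[0] for p in nb) / len(nb)
        my = sum(p[1] for p in nb) / len(nb)
        sxx = sum((p[0] - mx) ** 2 for p in nb) / len(nb)
        syy = sum((p[1] - my) ** 2 for p in nb) / len(nb)
        sxy = sum((p[0] - mx) * (p[1] - my) for p in nb) / len(nb)
        # closed-form eigenvalues of the 2x2 covariance matrix
        t, d = sxx + syy, sxx * syy - sxy * sxy
        root = math.sqrt(max(t * t / 4 - d, 0.0))
        desc += [t / 2 + root, t / 2 - root]  # larger, smaller eigenvalue
    return desc
```

For points lying on a line, the smaller eigenvalue stays near zero at every scale; geometric signatures of this kind are what eigenvalue-based descriptors exploit when matching keypoints between frames.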
APA, Harvard, Vancouver, ISO, and other styles
44

Cabral, Laura, Bobby Stojanoski, and Rhodri Cusack. "Rapid and coarse face detection: With a lack of evidence for a nasal-temporal asymmetry." Attention, Perception, & Psychophysics 82, no. 4 (January 6, 2020): 1883–95. http://dx.doi.org/10.3758/s13414-019-01877-3.

Full text
Abstract:
Humans have structures dedicated to the processing of faces, which include cortical components (e.g., areas in occipital and temporal lobes) and subcortical components (e.g., superior colliculus and amygdala). Although faces are processed more quickly than stimuli from other categories, there is a lack of consensus regarding whether subcortical structures are responsible for rapid face processing. In order to probe this, we exploited the asymmetry in the strength of projections to subcortical structures between the nasal and temporal hemiretina. Participants detected faces from unrecognizable control stimuli and performed the same task for houses. In Experiments 1 and 3, at the fastest reaction times, participants detected faces more accurately than houses. However, there was no benefit of presenting to the subcortical pathway. In Experiment 2, we probed the coarseness of the rapid pathway, making the foil stimuli more similar to faces and houses. This eliminated the rapid detection advantage, suggesting that rapid face processing is limited to coarse representations. In Experiment 4, we sought to determine whether the natural difference between the spatial frequencies of faces and houses was driving the effects seen in Experiments 1 and 3. We spatially filtered the faces and houses so that they were matched. Better rapid detection was again found for faces relative to houses, but we found no benefit of preferentially presenting to the subcortical pathway. Taken together, the results of our experiments suggest a coarse rapid detection mechanism, which was not dependent on spatial frequency, with no advantage for presenting preferentially to subcortical structures.
APA, Harvard, Vancouver, ISO, and other styles
45

Pan, Jennifer, and Margaret E. Roberts. "Censorship’s Effect on Incidental Exposure to Information: Evidence From Wikipedia." SAGE Open 10, no. 1 (January 2020): 215824401989406. http://dx.doi.org/10.1177/2158244019894068.

Full text
Abstract:
The fast-growing body of research on internet censorship has examined the effects of censoring selective pieces of political information and the unintended consequences of censorship of entertainment. However, we know very little about the broader consequences of coarse censorship or censorship that affects a large array of information such as an entire website or search engine. In this study, we use China’s complete block of Chinese language Wikipedia (zh.wikipedia.org) on May 19, 2015, to disaggregate the effects of coarse censorship on proactive consumption of information—information users seek out—and on incidental consumption of information—information users are not actively seeking but consume when they happen to come across it. We quantify the effects of censorship of Wikipedia not only on proactive information consumption but also on opportunities for exploration and incidental consumption of information. We find that users from mainland China were much more likely to consume information on Wikipedia about politics and history incidentally rather than proactively, suggesting that the effects of censorship on incidental information access may be politically significant.
APA, Harvard, Vancouver, ISO, and other styles
46

Mech, Emily N., Padmapriya Kandhadai, and Kara D. Federmeier. "The last course of coarse coding: Hemispheric similarities in associative and categorical semantic processing." Brain and Language 229 (June 2022): 105123. http://dx.doi.org/10.1016/j.bandl.2022.105123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Alfaro de Carvalho, Carolina. "Quality Standards or Censorship? Language Control Policies in Cable TV Subtitles in Brazil." Broadcasting with Intent 57, no. 2 (February 4, 2013): 464–77. http://dx.doi.org/10.7202/1013956ar.

Full text
Abstract:
This study seeks to understand the origins and reasons behind the grammar and style guidelines developed by Brazilian broadcasters and video producers and applied to the translated subtitles of cable television shows. The language of the translation is often controlled, and coarse or scatological vocabulary tends to be curbed or avoided, among other restrictions. Brazil was under a military regime from 1964 to 1985, when the media was subjected to strict censorship. Could it be that this heritage still casts a shadow over current policies applied to audiovisual translation (AVT)? To approach this issue, this study outlines the history of censorship applied to content and language during the Brazilian military regime, describes the evolution of the AVT industry in the context of cable television in Brazil, and finally conveys first-hand insights and experiences on language control by quality control professionals. The ultimate goal is to bring these rulemaking processes to light, in an attempt to help improve the dialogue between end clients and service providers, for the benefit of the viewers.
APA, Harvard, Vancouver, ISO, and other styles
48

Perovich, Laura J., Meryl Alper, and Corey Cleveland. ""Self-Quaranteens" Process COVID-19: Understanding Information Visualization Language in Memes." Proceedings of the ACM on Human-Computer Interaction 6, CSCW1 (March 30, 2022): 1–20. http://dx.doi.org/10.1145/3512894.

Full text
Abstract:
The COVID-19 pandemic has led to a surge of information visualizations that aim to increase our scientific understanding and communicate about the ongoing health crisis with the general public. In this time, there has also been significant use of data visualization language in artefacts from online communities that provide commentary on the pandemic and create meaning through participatory digital culture. Using a qualitative approach, this paper examines over 300 memes collected from a public social media group targeted to young adults in the United States that uses the language of data visualization to discuss topics related to COVID-19. We outline four main ways that data visualization language is used in these memes (as a coarse indicator, as a visual analogy, as an opportunity for augmentation with emotion or interpretation, and as a visual pun), as well as two ways that memes leverage traditional and emerging approaches in the information visualization community. We describe the context in which these memes are socially created and interpreted in light of the political nature of online spaces and connect this work to ongoing research on participation, emotion, and embodiment in information visualization. These results aim to start a conversation about the use of data visualization language in digital culture and more casual networked environments beyond official channels.
APA, Harvard, Vancouver, ISO, and other styles
49

Mitchell, Don C., Fernando Cuetos, Martin M. B. Corley, and Marc Brysbaert. "Exposure-based models of human parsing: Evidence for the use of coarse-grained (nonlexical) statistical records." Journal of Psycholinguistic Research 24, no. 6 (November 1995): 469–88. http://dx.doi.org/10.1007/bf02143162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Wilk, Przemysław. "Conceptual silencing as a rhetorical tool. A cognitive lexical semantics study of the lexical item Europe." "Res Rhetorica" 8, no. 1 (March 27, 2021): 124–36. http://dx.doi.org/10.29107/rr2021.1.7.

Full text
Abstract:
Taking a cognitive lexical semantics perspective, the article introduces the concept of conceptual silencing as a rhetorical tool. Understood as a process of conceptual dissolution of meaning to offer a more coarse-grained sense of an expression, conceptual silencing is demonstrated to have a potential rhetorical value in that it allows for more opaque reproduction of ideology. From a cognitive linguistic standpoint, the process of conceptual silencing hinges upon the polysemous nature of a lexical item and boils down to triggering a given sense of that item in a given context. To illustrate the workings of conceptual silencing, the article reports on a case study of the lexical item Europe in the Guardian press discourse. It is demonstrated that the ultimate effect of conceptual silencing is silencing the ‘European Union’ senses under the guise of the lexical item Europe.
APA, Harvard, Vancouver, ISO, and other styles