Academic literature on the topic 'Automated language sample analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Automated language sample analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Automated language sample analysis"

1

Hallett, Terry, and James Steiger. "Automated Analysis of Spoken Language." International Journal for Innovation Education and Research 3, no. 5 (May 31, 2015): 6–13. http://dx.doi.org/10.31686/ijier.vol3.iss5.353.

Full text
Abstract:
We studied the number of words spoken by adult males versus females throughout a six-hour day and during three structured monologues. The six-hour samples were captured and analyzed using an automated speech monitoring and assessment system. The three monologues required different language tasks, and analyses of syntactic and semantic complexity were performed for each. There were no significant gender differences except during a reminiscent monologue when males spoke significantly more words and sentences than females. These results conflict with past research and popular (mis)conceptions.
APA, Harvard, Vancouver, ISO, and other styles
2

Heilmann, John, Alexander Tucci, Elena Plante, and Jon F. Miller. "Assessing Functional Language in School-Aged Children Using Language Sample Analysis." Perspectives of the ASHA Special Interest Groups 5, no. 3 (June 30, 2020): 622–36. http://dx.doi.org/10.1044/2020_persp-19-00079.

Full text
Abstract:
Purpose The goal of this clinical focus article is to illustrate how speech-language pathologists can document the functional language of school-age children using language sample analysis (LSA). Advances in computer hardware and software are detailed, making LSA more accessible for clinical use. Method This clinical focus article illustrates how documenting school-age students' communicative functioning is central to comprehensive assessment and how using LSA can meet multiple needs within this assessment. LSA can document students' meaningful participation in their daily life through assessment of their language used during everyday tasks. The many advances in computerized LSA are detailed with a primary focus on the Systematic Analysis of Language Transcripts (Miller & Iglesias, 2019). The LSA process is reviewed, detailing the steps necessary for computers to calculate word, morpheme, utterance, and discourse features of functional language. Conclusion These advances in computer technology and software development have made LSA clinically feasible through standardized elicitation and transcription methods that improve accuracy and repeatability. In addition to improved accuracy, validity, and reliability of LSA, databases of typical speakers to document status and automated report writing more than justify the time required. Software now provides many innovations that make LSA simpler and more accessible for clinical use. Supplemental Material https://doi.org/10.23641/asha.12456719
APA, Harvard, Vancouver, ISO, and other styles
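The word-, morpheme-, and utterance-level measures described in the abstract above can be sketched in a few lines. This toy Python function is purely illustrative and is not the SALT implementation: morpheme segmentation, maze exclusion, and discourse coding are all omitted, and utterances are taken as plain word strings.

```python
# Toy sketch of core language-sample measures: total words, number of
# different words (NDW), mean length of utterance (MLU) in words, and
# type-token ratio (TTR). Real LSA tools work on coded transcripts.

def lsa_measures(utterances):
    """Compute basic LSA summary measures from a list of utterance strings."""
    tokens = [w.lower() for u in utterances for w in u.split()]
    n_utts = len(utterances)
    return {
        "total_words": len(tokens),
        "different_words": len(set(tokens)),                        # NDW
        "mlu_words": len(tokens) / n_utts if n_utts else 0.0,       # MLU (words)
        "ttr": len(set(tokens)) / len(tokens) if tokens else 0.0,   # TTR
    }

sample = ["the dog ran", "the dog ran fast", "he ran"]
m = lsa_measures(sample)
print(m["total_words"], m["different_words"], m["mlu_words"])  # 9 5 3.0
```

A clinical transcript would of course be far longer, and MLU is conventionally computed in morphemes rather than words; the structure of the computation is the same.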
3

Fromm, Davida, Saketh Katta, Mason Paccione, Sophia Hecht, Joel Greenhouse, Brian MacWhinney, and Tatiana T. Schnur. "A Comparison of Manual Versus Automated Quantitative Production Analysis of Connected Speech." Journal of Speech, Language, and Hearing Research 64, no. 4 (April 14, 2021): 1271–82. http://dx.doi.org/10.1044/2020_jslhr-20-00561.

Full text
Abstract:
Purpose Analysis of connected speech in the field of adult neurogenic communication disorders is essential for research and clinical purposes, yet time and expertise are often cited as limiting factors. The purpose of this project was to create and evaluate an automated program to score and compute the measures from the Quantitative Production Analysis (QPA), an objective and systematic approach for measuring morphological and structural features of connected speech. Method The QPA was used to analyze transcripts of Cinderella stories from 109 individuals with acute–subacute left hemisphere stroke. Regression slopes and residuals were used to compare the results of manual scoring and automated scoring using the newly developed C-QPA command in CLAN, a set of programs for automatic analysis of language samples. Results The C-QPA command produced two spreadsheet outputs: an analysis spreadsheet with scores for each utterance in the language sample, and a summary spreadsheet with 18 score totals from the analysis spreadsheet and an additional 15 measures derived from those totals. Linear regression analysis revealed that 32 of the 33 measures had good agreement; auxiliary complexity index was the one score that did not have good agreement. Conclusions The C-QPA command can be used to perform automated analyses of language transcripts, saving time and training and providing reliable and valid quantification of connected speech. Transcribing in CHAT, the CLAN editor, also streamlined the process of transcript preparation for QPA and allowed for precise linking of media files to language transcripts for temporal analyses.
APA, Harvard, Vancouver, ISO, and other styles
4

Hsu, Chien-Ju, and Cynthia K. Thompson. "Manual Versus Automated Narrative Analysis of Agrammatic Production Patterns: The Northwestern Narrative Language Analysis and Computerized Language Analysis." Journal of Speech, Language, and Hearing Research 61, no. 2 (February 15, 2018): 373–85. http://dx.doi.org/10.1044/2017_jslhr-l-17-0185.

Full text
Abstract:
Purpose The purpose of this study is to compare the outcomes of the manually coded Northwestern Narrative Language Analysis (NNLA) system, which was developed for characterizing agrammatic production patterns, and the automated Computerized Language Analysis (CLAN) system, which has recently been adopted to analyze speech samples of individuals with aphasia (a) for reliability purposes to ascertain whether they yield similar results and (b) to evaluate CLAN for its ability to automatically identify language variables important for detailing agrammatic production patterns. Method The same set of Cinderella narrative samples from 8 participants with a clinical diagnosis of agrammatic aphasia and 10 cognitively healthy control participants were transcribed and coded using NNLA and CLAN. Both coding systems were utilized to quantify and characterize speech production patterns across several microsyntactic levels: utterance, sentence, lexical, morphological, and verb argument structure levels. Agreement between the 2 coding systems was computed for variables coded by both. Results Comparison of the 2 systems revealed high agreement for most, but not all, lexical-level and morphological-level variables. However, NNLA elucidated utterance-level, sentence-level, and verb argument structure–level impairments, important for assessment and treatment of agrammatism, which are not automatically coded by CLAN. Conclusions CLAN automatically and reliably codes most lexical and morphological variables but does not automatically quantify variables important for detailing production deficits in agrammatic aphasia, although conventions for manually coding some of these variables in Codes for the Human Analysis of Transcripts are possible. Suggestions for combining automated programs and manual coding to capture these variables or revising CLAN to automate coding of these variables are discussed.
APA, Harvard, Vancouver, ISO, and other styles
5

Rysová, Kateřina, Magdaléna Rysová, Michal Novák, Jiří Mírovský, and Eva Hajičová. "EVALD – a Pioneer Application for Automated Essay Scoring in Czech." Prague Bulletin of Mathematical Linguistics 113, no. 1 (October 1, 2019): 9–30. http://dx.doi.org/10.2478/pralin-2019-0004.

Full text
Abstract:
Abstract In the paper, we present EVALD applications (Evaluator of Discourse) for automated essay scoring. EVALD is the first tool of this type for Czech. It evaluates texts written by both native and non-native speakers of Czech. We first describe the history and present state of automated essay scoring, illustrated by examples of systems for other languages, mainly English. Then we focus on the methodology of creating the EVALD applications and describe the datasets used for testing as well as the supervised training that EVALD builds on. Furthermore, we analyze in detail a sample of newly acquired language data: texts written by non-native speakers reaching the threshold level of Czech language acquisition required, e.g., for permanent residence in the Czech Republic, and we focus on linguistic differences between the available text levels. We present the feature set used by EVALD and, based on the analysis, extend it with new spelling features. Finally, we evaluate the overall performance of various variants of EVALD and provide an analysis of the collected results.
APA, Harvard, Vancouver, ISO, and other styles
6

Egnoto, Michael J., and Darrin J. Griffin. "Analyzing Language in Suicide Notes and Legacy Tokens." Crisis 37, no. 2 (March 2016): 140–47. http://dx.doi.org/10.1027/0227-5910/a000363.

Full text
Abstract:
Abstract. Background: Identifying precursors that will aid in the discovery of individuals who may harm themselves or others has long been a focus of scholarly research. Aim: This work set out to determine if it is possible to use the legacy tokens of active shooters and notes left from individuals who completed suicide to uncover signals that foreshadow their behavior. Method: A total of 25 suicide notes and 21 legacy tokens were compared with a sample of over 20,000 student writings for a preliminary computer-assisted text analysis to determine what differences can be coded with existing computer software to better identify students who may commit self-harm or harm to others. Results: The results support that text analysis techniques with the Linguistic Inquiry and Word Count (LIWC) tool are effective for identifying suicidal or homicidal writings as distinct from each other and from a variety of student writings in an automated fashion. Conclusion: Findings indicate support for automated identification of writings that were associated with harm to self, harm to others, and various other student writing products. This work begins to uncover the viability of larger-scale, low-cost methods of automatic detection for individuals suffering from harmful ideation.
APA, Harvard, Vancouver, ISO, and other styles
7

Jiao, Yishan, Amy LaCross, Visar Berisha, and Julie Liss. "Objective Intelligibility Assessment by Automated Segmental and Suprasegmental Listening Error Analysis." Journal of Speech, Language, and Hearing Research 62, no. 9 (September 20, 2019): 3359–66. http://dx.doi.org/10.1044/2019_jslhr-s-19-0119.

Full text
Abstract:
Purpose Subjective speech intelligibility assessment is often preferred over more objective approaches that rely on transcript scoring. This is, in part, because of the intensive manual labor associated with extracting objective metrics from transcribed speech. In this study, we propose an automated approach for scoring transcripts that provides a holistic and objective representation of intelligibility degradation stemming from both segmental and suprasegmental contributions, and that corresponds with human perception. Method Phrases produced by 73 speakers with dysarthria were orthographically transcribed by 819 listeners via Mechanical Turk, resulting in 63,840 phrase transcriptions. A protocol was developed to filter the transcripts, which were then automatically analyzed using novel algorithms developed for measuring phoneme and lexical segmentation errors. The results were compared with manual labels on a randomly selected sample set of 40 transcribed phrases to assess validity. A linear regression analysis was conducted to examine how well the automated metrics predict a perceptual rating of severity and word accuracy. Results On the sample set, the automated metrics achieved 0.90 correlation coefficients with manual labels on measuring phoneme errors, and 100% accuracy on identifying and coding lexical segmentation errors. Linear regression models found that the estimated metrics could predict a significant portion of the variance in perceptual severity and word accuracy. Conclusions The results show the promising development of an objective speech intelligibility assessment that identifies intelligibility degradation on multiple levels of analysis.
APA, Harvard, Vancouver, ISO, and other styles
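The segmental error scoring described above can be illustrated with a phoneme-level edit distance between a target phrase and a listener transcript. This is only a sketch of the core idea, not the published algorithms, which also handle lexical segmentation errors and suprasegmental cues; phonemes are given here as ARPABET-style symbol lists.

```python
# Hypothetical segmental error metric: Levenshtein distance over phoneme
# sequences, normalized by the length of the target sequence.

def phoneme_error_rate(target, response):
    """Phoneme-level edit distance (insert/delete/substitute), / len(target)."""
    m, n = len(target), len(response)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if target[i - 1] == response[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n] / m if m else 0.0

# "cat" /k ae t/ heard as "hat" /hh ae t/: one substitution in three phonemes
print(phoneme_error_rate(["k", "ae", "t"], ["hh", "ae", "t"]))
```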
8

Vokes, Martha S., and Anne E. Carpenter. "CellProfiler: Open-Source Software to Automatically Quantify Images." Microscopy Today 16, no. 5 (September 2008): 38–39. http://dx.doi.org/10.1017/s1551929500061757.

Full text
Abstract:
Researchers often examine samples by eye on the microscope, qualitatively scoring each sample for a particular feature of interest. This approach, while suitable for many experiments, sacrifices quantitative results and a permanent record of the experiment. By contrast, if digital images are collected of each sample, software can be used to quantify features of interest. For small experiments, quantitative analysis is often done manually using interactive programs like Adobe Photoshop©. For the large number of images that can be easily collected with automated microscopes, this approach is tedious and time-consuming. NIH Image/ImageJ (http://rsb.info.nih.gov/ij) allows users comfortable writing in a macro language to automate quantitative image analysis. We have developed CellProfiler, a free, open-source software package, designed to enable scientists without prior programming experience to quantify relevant features of samples in large numbers of images automatically, in a modular system suitable for processing hundreds of thousands of images.
APA, Harvard, Vancouver, ISO, and other styles
9

Fromm, Davida, Joel Greenhouse, Kaiyue Hou, G. Austin Russell, Xizhen Cai, Margaret Forbes, Audrey Holland, and Brian MacWhinney. "Automated Proposition Density Analysis for Discourse in Aphasia." Journal of Speech, Language, and Hearing Research 59, no. 5 (October 2016): 1123–32. http://dx.doi.org/10.1044/2016_jslhr-l-15-0401.

Full text
Abstract:
Purpose This study evaluates how proposition density can differentiate between persons with aphasia (PWA) and individuals in a control group, as well as among subtypes of aphasia, on the basis of procedural discourse and personal narratives collected from large samples of participants. Method Participants were 195 PWA and 168 individuals in a control group from the AphasiaBank database. PWA represented 6 aphasia types on the basis of the Western Aphasia Battery–Revised (Kertesz, 2006). Narrative samples were stroke stories for PWA and illness or injury stories for individuals in the control group. Procedural samples were from the peanut-butter-and-jelly-sandwich task. Language samples were transcribed using Codes for the Human Analysis of Transcripts (MacWhinney, 2000) and analyzed using Computerized Language Analysis (MacWhinney, 2000), which automatically computes proposition density (PD) using rules developed for automatic PD measurement by the Computerized Propositional Idea Density Rater program (Brown, Snodgrass, & Covington, 2007; Covington, 2007). Results Participants in the control group scored significantly higher than PWA on both tasks. PD scores were significantly different among the aphasia types for both tasks. Pairwise comparisons for both discourse tasks revealed that PD scores for the Broca's group were significantly lower than those for all groups except Transcortical Motor. No significant quadratic or linear association between PD and severity was found. Conclusion Proposition density is differentially sensitive to aphasia type and most clearly differentiates individuals with Broca's aphasia from the other groups.
APA, Harvard, Vancouver, ISO, and other styles
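The proposition density measure discussed above can be approximated from part-of-speech tags: propositions are roughly the verbs, adjectives, adverbs, prepositions, and conjunctions, divided by total words. The sketch below is illustrative only and is not the CPIDR or CLAN implementation; the tiny lexicon stands in for a real POS tagger and is a made-up example.

```python
# Rough sketch of proposition density (PD): propositions approximated by
# POS category counts per word. Unknown words default to NOUN here, which a
# real tagger would of course resolve properly.

PROPOSITION_TAGS = {"VERB", "ADJ", "ADV", "ADP", "CONJ"}

TOY_LEXICON = {
    "the": "DET", "a": "DET", "girl": "NOUN", "ball": "NOUN",
    "red": "ADJ", "quickly": "ADV", "threw": "VERB", "and": "CONJ",
    "over": "ADP", "fence": "NOUN", "she": "PRON", "ran": "VERB",
}

def proposition_density(words):
    """Propositions per word, using the toy lexicon."""
    tags = [TOY_LEXICON.get(w.lower(), "NOUN") for w in words]
    props = sum(1 for t in tags if t in PROPOSITION_TAGS)
    return props / len(words) if words else 0.0

words = "the girl quickly threw the red ball over the fence".split()
print(proposition_density(words))  # 4 propositions / 10 words = 0.4
```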
10

Fromm, Davida, Brian MacWhinney, and Cynthia K. Thompson. "Automation of the Northwestern Narrative Language Analysis System." Journal of Speech, Language, and Hearing Research 63, no. 6 (June 22, 2020): 1835–44. http://dx.doi.org/10.1044/2020_jslhr-19-00267.

Full text
Abstract:
Purpose Analysis of spontaneous speech samples is important for determining patterns of language production in people with aphasia. To accomplish this, researchers and clinicians can use either hand coding or computer-automated methods. In a comparison of the two methods using the hand-coding NNLA (Northwestern Narrative Language Analysis) and automatic transcript analysis by CLAN (Computerized Language Analysis), Hsu and Thompson (2018) found good agreement for 32 of 51 linguistic variables. The comparison showed little difference between the two methods for coding most general (i.e., utterance length, rate of speech production), lexical, and morphological measures. However, the NNLA system coded grammatical measures (i.e., sentence and verb argument structure) that CLAN did not. Because of the importance of quantifying these aspects of language, the current study sought to implement a new, single, composite CLAN command for the full set of 51 NNLA codes and to evaluate its reliability for coding aphasic language samples. Method Eighteen manually coded NNLA transcripts from eight people with aphasia and 10 controls were converted into CHAT (Codes for the Human Analysis of Talk) files for compatibility with CLAN commands. Rules from the NNLA manual were translated into programmed rules for CLAN computation of lexical, morphological, utterance-level, sentence-level, and verb argument structure measures. Results The new C-NNLA (CLAN command to compute the full set of NNLA measures) program automatically computes 50 of the 51 NNLA measures and generates the results in a summary spreadsheet. The only measure it does not compute is the number of verb particles. Statistical tests revealed no significant difference between C-NNLA results and those generated by manual coding for 44 of the 50 measures. C-NNLA results were not comparable to manual coding for the six verb argument measures. 
Conclusion Clinicians and researchers can use the automatic C-NNLA to analyze important variables required for quantification of grammatical deficits in aphasia in a way that is fast, replicable, and accessible without extensive linguistic knowledge and training.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Automated language sample analysis"

1

Manning, Britney Richey. "Automated Identification of Noun Clauses in Clinical Language Samples." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/2197.

Full text
Abstract:
The identification of complex grammatical structures including noun clauses is of clinical importance because differences in the use of these structures have been found between individuals with and without language impairment. In recent years, computer software has been used to assist in analyzing clinical language samples. However, this software has been unable to accurately identify complex syntactic structures such as noun clauses. The present study investigated the accuracy of new software, called Cx, in identifying finite wh- and that-noun clauses. Two sets of language samples were used. One set included 10 children with language impairment, 10 age-matched peers, and 10 language-matched peers. The second set included 40 adults with mental retardation. Levels of agreement between computerized and manual analysis were similar for both sets of language samples; Kappa levels were high for wh-noun clauses and very low for that-noun clauses.
APA, Harvard, Vancouver, ISO, and other styles
2

Clark, Jessica Celeste. "Automated Identification of Adverbial Clauses in Child Language Samples." Diss., 2009. http://contentdm.lib.byu.edu/ETD/image/etd2803.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Winiecke, Rachel Christine. "Precoding and the Accuracy of Automated Analysis of Child Language Samples." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5867.

Full text
Abstract:
Language sample analysis is accepted as the gold standard in child language assessment. Unfortunately, it is often viewed as too time-consuming for the practicing clinician. Over the last 15 years, a great deal of research has been invested in the automated analysis of child language samples to make the process more time-efficient. One step in the analysis process may be precoding the sample, as is used in the Systematic Analysis of Language Transcripts (SALT) software. However, a claim has been made (MacWhinney, 2008) that such precoding in fact leads to lower accuracy because of manual coding errors. No data on this issue have been published. The current research measured the accuracy of language samples analyzed with and without SALT precoding. This study also compared the accuracy of current software to an older version called GramCats (Channell & Johnson, 1999). The results presented support the use of precoding schemes such as SALT and suggest that the accuracy of automated analysis has improved over time.
APA, Harvard, Vancouver, ISO, and other styles
4

Minch, Stacy Lynn. "Validity of Seven Syntactic Analyses Performed by the Computerized Profiling Software." Diss., 2009. http://contentdm.lib.byu.edu/ETD/image/etd2956.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chamberlain, Laurie Lynne. "Mean Length of Utterance and Developmental Sentence Scoring in the Analysis of Children's Language Samples." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/5966.

Full text
Abstract:
Developmental Sentence Scoring (DSS) is a standardized language sample analysis procedure that uses complete sentences to evaluate and score a child’s use of standard American-English grammatical rules. Automated DSS software can potentially increase efficiency and decrease the time needed for DSS analysis. This study examines the accuracy of one automated DSS software program, DSSA Version 2.0, compared to manual DSS scoring on previously collected language samples from 30 children between the ages of 2;5 and 7;11 (years;months). The overall accuracy of DSSA 2.0 was 86%. Additionally, the present study sought to determine the relationship between DSS, DSSA Version 2.0, the mean length of utterance (MLU), and age. MLU is a measure of linguistic ability in children, and is a widely used indicator of language impairment. This study found that MLU and DSS are both strongly correlated with age and these correlations are statistically significant, r = .605, p < .001 and r = .723, p < .001, respectively. In addition, MLU and DSSA were also strongly correlated with age and these correlations were statistically significant, r = .605, p < .001 and r = .669, p < .001, respectively. The correlation between MLU and DSS was high and statistically significant r = .873, p < .001, indicating that the correlation between MLU and DSS is not simply an artifact of both measures being correlated with age. Furthermore, the correlation between MLU and DSSA was high, r = .794, suggesting that the correlation between MLU and DSSA is not simply an artifact of both variables being correlated with age. Lastly, the relationship between DSS and age while controlling for MLU was moderate, but still statistically significant r = .501, p = .006. Therefore, DSS appears to add information beyond MLU.
APA, Harvard, Vancouver, ISO, and other styles
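The MLU and correlation statistics reported in the abstract above are straightforward to compute. This sketch is illustrative, not the DSSA implementation: MLU is computed in words rather than morphemes, and the age/MLU numbers are invented for the example, not data from the study.

```python
# Compute MLU per sample and Pearson's r between age and MLU, the statistic
# the study reports. Pure-stdlib implementation for illustration.

import math

def mlu_words(utterances):
    """Mean length of utterance in words for a list of utterance strings."""
    total = sum(len(u.split()) for u in utterances)
    return total / len(utterances) if utterances else 0.0

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: age in months vs. MLU for five children
ages = [30, 42, 54, 66, 78]
mlus = [2.1, 2.9, 3.4, 4.2, 4.6]
print(round(pearson_r(ages, mlus), 3))
```

A strong positive r here would mirror the study's finding that MLU rises with age; establishing that DSS adds information beyond MLU additionally requires a partial correlation controlling for MLU.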
6

Michaelis, Hali Anne. "Automated Identification of Relative Clauses in Child Language Samples." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/1997.

Full text
Abstract:
Previously existing computer analysis programs have been unable to correctly identify many complex syntactic structures thus requiring further manual analysis by the clinician. Complex structures, including the relative clause, are of interest in child language samples due to the difference in development between children with and without language impairment. The purpose of this study was to assess the comparability of results from a new automated program, Cx, to results from manual identification of relative clauses. On language samples from 10 children with language impairment (LI), 10 language matched peers (LA), and 10 chronologically age matched peers (CA), a computerized analysis based on probabilities of sequences of grammatical markers agreed with a manual analysis with a Kappa of 0.88.
APA, Harvard, Vancouver, ISO, and other styles
7

Ehlert, Erika E. "Automated Identification of Relative Clauses in Child Language Samples." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3615.

Full text
Abstract:
Relative clauses are grammatical constructions that are of relevance in both typical and impaired language development. Thus, the accurate identification of these structures in child language samples is clinically important. In recent years, computer software has been used to assist in the automated analysis of clinical language samples. However, this software has had only limited success when attempting to identify relative clauses. The present study explores the development and clinical importance of relative clauses and investigates the accuracy of the software used for automated identification of these structures. Two separate collections of language samples were used. The first collection included 10 children with language impairment, ranging in age from 7;6 to 11;1 (years;months), 10 age-matched peers, and 10 language-matched peers. A second collection contained 30 children considered to have typical speech and language skills who ranged in age from 2;6 to 7;11. Language samples were manually coded for the presence of relative clauses (including those containing a relative pronoun, those without a relative pronoun, and reduced relative clauses). These samples were then tagged using computer software and finally tabulated and compared for accuracy. ANCOVA revealed a significant difference in the frequency of relative clauses containing a relative pronoun, but not for those without a relative pronoun nor for reduced relative clauses. None of the structures were significantly correlated with age; however, frequencies of both relative clauses with and without relative pronouns were correlated with mean length of utterance. Kappa levels revealed that agreement between manual and automated coding was relatively high for each relative clause type and highest for relative clauses containing relative pronouns.
APA, Harvard, Vancouver, ISO, and other styles
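Several of the theses above report manual-versus-automated agreement as Cohen's kappa, which corrects raw agreement for chance. A minimal sketch of that statistic, with made-up per-utterance labels ("rel" vs. "none") standing in for real coding data:

```python
# Cohen's kappa between two label sequences of equal length:
# kappa = (observed agreement - chance agreement) / (1 - chance agreement).

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal label proportions
    expected = sum((ca[label] / n) * (cb[label] / n)
                   for label in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

manual    = ["rel", "none", "rel", "none", "none", "rel", "none", "none"]
automated = ["rel", "none", "rel", "none", "rel",  "rel", "none", "none"]
print(cohens_kappa(manual, automated))  # 0.75: one disagreement in eight
```

Kappa near 0.88, as in the Michaelis thesis, indicates agreement well above chance; kappa near zero means the automated coder is doing no better than guessing from label frequencies.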
8

Janis, Sarah Elizabeth. "A Comparison of Manual and Automated Grammatical Precoding on the Accuracy of Automated Developmental Sentence Scoring." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/5892.

Full text
Abstract:
Developmental Sentence Scoring (DSS) is a standardized language sample analysis procedure that evaluates and scores a child's use of standard American-English grammatical rules within complete sentences. Automated DSS programs have the potential to increase the efficiency and reduce the amount of time required for DSS analysis. The present study examines the accuracy of one automated DSS software program, DSSA 2.0, compared to manual DSS scoring on previously collected language samples from 30 children between the ages of 2;5 and 7;11 (years;months). Additionally, this study seeks to determine the source of error in the automated score by comparing DSSA 2.0 analysis given manually versus automatically assigned grammatical tag input. The overall accuracy of DSSA 2.0 was 86%; the accuracy of individual grammatical category-point value scores varied greatly. No statistically significant difference was found between the two DSSA 2.0 input conditions (manual vs. automated tags), suggesting that the underlying grammatical tagging is not the primary source of error in DSSA 2.0 analysis.
APA, Harvard, Vancouver, ISO, and other styles
9

Redd, Nicole. "Automated Grammatical Analysis of Language Samples from Spanish-Speaking Children Learning English." BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/410.

Full text
Abstract:
Research has demonstrated that automated grammatical tagging is fast and accurate for both English and Spanish child language, but there has been no research done regarding its accuracy with bilingual children. The present study examined this topic using English and Spanish language samples taken from 254 children living in the United States. The subjects included school-aged children enrolled in public schools in the United States in grades 2, 3, or 5. The present study found high automated grammatical tagging accuracy scores for both English (M = 96.4%) and Spanish (M = 96.8%). The study suggests that automated grammatical analysis has potential to be a valuable tool for clinicians in the analysis of the language of bilingual children.
APA, Harvard, Vancouver, ISO, and other styles
10

Millet, Deborah. "Automated Grammatical Tagging of Language Samples from Children with and without Language Impairment." BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/1139.

Full text
Abstract:
Grammatical classification ("tagging") of words in language samples is a component of syntactic analysis for both clinical and research purposes. Previous studies have shown that probability-based software can be used to tag samples from adults and typically-developing children with high (about 95%) accuracy. The present study found that similar accuracy can be obtained in tagging samples from school-aged children with and without language impairment if the software uses tri-gram rather than bi-gram probabilities and large corpora are used to obtain probability information to train the tagging software.
APA, Harvard, Vancouver, ISO, and other styles
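The tri-gram versus bi-gram distinction in the Millet thesis is about how much tag context the tagger conditions on: P(tag | two preceding tags) rather than P(tag | one preceding tag). A minimal sketch of the tri-gram estimation step, with a three-sentence toy corpus standing in for the large training corpora the thesis describes (real taggers also smooth unseen contexts and combine these probabilities with word-emission probabilities):

```python
# Maximum-likelihood tag-trigram probabilities from a tagged corpus:
# P(t3 | t1, t2) = count(t1, t2, t3) / count(t1, t2).

from collections import defaultdict

def train_trigrams(tag_sequences):
    """Count tag trigrams and the bigram contexts they extend."""
    tri, bi = defaultdict(int), defaultdict(int)
    for tags in tag_sequences:
        padded = ["<s>", "<s>"] + tags   # sentence-start padding
        for i in range(2, len(padded)):
            tri[(padded[i - 2], padded[i - 1], padded[i])] += 1
            bi[(padded[i - 2], padded[i - 1])] += 1
    return tri, bi

def trigram_prob(tri, bi, t1, t2, t3):
    """P(t3 | t1, t2) by maximum likelihood; 0.0 for unseen contexts."""
    context = bi.get((t1, t2), 0)
    return tri.get((t1, t2, t3), 0) / context if context else 0.0

corpus = [["DET", "NOUN", "VERB"], ["DET", "NOUN", "NOUN"], ["DET", "ADJ", "NOUN"]]
tri, bi = train_trigrams(corpus)
print(trigram_prob(tri, bi, "DET", "NOUN", "VERB"))  # 0.5 in this toy corpus
```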
More sources

Books on the topic "Automated language sample analysis"

1

Strycharz, Theodore M. Analysis of Defense Language Institute automated student questionnaire data. Monterey, Calif: Naval Postgraduate School, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hickey, Raymond. Corpus presenter: Software for language analysis with a manual and "A corpus of Irish English" as sample data. Philadelphia: John Benjamins Pub., 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Grover, Herbert J., Barbara J. Leadholm, and Jon F. Miller. Language Sample Analysis: The Wisconsin Guide. Wisconsin Department of Public Instruction, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Leadholm, Barbara J., ed. Language sample analysis: The Wisconsin guide. Madison, WI: Wisconsin Dept. of Public Instruction, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Spectral analysis for automated exploration and sample acquisition. Pasadena, Calif: National Aeronautics and Space Administration, Jet Propulsion Laboratory, California Institute of Technology, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhu, Li, ed. Error Analysis of 900 Sample Sentences (Intermediate Level). 2nd ed. Sinolingua, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Meizhen, Cheng, and Li Zhu, eds. Han yu bing ju bian xi jiu bai li: Error analysis of 900 sample sentences -- for Chinese learners from English speaking countries. Beijing: Hua yu jiao xue chu ban she, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Grishman, Ralph. Information Extraction. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0030.

Full text
Abstract:
Information extraction (IE) is the automatic identification of selected types of entities, relations, or events in free text. This article appraises two specific strands of IE — name identification and classification, and event extraction. Conventional treatment of languages pays little attention to proper names, addresses etc. Presentations of language analysis generally look up words in a dictionary and identify them as nouns etc. The incessant presence of names in a text, makes linguistic analysis of the same difficult, in the absence of the names being identified by their types and as linguistic units. Name tagging involves creating, several finite-state patterns, each corresponding to some noun subset. Elements of the patterns would match specific/classes of tokens with particular features. Event extraction typically works by creating a series of regular expressions, customized to capture the relevant events. Enhancement of each expression is corresponded by a relevant, suitable enhancement in the event patterns.
APA, Harvard, Vancouver, ISO, and other styles
10

André, Elisabeth. Natural Language in Multimodal and Multimedia Systems. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0036.

Full text
Abstract:
Recent years have witnessed rapid growth in the development of multimedia applications. Improving technology and tools enable the creation of large multimedia archives and the development of completely new styles of interaction. This article provides a survey of multimedia applications in which natural language plays a significant role. Conventional multimodal systems usually do not maintain explicit representations of the user's input and handle mode integration only in an elementary manner. This article shows how generalizing the techniques and representation formalisms developed for the analysis of natural language can help to overcome some of these problems. It surveys techniques for building automated multimedia presentation systems, drawing upon lessons learned during the development of natural language generators. Finally, it argues that the integration of natural language technology can lead to a qualitative improvement of existing methods for document classification and analysis.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Automated language sample analysis"

1

Angluin, Dana, Dana Fisman, and Yaara Shoval. "Polynomial Identification of ω-Automata." In Tools and Algorithms for the Construction and Analysis of Systems, 325–43. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45237-7_20.

Full text
Abstract:
We study identification in the limit using polynomial time and data for models of ω-automata. On the negative side, we show that non-deterministic ω-automata (of types Büchi, coBüchi, parity, or Muller) cannot be polynomially learned in the limit. On the positive side, we show that the ω-language classes IB, IC, IP, and IM that are defined by deterministic Büchi, coBüchi, parity, and Muller acceptors isomorphic to their right-congruence automata (that is, the right congruences of languages in these classes are fully informative) are identifiable in the limit using polynomial time and data. We further show that for these classes a characteristic sample can be constructed in polynomial time.
APA, Harvard, Vancouver, ISO, and other styles
2

Picot, V. S., and R. D. McDowall. "Experiences with Automated Sample Preparation in Bioanalysis." In Sample Preparation for Biomedical and Environmental Analysis, 61–69. Boston, MA: Springer US, 1994. http://dx.doi.org/10.1007/978-1-4899-1328-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Meseguer, José, and Grigore Roşu. "Rewriting Logic Semantics: From Language Specifications to Formal Analysis Tools." In Automated Reasoning, 1–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-25984-8_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chapman, Martin, Hana Chockler, Pascal Kesseli, Daniel Kroening, Ofer Strichman, and Michael Tautschnig. "Learning the Language of Error." In Automated Technology for Verification and Analysis, 114–30. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24953-7_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Avellaneda, Florent, Silvano Dal Zilio, and Jean-Baptiste Raclet. "Solving Language Equations Using Flanked Automata." In Automated Technology for Verification and Analysis, 106–21. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46520-3_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ganty, Pierre, Boris Köpf, and Pedro Valero. "A Language-Theoretic View on Network Protocols." In Automated Technology for Verification and Analysis, 363–79. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68167-2_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Briggs, R. J., and D. Stevenson. "A Note on Sampling and Analysis of Volatile Organics Using Automated Thermal Desorption." In Sample Preparation for Biomedical and Environmental Analysis, 211–17. Boston, MA: Springer US, 1994. http://dx.doi.org/10.1007/978-1-4899-1328-9_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Taylor, M. T., P. Belgrader, R. Joshi, G. A. Kintz, and M. A. Northrup. "Fully Automated Sample Preparation for Pathogen Detection Performed in a Microfluidic Cassette." In Micro Total Analysis Systems 2001, 670–72. Dordrecht: Springer Netherlands, 2001. http://dx.doi.org/10.1007/978-94-010-1015-3_292.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Skórzewski, Paweł, Krzysztof Jassem, and Filip Graliński. "Automated Normalization and Analysis of Historical Texts." In Human Language Technology. Challenges for Computer Science and Linguistics, 73–86. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66527-2_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Macher, Georg, Omar Veledar, Markus Bachinger, Andreas Kager, Michael Stolz, and Christian Kreiner. "Integration Analysis of a Transmission Unit for Automated Driving Vehicles." In Developments in Language Theory, 290–301. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99229-7_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Automated language sample analysis"

1

Gorman, Kyle, Steven Bedrick, Geza Kiss, Eric Morley, Rosemary Ingham, Metrah Mohammed, Katina Papadakis, and Jan van Santen. "Automated morphological analysis of clinical language samples." In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality. Stroudsburg, PA, USA: Association for Computational Linguistics, 2015. http://dx.doi.org/10.3115/v1/w15-1213.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Krout, S., and I. DeGraff. "369. Automated Sample Prep for Sorbent Tube Analysis." In AIHce 2000. AIHA, 2000. http://dx.doi.org/10.3320/1.2763719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Archer, M., J. S. Erickson, L. R. Hilliard, P. B. Howell, Jr., D. A. Stenger, F. S. Ligler, and B. Lin. "Components for automated microfluidics sample preparation and analysis." In Integrated Optoelectronic Devices 2008, edited by Joel A. Kubby and Graham T. Reed. SPIE, 2008. http://dx.doi.org/10.1117/12.766492.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ballacey, H., A. Al-Ibadi, G. Macgrogan, J. P. Guillet, E. MacPherson, and P. Mounaix. "Automated data and image processing for biomedical sample analysis." In 2016 41st International Conference on Infrared, Millimeter, and Terahertz waves (IRMMW-THz). IEEE, 2016. http://dx.doi.org/10.1109/irmmw-thz.2016.7758882.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jafari, Bahman, Ali Khaloo, and David Lattanzi. "Tracking Structural Deformations via Automated Sample-Based Point Cloud Analysis." In ASCE International Workshop on Computing in Civil Engineering 2017. Reston, VA: American Society of Civil Engineers, 2017. http://dx.doi.org/10.1061/9780784480823.048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Camburn, Bradley, Yuejun He, Sujithra Raviselvam, Jianxi Luo, and Kristin Wood. "Evaluating Crowdsourced Design Concepts With Machine Learning." In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/detc2019-97285.

Full text
Abstract:
Automation has enabled the design of increasingly complex products, services, and systems. Advanced technology enables designers to automate repetitive tasks in earlier design phases, even high-level conceptual ideation. One particularly repetitive task in ideation is processing the large concept sets that can be developed through crowdsourcing. This paper introduces a method for filtering, categorizing, and rating large sets of design concepts. It leverages unsupervised machine learning (ML) trained on open-source databases. Input design concepts are written in natural language. The concepts are not pre-tagged, structured, or processed in any way that requires human intervention, nor does the approach require dedicated training on a sample set of designs. Concepts are assessed at the sentence level via a mixture of named entity tagging (keywords) through contextual sense recognition and topic tagging (sentence topic) through probabilistic mapping to a knowledge graph. The method also includes a filtering strategy, two metrics analogous to the design creativity metrics of novelty and level of detail, and a selection strategy for assessing design concepts. To test the method, four ideation cases were studied; over 4,000 concepts were generated and evaluated. Analyses include an asymptotic convergence analysis, a predictive industry case study, and a dominance test between several approaches to selecting high-ranking concepts. Notably, in a series of binary comparisons between concepts selected from the entire set by a time-limited human and those with the highest ML metric scores, the ML-selected concepts were dominant.
APA, Harvard, Vancouver, ISO, and other styles
7

Gourishankar, Yamini, and Frank Weisgerber. "Automation of Wind Pressure Computation Using a Data Management Approach." In ASME 1993 International Computers in Engineering Conference and Exposition. American Society of Mechanical Engineers, 1993. http://dx.doi.org/10.1115/edm1993-0107.

Full text
Abstract:
It is observed that calculating wind pressures on structures involves more data retrieval from the ASCE standard than subjective reasoning on the designer's part. Once the initial design requirements are established, the procedure involved in the computation is straightforward. This paper discusses an approach to automating wind pressure computation for one-story and multi-story buildings using a data management strategy (implemented using the ORACLE database management system). In the prototype system developed herein, the designer supplies the design requirements in the form of the structure's exposure type, its dimensions, and the nature of occupancy of the structure. Using these requirements, the program retrieves the necessary standards data from an independently maintained database and computes the wind pressures. The final output contains the wind pressures on the main wind force resisting system and on the components and claddings, for wind blowing parallel and perpendicular to the ridge. The knowledge encoded in the system was gained from ASCE codes, design guidelines, and interviews with various experts and practitioners. Several information modeling methodologies, such as the entity relationship model and IDEF1X, were employed in the system analysis and design phase of this project. The prototype is implemented on an IBM PC using the ORACLE DBMS and the C programming language. Appendix A illustrates a sample run.
APA, Harvard, Vancouver, ISO, and other styles
8

Gruzman, Igor S., and Lubov N. Pelepenko. "A development of algorithms for an automated blood sample image analysis." In 2016 13th International Scientific-Technical Conference on Actual Problems of Electronics Instrument Engineering (APEIE). IEEE, 2016. http://dx.doi.org/10.1109/apeie.2016.7802204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sebastian, Y., Brian C. S. Loh, and Patrick H. H. Then. "Towards natural language interface framework for automated medical analysis." In 2009 2nd IEEE International Conference on Computer Science and Information Technology. IEEE, 2009. http://dx.doi.org/10.1109/iccsit.2009.5234782.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Pagé-Perron, Émilie, Maria Sukhareva, Ilya Khait, and Christian Chiarcos. "Machine Translation and Automated Analysis of the Sumerian Language." In Proceedings of the Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature. Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/w17-2202.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Automated language sample analysis"

1

Pin, F. G., E. M. Oblow, and R. Q. Wright. Automated sensitivity analysis using the GRESS language. Office of Scientific and Technical Information (OSTI), April 1986. http://dx.doi.org/10.2172/6022495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Stills, Morgan. Language Sample Length Effects on Various Lexical Diversity Measures: An Analysis of Spanish Language Samples from Children. Portland State University Library, January 2016. http://dx.doi.org/10.15760/honors.250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Salter, R., Quyen Dong, Cody Coleman, Maria Seale, Alicia Ruvinsky, LaKenya Walker, and W. Bond. Data Lake Ecosystem Workflow. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40203.

Full text
Abstract:
The Engineer Research and Development Center, Information Technology Laboratory’s (ERDC-ITL’s) Big Data Analytics team specializes in the analysis of large-scale datasets with capabilities across four research areas that require vast amounts of data to inform and drive analysis: large-scale data governance, deep learning and machine learning, natural language processing, and automated data labeling. Unfortunately, data transfer between government organizations is a complex and time-consuming process requiring coordination of multiple parties across multiple offices and organizations. Past successes in large-scale data analytics have placed a significant demand on ERDC-ITL researchers, highlighting that few individuals fully understand how to successfully transfer data between government organizations; future project success therefore depends on a small group of individuals to efficiently execute a complicated process. The Big Data Analytics team set out to develop a standardized workflow for the transfer of large-scale datasets to ERDC-ITL, in part to educate peers and future collaborators on the process required to transfer datasets between government organizations. Researchers also aim to increase workflow efficiency while protecting data integrity. This report provides an overview of the created Data Lake Ecosystem Workflow by focusing on the six phases required to efficiently transfer large datasets to supercomputing resources located at ERDC-ITL.
APA, Harvard, Vancouver, ISO, and other styles
4

de Caritat, Patrice, Brent McInnes, and Stephen Rowins. Towards a heavy mineral map of the Australian continent: a feasibility study. Geoscience Australia, 2020. http://dx.doi.org/10.11636/record.2020.031.

Full text
Abstract:
Heavy minerals (HMs) are minerals with a specific gravity greater than 2.9 g/cm3. They are commonly highly resistant to physical and chemical weathering and therefore persist in sediments as lasting indicators of the (former) presence of the rocks in which they formed. The presence or absence of certain HMs, their associations with other HMs, their concentration levels, and the geochemical patterns they form in maps or 3D models can be indicative of the geological processes that contributed to their formation. Furthermore, trace element and isotopic analyses of HMs have been used to vector to mineralisation or constrain the timing of geological processes. The positive role of HMs in mineral exploration is well established in other countries but comparatively little understood in Australia. Here we present the results of a pilot project designed to establish, test, and assess a workflow to produce a HM map (or atlas of maps) and dataset for Australia. This would represent a critical step in the ability to detect anomalous HM patterns, as it would establish the background HM characteristics (i.e., those unrelated to mineralisation). Further, the extremely rich dataset produced would be a valuable input into any future machine learning or big data-based prospectivity analysis. The pilot project consisted of selecting ten sites from the National Geochemical Survey of Australia (NGSA) and separating and analysing the HM contents of the 75-430 µm grain-size fraction of the top (0-10 cm depth) sediment samples. A workflow was established and tested based on the density separation of the HM-rich phase by combining a shake table and the use of dense liquids. The automated mineralogy quantification was performed on a TESCAN® Integrated Mineral Analyser (TIMA) that identified and mapped thousands of grains in a matter of minutes for each sample.
The results indicated that: (1) the NGSA samples are appropriate for HM analysis; (2) over 40 HMs were effectively identified and quantified using TIMA automated quantitative mineralogy; (3) the resultant HM mineralogy is consistent with the samples' bulk geochemistry and regional geological setting; and (4) the HM makeup of the NGSA samples varied across the country, as shown by the mineral mounts and preliminary maps. Based on these observations, HM mapping of the continent using NGSA samples will likely result in coherent and interpretable geological patterns relating to bedrock lithology, metamorphic grade, degree of alteration, and mineralisation. It could assist in geological investigations, especially where outcrop is minimal, challenging to correctly attribute due to extensive weathering, or simply difficult to access. A continental-scale HM atlas for Australia could assist in derisking mineral exploration and lead to investment, e.g., via tenement uptake, exploration, discovery, and ultimately exploitation. As some HMs host technology-critical elements such as rare earth elements, their systematic and internally consistent quantification and mapping could lead to resource discoveries essential for a more sustainable, lower-carbon economy.
APA, Harvard, Vancouver, ISO, and other styles