To see the other types of publications on this topic, follow the link: Automated Developmental Sentence Scoring.

Journal articles on the topic 'Automated Developmental Sentence Scoring'

Consult the top 50 journal articles for your research on the topic 'Automated Developmental Sentence Scoring.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Channell, Ron W. "Automated Developmental Sentence Scoring Using Computerized Profiling Software." American Journal of Speech-Language Pathology 12, no. 3 (August 2003): 369–75. http://dx.doi.org/10.1044/1058-0360(2003/082).

2

Hughes, Diana L., Marc E. Fey, and Steven H. Long. "Developmental sentence scoring." Topics in Language Disorders 12, no. 2 (February 1992): 1–12. http://dx.doi.org/10.1097/00011363-199202000-00003.

3

Miyata, Susanne, Brian MacWhinney, Kiyoshi Otomo, Hidetosi Sirai, Yuriko Oshima-Takane, Makiko Hirakawa, Yasuhiro Shirai, Masatoshi Sugiura, and Keiko Itoh. "Developmental Sentence Scoring for Japanese." First Language 33, no. 2 (March 21, 2013): 200–216. http://dx.doi.org/10.1177/0142723713479436.

4

Hughes, Diana L., Marc E. Fey, Marilyn K. Kertoy, and Nickola Wolf Nelson. "Computer-Assisted Instruction for Learning Developmental Sentence Scoring." American Journal of Speech-Language Pathology 3, no. 3 (September 1994): 89–95. http://dx.doi.org/10.1044/1058-0360.0303.89.

5

Pham, Giang, and Kerry Danahy Ebert. "Diagnostic Accuracy of Sentence Repetition and Nonword Repetition for Developmental Language Disorder in Vietnamese." Journal of Speech, Language, and Hearing Research 63, no. 5 (May 22, 2020): 1521–36. http://dx.doi.org/10.1044/2020_jslhr-19-00366.

Abstract:
Purpose Sentence repetition and nonword repetition assess different aspects of the linguistic system, but both have been proposed as potential tools to identify children with developmental language disorder (DLD). Cross-linguistic investigation of diagnostic tools for DLD contributes to an understanding of the core features of the disorder. This study evaluated the effectiveness of these tools for the Vietnamese language. Method A total of 104 kindergartners (aged 5;2–6;2 [years;months]) living in Vietnam participated, of which 94 were classified as typically developing and 10 with DLD. Vietnamese sentence repetition and nonword repetition tasks were administered and scored using multiple scoring systems. Sensitivity, specificity, and likelihood ratios were calculated to assess the ability of these tasks to identify DLD. Results All scoring systems on both tasks achieved adequate to excellent sensitivity or specificity, but not both. Binary scoring of sentence repetition achieved a perfect negative likelihood ratio, and binary scoring of nonword repetition approached a highly informative positive likelihood ratio. More detailed scoring systems for both tasks achieved moderately informative values for both negative and positive likelihood ratios. Conclusions Both sentence repetition and nonword repetition are valuable tools for identifying DLD in monolingual speakers of Vietnamese. Scoring systems that consider number of errors and are relatively simple (i.e., error scoring of sentence repetition and syllables scoring of nonword repetition) may be the most efficient and effective for identifying DLD. Further work to develop and refine these tasks can contribute to cross-linguistic knowledge of DLD as well as to clinical practice.
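The sensitivity, specificity, and likelihood ratios reported above follow their standard definitions, so a minimal sketch of how such values are computed from a 2x2 classification table may help readers who wish to reproduce them. The Python function and the counts below are illustrative only and are not taken from the study.

```python
# Illustrative sketch: sensitivity, specificity, and likelihood ratios
# from a 2x2 classification table (hypothetical counts, not the study's data).

def diagnostic_accuracy(tp, fn, tn, fp):
    """Return sensitivity, specificity, LR+ and LR- for a binary screen."""
    sensitivity = tp / (tp + fn)            # proportion of true DLD cases flagged
    specificity = tn / (tn + fp)            # proportion of typically developing children passed
    lr_pos = sensitivity / (1 - specificity) if specificity < 1 else float("inf")
    lr_neg = (1 - sensitivity) / specificity
    return sensitivity, specificity, lr_pos, lr_neg

# Made-up example: 9 of 10 DLD children flagged, 85 of 94 TD children passed.
sens, spec, lr_pos, lr_neg = diagnostic_accuracy(tp=9, fn=1, tn=85, fp=9)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
```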
6

Kumar, K. Chandra, and Dr Sudhakar Nagalla. "Artificial Intelligence and Sentence Scoring: for Automated Summary Creation from Large-scale Documents." Journal of Advanced Research in Dynamical and Control Systems 11, no. 10-SPECIAL ISSUE (October 31, 2019): 1167–79. http://dx.doi.org/10.5373/jardcs/v11sp10/20192960.

7

Williamson, David M., Isaac I. Bejar, and Anne Sax. "Automated Tools for Subject Matter Expert Evaluation of Automated Scoring." Applied Measurement in Education 17, no. 4 (October 2004): 323–57. http://dx.doi.org/10.1207/s15324818ame1704_1.

8

Deane, Paul, Thomas Quinlan, and Irene Kostin. "AUTOMATED SCORING WITHIN A DEVELOPMENTAL, COGNITIVE MODEL OF WRITING PROFICIENCY." ETS Research Report Series 2011, no. 1 (June 2011): i—93. http://dx.doi.org/10.1002/j.2333-8504.2011.tb02252.x.

9

Holdgrafer, Gary. "Comparison of two Methods for Scoring Syntactic Complexity." Perceptual and Motor Skills 81, no. 2 (October 1995): 498. http://dx.doi.org/10.1177/003151259508100227.

Abstract:
Summary scores for Developmental Sentence Scoring and the Index of Productive Syntax were obtained from the language samples of 29 preterm children at preschool age. A moderate correlation was obtained between these two measures of syntactic complexity. Only the Index of Productive Syntax scores distinguished the language abilities of the 19 neurologically normal children from those of the 10 neurologically suspect children.
10

Balogh, Jennifer, Jared Bernstein, Jian Cheng, Alistair Van Moere, Brent Townshend, and Masanori Suzuki. "Validation of Automated Scoring of Oral Reading." Educational and Psychological Measurement 72, no. 3 (November 28, 2011): 435–52. http://dx.doi.org/10.1177/0013164411412590.

Abstract:
A two-part experiment is presented that validates a new measurement tool for scoring oral reading ability. Data collected by the U.S. government in a large-scale literacy assessment of adults were analyzed by a system called VersaReader that uses automatic speech recognition and speech processing technologies to score oral reading fluency. In the first part of the experiment, human raters rated oral reading performances to establish a criterion measure for comparisons with the machine scores. The goal was to measure the reliability of ratings from human raters and to determine whether or not the human raters biased their ratings in favor of or against three groups of readers: Spanish speakers, African Americans, and all other native English speakers. The result of the experiment showed that ratings from skilled human raters were extremely reliable. In addition, there was no observed scoring bias for human raters. The second part of the experiment was designed to compare the criterion human ratings with scores generated by VersaReader. Correlations between VersaReader scores and human ratings approached unity. Using G-Theory, the results showed that machine scores were almost identical to scores from human raters. Finally, the results revealed no bias in the machine scores. Implications for large-scale assessments are discussed.
11

Wang, Na, Lin Xu, Li Yao Li, and Lu Xiong Xu. "Design and Implementation of an Automatic Scoring Subjective Question System Based on Domain Ontology." Advanced Materials Research 753-755 (August 2013): 3039–42. http://dx.doi.org/10.4028/www.scientific.net/amr.753-755.3039.

Abstract:
Automated assessment technology for subjective tests is one of the key techniques of exam systems. This paper proposes a model based on a domain ontology that exam systems can use to score subjective test items. After reviewing the current state of research on automated assessment of subjective tests, the paper studies how to construct a domain ontology, taking the software engineering domain as an example. Semantic similarity calculation based on the domain ontology is used for automatic assessment. The assessment system divides a sentence into a series of phrases using natural language processing and derives the score by evaluating the semantic similarity of the student's answer. Experiments show that the system's results are credible and its scoring errors acceptable, demonstrating the feasibility and applicability of the approach.
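As a rough illustration of similarity-based answer scoring of the kind described above, the Python sketch below scores a student answer against a reference answer using simple token-set (Jaccard) overlap. The paper's ontology-based semantic similarity is not reproduced here, and the answer strings and point value are hypothetical.

```python
# Minimal sketch of similarity-based scoring of a subjective answer. A real system
# would replace the token-overlap measure with ontology-based semantic similarity.

import re

def tokens(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def score_answer(student, reference, max_points=5.0):
    """Jaccard similarity between answers, scaled to the item's point value."""
    s, r = tokens(student), tokens(reference)
    similarity = len(s & r) / len(s | r) if (s | r) else 0.0
    return round(similarity * max_points, 2)

print(score_answer(
    "A use case diagram shows interactions between actors and the system",
    "Use case diagrams model the interactions of external actors with the system",
))
```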
12

Yang, Yongwei, Chad W. Buckendahl, Piotr J. Juszkiewicz, and Dennison S. Bhola. "A Review of Strategies for Validating Computer-Automated Scoring." Applied Measurement in Education 15, no. 4 (October 2002): 391–412. http://dx.doi.org/10.1207/s15324818ame1504_04.

13

Grimaldi, Phillip J., and Jeffrey D. Karpicke. "Guided retrieval practice of educational materials using automated scoring." Journal of Educational Psychology 106, no. 1 (2014): 58–68. http://dx.doi.org/10.1037/a0033208.

14

Williamson, David M., Isaac I. Bejar, and Anne S. Hone. "'Mental Model' Comparison of Automated and Human Scoring." Journal of Educational Measurement 36, no. 2 (June 1999): 158–84. http://dx.doi.org/10.1111/j.1745-3984.1999.tb00552.x.

15

Shermis, Mark D., Sue Lottridge, and Elijah Mayfield. "The Impact of Anonymization for Automated Essay Scoring." Journal of Educational Measurement 52, no. 4 (November 2015): 419–36. http://dx.doi.org/10.1111/jedm.12093.

16

Attali, Yigal, and Donald Powers. "Validity of Scores for a Developmental Writing Scale Based on Automated Scoring." Educational and Psychological Measurement 69, no. 6 (March 11, 2009): 978–93. http://dx.doi.org/10.1177/0013164409332217.

17

Correnti, Richard, Lindsay Clare Matsumura, Elaine Wang, Diane Litman, Zahra Rahimi, and Zahid Kisa. "Automated Scoring of Students’ Use of Text Evidence in Writing." Reading Research Quarterly 55, no. 3 (November 13, 2019): 493–520. http://dx.doi.org/10.1002/rrq.281.

18

Lee, Gyoung Ho, and Kong Joo Lee. "Developing an Automated English Sentence Scoring System for Middle-school Level Writing Test by Using Machine Learning Techniques." Journal of KIISE 41, no. 11 (November 15, 2014): 911–20. http://dx.doi.org/10.5626/jok.2014.41.11.911.

19

Wilson, Joshua, and Jessica Rodrigues. "Classification accuracy and efficiency of writing screening using automated essay scoring." Journal of School Psychology 82 (October 2020): 123–40. http://dx.doi.org/10.1016/j.jsp.2020.08.008.

20

Kersting, Nicole B., Bruce L. Sherin, and James W. Stigler. "Automated Scoring of Teachers’ Open-Ended Responses to Video Prompts." Educational and Psychological Measurement 74, no. 6 (February 5, 2014): 950–74. http://dx.doi.org/10.1177/0013164414521634.

21

Raczynski, Kevin, and Allan Cohen. "Appraising the scoring performance of automated essay scoring systems—Some additional considerations: Which essays? Which human raters? Which scores?" Applied Measurement in Education 31, no. 3 (April 12, 2018): 233–40. http://dx.doi.org/10.1080/08957347.2018.1464449.

22

LaVoie, Noelle, James Parker, Peter J. Legree, Sharon Ardison, and Robert N. Kilcullen. "Using Latent Semantic Analysis to Score Short Answer Constructed Responses: Automated Scoring of the Consequences Test." Educational and Psychological Measurement 80, no. 2 (July 9, 2019): 399–414. http://dx.doi.org/10.1177/0013164419860575.

Abstract:
Automated scoring based on Latent Semantic Analysis (LSA) has been successfully used to score essays and constrained short answer responses. Scoring tests that capture open-ended, short answer responses poses some challenges for machine learning approaches. We used LSA techniques to score short answer responses to the Consequences Test, a measure of creativity and divergent thinking that encourages a wide range of potential responses. Analyses demonstrated that the LSA scores were highly correlated with conventional Consequence Test scores, reaching a correlation of .94 with human raters and were moderately correlated with performance criteria. This approach to scoring short answer constructed responses solves many practical problems including the time for humans to rate open-ended responses and the difficulty in achieving reliable scoring.
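For readers unfamiliar with LSA-style scoring, the Python sketch below projects responses into a small latent space with TF-IDF and truncated SVD and scores a new response by its cosine similarity to high-scoring exemplars. The corpus, exemplar indices, and dimensionality are invented for illustration and do not reflect the study's models.

```python
# Sketch of LSA-style scoring with scikit-learn: build a latent semantic space
# from a (placeholder) corpus of responses and score new responses by cosine
# similarity to exemplars that human raters scored highly.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

responses = [
    "people would need artificial light all the time",
    "plants would die without sunlight",
    "crops would fail and food would become scarce",
    "everyone would sleep at strange hours",
]
exemplar_ids = [1, 2]                      # indices of high-scoring exemplars (invented)

vectorizer = TfidfVectorizer()
svd = TruncatedSVD(n_components=3, random_state=0)   # tiny latent space for the demo
latent = svd.fit_transform(vectorizer.fit_transform(responses))

def lsa_score(new_response):
    """Score a response as its best cosine similarity to the exemplars."""
    vec = svd.transform(vectorizer.transform([new_response]))
    return float(cosine_similarity(vec, latent[exemplar_ids]).max())

print(round(lsa_score("without the sun, plants and crops would not survive"), 2))
```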
23

Clauser, Brian E., Michael T. Kane, and David B. Swanson. "Validity Issues for Performance-Based Tests Scored With Computer-Automated Scoring Systems." Applied Measurement in Education 15, no. 4 (October 2002): 413–32. http://dx.doi.org/10.1207/s15324818ame1504_05.

24

Wagner, Kyle, Alex Smith, Abigail Allen, Kristen McMaster, Apryl Poch, and Erica Lembke. "Exploration of New Complexity Metrics for Curriculum-Based Measures of Writing." Assessment for Effective Intervention 44, no. 4 (May 28, 2018): 256–66. http://dx.doi.org/10.1177/1534508418773448.

Abstract:
Researchers and practitioners have questioned whether scoring procedures used with curriculum-based measures of writing (CBM-W) capture growth in complexity of writing. We analyzed data from six independent samples to examine two potential scoring metrics for picture word CBM-W (PW), a sentence-level CBM task. Correct word sequences per response (CWSR) and words written per response (WWR) were compared with the current standard metric of correct word sequences (CWS). Linear regression analyses indicated that CWSR predicted scores on standardized norm-referenced criterion measures in more samples than did WWR or CWS. Future studies should explore the capacity of CWSR and WWR to show growth over time, stability, diagnostic accuracy, and utility for instructional decision making.
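A simplified Python sketch of the three metrics compared in the study is shown below. It assumes each written word has already been judged correct in context (the step that normally requires a trained scorer) and omits the finer conventions of CBM-W scoring, so it is an approximation rather than the study's procedure.

```python
# Simplified CBM-W metrics: correct word sequences (CWS), correct word sequences
# per response (CWSR), and words written per response (WWR). A word sequence is
# counted as correct here when two adjacent words are both judged correct.

def cbm_writing_metrics(responses):
    """responses: one list per picture-word prompt of (word, is_correct) pairs."""
    total_cws = 0
    total_words = 0
    for words in responses:
        flags = [ok for _, ok in words]
        total_cws += sum(1 for a, b in zip(flags, flags[1:]) if a and b)
        total_words += len(words)
    n = len(responses)
    return {
        "CWS": total_cws,              # correct word sequences, whole sample
        "CWSR": total_cws / n,         # correct word sequences per response
        "WWR": total_words / n,        # words written per response
    }

sample = [
    [("the", True), ("dog", True), ("runned", False), ("fast", True)],
    [("she", True), ("eats", True), ("an", True), ("apple", True)],
]
print(cbm_writing_metrics(sample))     # {'CWS': 4, 'CWSR': 2.0, 'WWR': 4.0}
```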
25

Jiao, Hong, Junhui Liu, Kathleen Haynie, Ada Woo, and Jerry Gorham. "Comparison Between Dichotomous and Polytomous Scoring of Innovative Items in a Large-Scale Computerized Adaptive Test." Educational and Psychological Measurement 72, no. 3 (November 8, 2011): 493–509. http://dx.doi.org/10.1177/0013164411422903.

Abstract:
This study explored the impact of partial credit scoring of one type of innovative items (multiple-response items) in a computerized adaptive version of a large-scale licensure pretest and operational test settings. The impacts of partial credit scoring on the estimation of the ability parameters and classification decisions in operational test settings were explored in one real data analysis and two simulation studies when two different polytomous scoring algorithms, automated polytomous scoring and rater-generated polytomous scoring, were applied. For the real data analyses, the ability estimates from dichotomous and polytomous scoring were highly correlated; the classification consistency between different scoring algorithms was nearly perfect. Information distribution changed slightly in the operational item bank. In the two simulation studies comparing each polytomous scoring with dichotomous scoring, the ability estimates resulting from polytomous scoring had slightly higher measurement precision than those resulting from dichotomous scoring. The practical impact related to classification decision was minor because of the extremely small number of items that could be scored polytomously in this current study.
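The contrast between dichotomous and automated partial-credit scoring of a multiple-response item can be illustrated with a short Python sketch. The partial-credit rule shown (credit for each correctly classified option) is one plausible rule rather than the study's algorithm, and the item content is hypothetical.

```python
# Dichotomous (all-or-nothing) versus a simple partial-credit rule for a
# multiple-response item. Keyed answer, options, and response are invented.

def dichotomous_score(response, key):
    return 1 if response == key else 0

def partial_credit_score(response, key, options):
    """Fraction of options the examinee classified correctly (selected vs. not)."""
    correct = sum(1 for opt in options if (opt in response) == (opt in key))
    return correct / len(options)

options = {"A", "B", "C", "D", "E"}
key = {"A", "C", "D"}
response = {"A", "C"}                      # missed option D, no false selections

print(dichotomous_score(response, key))                           # 0
print(round(partial_credit_score(response, key, options), 2))     # 0.8
```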
26

Eisenberg, Sarita L., Ling-Yu Guo, and Emily Mucchetti. "Eliciting the Language Sample for Developmental Sentence Scoring: A Comparison of Play With Toys and Elicited Picture Description." American Journal of Speech-Language Pathology 27, no. 2 (May 3, 2018): 633–46. http://dx.doi.org/10.1044/2017_ajslp-16-0161.

27

Wilson, Joshua. "Universal screening with automated essay scoring: Evaluating classification accuracy in grades 3 and 4." Journal of School Psychology 68 (June 2018): 19–37. http://dx.doi.org/10.1016/j.jsp.2017.12.005.

28

Rupp, André A. "Designing, evaluating, and deploying automated scoring systems with validity in mind: Methodological design decisions." Applied Measurement in Education 31, no. 3 (April 18, 2018): 191–214. http://dx.doi.org/10.1080/08957347.2018.1464448.

29

Philip, Rohit C., Jeffrey J. Rodriguez, Maki Niihori, Ross H. Francis, Jordan A. Mudery, Justin S. Caskey, Elizabeth Krupinski, and Abraham Jacob. "Automated High-Throughput Damage Scoring of Zebrafish Lateral Line Hair Cells After Ototoxin Exposure." Zebrafish 15, no. 2 (April 2018): 145–55. http://dx.doi.org/10.1089/zeb.2017.1451.

30

Overton, Courtney, Taylor Baron, Barbara Zurer Pearson, and Nan Bernstein Ratner. "Using Free Computer-Assisted Language Sample Analysis to Evaluate and Set Treatment Goals for Children Who Speak African American English." Language, Speech, and Hearing Services in Schools 52, no. 1 (January 18, 2021): 31–50. http://dx.doi.org/10.1044/2020_lshss-19-00107.

Abstract:
Purpose Spoken language sample analysis (LSA) is widely considered to be a critical component of assessment for child language disorders. It is our best window into a preschool child's everyday expressive communicative skills. However, historically, the process can be cumbersome, and reference values against which LSA findings can be “benchmarked” are based on surprisingly little data. Moreover, current LSA protocols potentially disadvantage speakers of nonmainstream English varieties, such as African American English (AAE), blurring the line between language difference and disorder. Method We provide a tutorial on the use of free software (Computerized Language Analysis [CLAN]) enabled by the ongoing National Institute on Deafness and Other Communication Disorders–funded “Child Language Assessment Project.” CLAN harnesses the advanced computational power of the Child Language Data Exchange System archive ( www.childes.talkbank.org ), with an aim to develop and test fine-grained and potentially language variety–sensitive benchmarks for a range of LSA measures. Using retrospective analysis of data from AAE-speaking children, we demonstrate how CLAN LSA can facilitate dialect-fair assessment and therapy goal setting. Results Using data originally collected to norm the Diagnostic Evaluation of Language Variation, we suggest that Developmental Sentence Scoring does not appear to bias against children who speak AAE but does identify children who have language impairment (LI). Other LSA measure scores were depressed in the group of AAE-speaking children with LI but did not consistently differentiate individual children as LI. Furthermore, CLAN software permits rapid, in-depth analysis using Developmental Sentence Scoring and the Index of Productive Syntax that can identify potential intervention targets for children with developmental language disorder.
31

Clauser, Brian E., David B. Swanson, and Stephen G. Clyman. "A Comparison of the Generalizability of Scores Produced by Expert Raters and Automated Scoring Systems." Applied Measurement in Education 12, no. 3 (July 1999): 281–99. http://dx.doi.org/10.1207/s15324818ame1203_4.

32

Clauser, Brian E., Melissa J. Margolis, Stephen G. Clyman, and Linette P. Ross. "Development of Automated Scoring Algorithms for Complex Performance Assessments: A Comparison of Two Approaches." Journal of Educational Measurement 34, no. 2 (June 1997): 141–61. http://dx.doi.org/10.1111/j.1745-3984.1997.tb00511.x.

33

Wilbur, Ronnie B., and Wendy C. Goodhart. "Comprehension of indefinite pronouns and quantifiers by hearing-impaired students." Applied Psycholinguistics 6, no. 4 (December 1985): 417–34. http://dx.doi.org/10.1017/s0142716400006342.

Abstract:
Deaf students' recognition of indefinite pronouns and quantifiers was tested using written materials in the form of comic strips that provided pragmatically appropriate context. One hundred and eighty-seven profoundly hearing-impaired students, aged 7–23 years, served as subjects. There were significant developmental trends for both the indefinite pronouns and the quantifiers, with the quantifiers significantly more difficult than the indefinite pronouns. A comparison of the results with predictions drawn from theoretical linguistics and with predictions drawn from Developmental Sentence Scoring (Lee, 1974) data for hearing children indicates that theoretical predictions are more accurate for hearing-impaired students. This may be due to differences in methodology (DSS reports spontaneous spoken language; the present study reports comprehension of written English) and to educational practices with hearing-impaired students.
34

Clauser, Brian E., Polina Harik, and Stephen G. Clyman. "The Generalizability of Scores for a Performance Assessment Scored with a Computer-Automated Scoring System." Journal of Educational Measurement 37, no. 3 (September 2000): 245–61. http://dx.doi.org/10.1111/j.1745-3984.2000.tb01085.x.

35

Roth, Froma P., and Donna M. Clark. "Symbolic Play and Social Participation Abilities of Language-Impaired and Normally Developing Children." Journal of Speech and Hearing Disorders 52, no. 1 (February 1987): 17–29. http://dx.doi.org/10.1044/jshd.5201.17.

Abstract:
The symbolic play and social participation behaviors of 6 language-impaired and 8 normal language-learning children were compared on three measures of play: (a) the Symbolic Play Test (Lowe & Costello, 1976), (b) the Brown-Lunzer Scale (Brown, Redmond, Bass, Liebergott, & Swope, 1975), and (c) the Scale of Social Participation in Play (Tizard, Philps, & Plewis, 1976). Subject groups were equated for MLU (Brown, 1973), Developmental Sentence Scoring (Lee, 1974), and performance on the Test of Auditory Comprehension of Language (Carrow, 1973). Results indicated that the language-impaired subjects demonstrated significant deficits in symbolic, adaptive, and integrative play behaviors in comparison with the linguistically equivalent normal subjects. The language-impaired group also evidenced significantly more nonplay and significantly less solitary and parallel play than their normal peers. Findings are discussed with respect to the developmental relationship between language and cognition.
36

Finestack, Lizbeth H., Bobbi Rohwer, Lisa Hilliard, and Leonard Abbeduto. "Using Computerized Language Analysis to Evaluate Grammatical Skills." Language, Speech, and Hearing Services in Schools 51, no. 2 (April 7, 2020): 184–204. http://dx.doi.org/10.1044/2019_lshss-19-00032.

Abstract:
Purpose Conducting in-depth grammatical analyses based on language samples can be time consuming. Developmental Sentence Scoring (DSS) and the Index of Productive Syntax (IPSyn) analyses provide detailed information regarding the grammatical profiles of children and can be conducted using free computer-based software. Here, we provide a tutorial to support clinicians' use of computer-based analyses to aid diagnosis and develop and monitor treatment goals. Method We analyzed language samples of a 5-year-old with developmental language disorder and an adolescent with Down syndrome using computer-based software, Computerized Language Analysis. We focused on DSS and IPSyn analyses. The tutorial includes step-by-step procedures for conducting the analyses. We also illustrate how the analyses may be used to assist in diagnosis, develop treatment goals focused on grammatical targets, and monitor progress on these treatment goals. Conclusion Clinicians should consider using Computerized Language Analysis's IPSyn and DSS analyses to support grammatical language assessments used to aid diagnosis, develop treatment goals, and monitor progress on these treatment goals. Supplemental Material https://doi.org/10.23641/asha.12021141
37

Ali, Zeshan. "Automatic Text Summarization for Urdu Roman Language by Using Fuzzy Logic." Journal of Autonomous Intelligence 3, no. 2 (August 15, 2021): 23. http://dx.doi.org/10.32629/jai.v3i2.273.

Abstract:
In the current era of technology, the internet is awash with redundant information, which makes it hard for users to find what they actually need; addressing this calls for an automated process that filters and searches the available content. Text summarization is one common solution. The goal of single-document summarization is to condense a document while retaining its information, and sentence scoring is the method usually used for extractive text summarization, which is the focus of this work. In this paper, we built a Roman Urdu dataset of thirty thousand articles. We follow a fuzzy logic technique to address the text summarization problem: the fuzzy logic model applies fuzzy rules with uncertain attribute weights to produce an acceptable summary. Our approach combines cosine similarity with fuzzy logic to suppress redundant content in the summary and strengthen the proposed method. We used a standard testing method for fuzzy-logic Roman Urdu text summarization and evaluated the machine-generated summaries with the ROUGE and BLEU scoring methods. The results show that the fuzzy logic approach outperforms the preceding approaches by a meaningful margin.
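As a rough sketch of the redundancy-suppression idea described above, the Python snippet below greedily selects high-scoring sentences while skipping any sentence whose TF-IDF cosine similarity to an already selected sentence exceeds a threshold. The sentence scores are placeholders standing in for the paper's fuzzy-logic scores, and the threshold and example sentences are invented.

```python
# Greedy extractive summary assembly with cosine-similarity redundancy filtering.
# The `scores` list stands in for fuzzy-logic sentence scores.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extract_summary(sentences, scores, max_sentences=2, max_similarity=0.6):
    vectors = TfidfVectorizer().fit_transform(sentences)
    chosen = []
    for idx in sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True):
        if len(chosen) == max_sentences:
            break
        if all(cosine_similarity(vectors[idx], vectors[j])[0, 0] < max_similarity
               for j in chosen):
            chosen.append(idx)
    return [sentences[i] for i in sorted(chosen)]

sentences = [
    "Roman Urdu text is widely used on social media.",
    "Roman Urdu is commonly used on social media platforms.",
    "Automatic summarization selects the most informative sentences.",
]
print(extract_summary(sentences, scores=[0.9, 0.85, 0.7]))
```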
38

Hadley, Pamela A., Matthew Rispoli, Janet K. Holt, Colleen Fitzgerald, and Alison Bahnsen. "Growth of Finiteness in the Third Year of Life: Replication and Predictive Validity." Journal of Speech, Language, and Hearing Research 57, no. 3 (June 2014): 887–900. http://dx.doi.org/10.1044/2013_jslhr-l-13-0008.

Abstract:
Purpose The authors of this study investigated the validity of tense and agreement productivity (TAP) scoring in diverse sentence frames obtained during conversational language sampling as an alternative measure of finiteness for use with young children. Method Longitudinal language samples were used to model TAP growth from 21 to 30 months of age for 37 typically developing toddlers. Empirical Bayes (EB) linear and quadratic growth coefficients and child sex were then used to predict elicited grammar composite scores on the Test of Early Grammatical Impairment (TEGI; Rice & Wexler, 2001) at 36 months. Results A random-effects quadratic model with no intercept best characterized TAP growth, replicating the findings of Rispoli, Hadley, and Holt (2009). The combined regression model was significant, with the 3 variables accounting for 55.5% of the variance in the TEGI composite scores. Conclusion These findings establish TAP growth as a valid metric of finiteness in the 3rd year of life. Developmental and theoretical implications are discussed.
39

Moeller, Mary Pat, and Barbara Luetke-Stahlman. "Parents' Use of Signing Exact English." Journal of Speech and Hearing Disorders 55, no. 2 (May 1990): 327–38. http://dx.doi.org/10.1044/jshd.5502.327.

Abstract:
Parental use of simultaneous communication is advocated by many programs serving hearing-impaired students. The purpose of the present study was to describe in detail the input characteristics of five hearing parents, who were attempting to use one such system, Signing Exact English or SEE 2 (Gustason, Pfetzing, & Zawolkow, 1980). The parents were intermediate-level signers, motivated to use SEE 2. Voiced and signed segments from videotaped language samples were transcribed and coded for equivalence and other features of interest. Results were that parents' signed mean lengths of utterance (MLUs) were lower than those of their children although the majority of their sign utterances were syntactically intact. Structures categorized as complex in the Developmental Sentence Scoring procedure (Lee, 1974) and considered abstract in a semantic coding scheme (Lahey, 1988) were seldom used by the parents. Parents provided a narrow range of lexical items in their sign code. Results are discussed in terms of the type of input the parents are providing and the procedures used to identify priorities for parent education.
40

Safarpour, Leila, Nahid Jalilevand, Ali Ghorbani, Mahboobeh Rasouli, and Gholamreza Bayazian. "Language Sample Analysis in Children With Cleft Lip and Palate." Iranian Rehabilitation Journal 19, no. 1 (March 1, 2021): 23–30. http://dx.doi.org/10.32598/irj.19.1.523.5.

Abstract:
Objectives: Cleft Palate (CP) with or without Cleft Lip (CL/P) are the most common craniofacial birth defects. Cleft Lip and Palate (CLP) can affect children’s communication skills. The present study aimed to evaluate language production skills concerning morphology and syntax (morphosyntactic) in children with CLP. Methods: In the current cross-sectional study, 58 Persian-speaking children (28 children with CLP & 30 children without craniofacial anomalies=non-clefts) participated. Gathering the language samples of the children was conducted using the picture description method. The 50 consecutive intelligible utterances of children were analyzed by the Persian Developmental Sentence Scoring (PDSS), as a clinical morphosyntactic measurement tool. Results: The PDSS total scores of children with CLP were lower than those of the non-clefts children. A significant difference was found between the studied children with CLP and children without craniofacial anomalies in the mean value of PDSS total scores (P=0.0001). Discussion: Children with CLP demonstrate a poor ability for using morphosyntactic elements. Therefore, it should be considered how children with CLP use the grammatical components.
41

Steenbergen, Petrus J., Jana Heigwer, Gunjan Pandey, Burkhard Tönshoff, Jochen Gehrig, and Jens H. Westhoff. "A Multiparametric Assay Platform for Simultaneous In Vivo Assessment of Pronephric Morphology, Renal Function and Heart Rate in Larval Zebrafish." Cells 9, no. 5 (May 20, 2020): 1269. http://dx.doi.org/10.3390/cells9051269.

Abstract:
Automated high-throughput workflows allow for chemical toxicity testing and drug discovery in zebrafish disease models. Due to its conserved structural and functional properties, the zebrafish pronephros offers a unique model to study renal development and disease at larger scale. Ideally, scoring of pronephric phenotypes includes morphological and functional assessments within the same larva. However, to efficiently upscale such assays, refinement of existing methods is required. Here, we describe the development of a multiparametric in vivo screening pipeline for parallel assessment of pronephric morphology, kidney function and heart rate within the same larva on a single imaging platform. To this end, we developed a novel 3D-printed orientation tool enabling multiple consistent orientations of larvae in agarose-filled microplates. Dorsal pronephros imaging was followed by assessing renal clearance and heart rates upon fluorescein isothiocyanate (FITC)-inulin microinjection using automated time-lapse imaging of laterally positioned larvae. The pipeline was benchmarked using a set of drugs known to induce developmental nephrotoxicity in humans and zebrafish. Drug-induced reductions in renal clearance and heart rate alterations were detected even in larvae exhibiting minor pronephric phenotypes. In conclusion, the developed workflow enables rapid and semi-automated in vivo assessment of multiple morphological and functional parameters.
42

Stidham, R., H. Yao, S. Bishu, M. Rice, J. Gryak, H. J. Wilkins, and K. Najarian. "P595 Feasibility and performance of a fully automated endoscopic disease severity grading tool for ulcerative colitis using unaltered multisite videos." Journal of Crohn's and Colitis 14, Supplement_1 (January 2020): S495—S496. http://dx.doi.org/10.1093/ecco-jcc/jjz203.723.

Abstract:
Abstract Background Endoscopic assessment is a core component of disease severity in ulcerative colitis (UC), but subjectivity threatens accuracy and reproducibility. We aimed to develop and test a fully-automated video analysis system for endoscopic disease severity in UC. Methods A developmental dataset of local high-resolution UC colonoscopy videos were generated with Mayo endoscopic scores (MES) provided by experienced local reviewers. Videos were converted into still images stacks and annotated for both sufficient image quality for scoring (informativeness) and MES grade (e.g. Mayo 0,1,2,3). Convolutional neural networks (CNNs) were used to train models to predict still image informativeness and disease severity grading with 5-fold cross-validation. Whole video MES models were developed by matching reviewer MES scores with the proportion of still image predicted scores within each video using a template matching grid search. The automated whole video MES workflow was tested in a separate endoscopic video set from an international multicenter UC clinical trial (LYC-30937-EC). Cohen’s kappa coefficient with quadratic weighting was used for agreement assessment. Results The developmental set included 51 high-resolution videos (Mayo 2,3 41.2%), with the multicenter clinical trial containing 264 videos (Mayo 2,3 83.7%, p < .0001) from 157 subjects. In 34,810 frames, the still image informative classifier had excellent performance with an AUC of 0.961, sensitivity of 0.902, and specificity of 0.870. In high-resolution videos, agreement between reviewers and fully-automated MES was very good with correct prediction of exact MES in 78% (40/51,κ=0.84, 95% CI 0.75–0.92) of videos (Figure 1). In external clinical trial videos where dual central review was performed, reviewers agreed on exact MES in 82.8% (140/169) of videos (κ = 0.78, 95% CI 0.71–0.86). Automated MES grading of the clinical trial videos (often low resolution) correctly distinguished Mayo 0,1 vs. 2,3 in 83.7% (221/264) of videos. Agreement between automated and central reviewer on exact MES occurred in 57.1% of videos (κ=0.59, 95% CI 0.46–0.71), but improved to 69.5% when accounting for human reviewer disagreement. Automated MES was within 1-level of central scores in 93.5% of videos (247/264). Ordinal characteristics are shown for the automated process, predicting progressively increasing disease severity. TPR, true positive rate; FPR, false-positive rate. Conclusion Though premature for immediate deployment, these early results support the feasibility for artificial intelligence to approach expert-level endoscopic disease grading in UC.
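Agreement in this study is summarized with Cohen's kappa under quadratic weighting. A minimal Python sketch using scikit-learn's cohen_kappa_score is shown below, with invented Mayo endoscopic score labels rather than the trial data.

```python
# Quadratically weighted Cohen's kappa over ordinal Mayo endoscopic scores (0-3).
# The two label vectors are hypothetical, not the trial data.

from sklearn.metrics import cohen_kappa_score

central_reviewer = [0, 1, 2, 3, 2, 1, 3, 0, 2, 3]
automated_model  = [0, 1, 2, 3, 3, 1, 2, 0, 2, 3]

kappa = cohen_kappa_score(central_reviewer, automated_model, weights="quadratic")
print(f"quadratically weighted kappa = {kappa:.2f}")
```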
43

Subah, Faria Zarin, Kaushik Deb, Pranab Kumar Dhar, and Takeshi Koshiba. "A Deep Learning Approach to Predict Autism Spectrum Disorder Using Multisite Resting-State fMRI." Applied Sciences 11, no. 8 (April 18, 2021): 3636. http://dx.doi.org/10.3390/app11083636.

Abstract:
Autism spectrum disorder (ASD) is a complex and degenerative neuro-developmental disorder. Most of the existing methods utilize functional magnetic resonance imaging (fMRI) to detect ASD with a very limited dataset which provides high accuracy but results in poor generalization. To overcome this limitation and to enhance the performance of the automated autism diagnosis model, in this paper, we propose an ASD detection model using functional connectivity features of resting-state fMRI data. Our proposed model utilizes two commonly used brain atlases, Craddock 200 (CC200) and Automated Anatomical Labelling (AAL), and two rarely used atlases Bootstrap Analysis of Stable Clusters (BASC) and Power. A deep neural network (DNN) classifier is used to perform the classification task. Simulation results indicate that the proposed model outperforms state-of-the-art methods in terms of accuracy. The mean accuracy of the proposed model was 88%, whereas the mean accuracy of the state-of-the-art methods ranged from 67% to 85%. The sensitivity, F1-score, and area under receiver operating characteristic curve (AUC) score of the proposed model were 90%, 87%, and 96%, respectively. Comparative analysis on various scoring strategies show the superiority of BASC atlas over other aforementioned atlases in classifying ASD and control.
44

Allen, Marybeth S., Marilyn K. Kertoy, John C. Sherblom, and John M. Pettit. "Children's narrative productions: A comparison of personal event and fictional stories." Applied Psycholinguistics 15, no. 2 (April 1994): 149–76. http://dx.doi.org/10.1017/s0142716400005300.

Abstract:
Personal event narratives and fictional stories are narrative genres which emerge early and undergo further development throughout the preschool and early elementary school years. This study compares personal event and fictional narratives across two language-ability groups using episodic analysis. Thirty-six normal children (aged 4 to 8 years) were divided into high and low language-ability groups using Developmental Sentence Scoring (DSS). Three fictional stories and three personal event narratives were gathered from each subject and were scored for length in communication units, total types of structures found within the narrative, and structure of the whole narrative. Narrative genre differences significantly influenced narrative structure for both language-ability groups and narrative length for the high language-ability group. Personal events were told with more reactive sequences and complete episodes than fictional stories, while fictional stories were told with more action sequences and multiple-episode structures. Compared to the episodic story structure of fictional stories, where a prototypical 'good' story is a multiple-episode structure, a reactive sequence and/or a single complete episode structure may be an alternate, involving mature narrative forms for relating personal events. These findings suggest that narrative structures for personal event narratives and fictional stories may follow different developmental paths. Finally, differences in productive language abilities contributed to the distinctions in narrative structure between fictional stories and personal event narratives. As compared to children in the low group, children in the high group told narratives with greater numbers of complete and multiple episodes, and their fictional stories were longer than their personal event narratives.
45

Westerveld, Marleen F., Pamela Filiatrault-Veilleux, and Jessica Paynter. "Inferential narrative comprehension ability of young school-age children on the autism spectrum." Autism & Developmental Language Impairments 6 (January 2021): 239694152110356. http://dx.doi.org/10.1177/23969415211035666.

Abstract:
Background and aims The purpose of the current exploratory study was to describe the inferential narrative comprehension skills of young school-age children on the autism spectrum who, as a group, are at high risk of significant and persistent reading comprehension difficulties. Our aim was to investigate whether the anticipated difficulties in inferential narrative comprehension in the group of children with autism could be explained by the children’s structural language ability as measured using a broad-spectrum standardized language test. Methods The participants were 35 children with a diagnosis of autism spectrum disorder (ASD), aged between 5;7 and 6;11, who attended their first year of formal schooling, and 32 typically developing (TD) children, matched to the ASD group for age and year of schooling. Children on the autism spectrum were divided into below normal limits (ASD_BNL, standard score ≤80; n = 21) or within normal limits (ASD_WNL, standard score >80; n = 14) on a standardized language test. All children participated in a narrative comprehension task, which involved listening to a novel story, while looking at pictures, and answering eight comprehension questions immediately afterwards. Comprehension questions were categorized into factual and inferential questions, with further categorization of the inferential questions into those tapping into the story characters’ internal responses (mental states) or not. Children’s responses were scored on a quality continuum (from 0: inadequate/off topic to 3: expected/correct). Results Our results showed significantly lower scores across factual and inferential narrative comprehension in the ASD_BNL group, compared to the ASD_WNL and TD groups, supporting the importance of structural language skills for narrative comprehension. Furthermore, the TD group significantly outperformed the children in the ASD_WNL group on inferential comprehension. Finally, the children in the ASD_WNL group showed specific difficulties in answering the internal response inferential questions compared to their TD peers. Conclusions Results from this exploratory study highlight the difficulties children on the autism spectrum may have in inferential narrative comprehension skills, regardless of sufficient structural language skills at word and sentence level. These findings support the importance of routinely assessing these narrative comprehension skills in children on the spectrum, who as a group are at high risk of persistent reading comprehension difficulties. Implications In this study, we demonstrate how narrative comprehension can be assessed in young school-age children on the autism spectrum. The scoring system used to categorize children’s responses may further assist in understanding children’s performance, across a quality continuum, which can guide detailed goal setting and assist in early targeted intervention planning.
46

Elliott, Catherine, Caroline Alexander, Alison Salt, Alicia J. Spittle, Roslyn N. Boyd, Nadia Badawi, Catherine Morgan, et al. "Early Moves: a protocol for a population-based prospective cohort study to establish general movements as an early biomarker of cognitive impairment in infants." BMJ Open 11, no. 4 (April 2021): e041695. http://dx.doi.org/10.1136/bmjopen-2020-041695.

Abstract:
Introduction The current diagnostic pathways for cognitive impairment rarely identify babies at risk before 2 years of age. Very early detection and timely targeted intervention has potential to improve outcomes for these children and support them to reach their full life potential. Early Moves aims to identify early biomarkers, including general movements (GMs), for babies at risk of cognitive impairment, allowing early intervention within critical developmental windows to enable these children to have the best possible start to life. Method and analysis Early Moves is a double-masked prospective cohort study that will recruit 3000 term and preterm babies from a secondary care setting. Early Moves will determine the diagnostic value of abnormal GMs (at writhing and fidgety age) for mild, moderate and severe cognitive delay at 2 years measured by the Bayley-4. Parents will use the Baby Moves smartphone application to video their babies' GMs. Trained GMs assessors will be masked to any risk factors and assessors of the primary outcome will be masked to the GMs result. Automated scoring of GMs will be developed through applying machine-based learning to the data and the predictive value for an abnormal GM will be investigated. Screening algorithms for identification of children at risk of cognitive impairment, using the GM assessment (GMA), and routinely collected social and environmental profile data will be developed to allow more accurate prediction of cognitive outcome at 2 years. A cost evaluation for GMA implementation in preparation for national implementation will be undertaken including exploring the relationship between cognitive status and healthcare utilisation, medical costs, health-related quality of life and caregiver burden. Ethics and dissemination Ethics approval has been granted by the Medical Research Ethics Committee of Joondalup Health Services and the Health Service Human Research Ethics Committee (1902) of Curtin University (HRE2019-0739). Trial registration number ACTRN12619001422112.
47

Kyuchukov, Hristo, and Jill de Villiers. "Language Complexity, Narratives and Theory of Mind of Romani Speaking Children." East European Journal of Psycholinguistics 5, no. 2 (December 28, 2018): 16–31. http://dx.doi.org/10.29038/eejpl.2018.5.2.kyu.

Abstract:
The paper presents research findings with 56 Roma children from Macedonia and Serbia between the ages of 3-6 years. The children’s knowledge of Romani as their mother tongue was assessed with a specially designed test. The test measures the children’s comprehension and production of different types of grammatical knowledge such as wh–questions, wh-complements, passive verbs, possessives, tense, aspect, the ability of the children to learn new nouns and new adjectives, and repetition of sentences. In addition, two pictured narratives about Theory of Mind were given to the children. The hypothesis of the authors was that knowledge of the complex grammatical categories by children will help them to understand better the Theory of Mind stories. The results show that Roma children by the age of 5 know most of the grammatical categories in their mother tongue and most of them understand Theory of Mind.
48

Bosker, Hans Rutger. "Using fuzzy string matching for automated assessment of listener transcripts in speech intelligibility studies." Behavior Research Methods, March 10, 2021. http://dx.doi.org/10.3758/s13428-021-01542-4.

Abstract:
Many studies of speech perception assess the intelligibility of spoken sentence stimuli by means of transcription tasks (‘type out what you hear’). The intelligibility of a given stimulus is then often expressed in terms of percentage of words correctly reported from the target sentence. Yet scoring the participants’ raw responses for words correctly identified from the target sentence is a time-consuming task, and hence resource-intensive. Moreover, there is no consensus among speech scientists about what specific protocol to use for the human scoring, limiting the reliability of human scores. The present paper evaluates various forms of fuzzy string matching between participants’ responses and target sentences, as automated metrics of listener transcript accuracy. We demonstrate that one particular metric, the token sort ratio, is a consistent, highly efficient, and accurate metric for automated assessment of listener transcripts, as evidenced by high correlations with human-generated scores (best correlation: r = 0.940) and a strong relationship to acoustic markers of speech intelligibility. Thus, fuzzy string matching provides a practical tool for assessment of listener transcript accuracy in large-scale speech intelligibility studies. See https://tokensortratio.netlify.app for an online implementation.
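A minimal, dependency-free approximation of the token sort ratio is sketched below in Python. The difflib sequence ratio stands in for the Levenshtein-based ratio used by dedicated fuzzy-matching libraries, and the transcript and target sentence are invented, so treat this as an approximation of the metric rather than the paper's implementation.

```python
# Approximate token sort ratio: lowercase, strip punctuation, sort the tokens,
# then compare the re-joined strings (difflib stands in for a Levenshtein ratio).

import difflib
import re

def token_sort_ratio(response, target):
    def normalize(text):
        return " ".join(sorted(re.findall(r"[a-z']+", text.lower())))
    matcher = difflib.SequenceMatcher(None, normalize(response), normalize(target))
    return 100 * matcher.ratio()

target = "The sailor had been warned about the storm"
response = "the sailor was warned about a storm"
print(round(token_sort_ratio(response, target), 1))
```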
49

Gale, Robert, Julie Bird, Yiyi Wang, Jan van Santen, Emily Prud'hommeaux, Jill Dolata, and Meysam Asgari. "Automated Scoring of Tablet-Administered Expressive Language Tests." Frontiers in Psychology 12 (July 22, 2021). http://dx.doi.org/10.3389/fpsyg.2021.668401.

Abstract:
Speech and language impairments are common pediatric conditions, with as many as 10% of children experiencing one or both at some point during development. Expressive language disorders in particular often go undiagnosed, underscoring the immediate need for assessments of expressive language that can be administered and scored reliably and objectively. In this paper, we present a set of highly accurate computational models for automatically scoring several common expressive language tasks. In our assessment framework, instructions and stimuli are presented to the child on a tablet computer, which records the child's responses in real time, while a clinician controls the pace and presentation of the tasks using a second tablet. The recorded responses for four distinct expressive language tasks (expressive vocabulary, word structure, recalling sentences, and formulated sentences) are then scored using traditional paper-and-pencil scoring and using machine learning methods relying on a deep neural network-based language representation model. All four tasks can be scored automatically from both clean and verbatim speech transcripts with very high accuracy at the item level (83−99%). In addition, these automated scores correlate strongly and significantly (ρ = 0.76–0.99, p < 0.001) with manual item-level, raw, and scaled scores. These results point to the utility and potential of automated computationally-driven methods of both administering and scoring expressive language tasks for pediatric developmental language evaluation.
50

Rajagede, Rian Adam. "Improving Automatic Essay Scoring for Indonesian Language using Simpler Model and Richer Feature." Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control, February 28, 2021, 11–18. http://dx.doi.org/10.22219/kinetik.v6i1.1196.

Abstract:
Automatic essay scoring is a machine learning task in which we create a model that automatically assesses student essay answers. Automated essay scoring is especially valuable when answers must be assessed at large scale, where manual correction by humans becomes impractical. In 2019, the Ukara dataset was released for automatic essay scoring in the Indonesian language. The best model published on that dataset achieves an F1-score of 0.821 using pre-trained fastText sentence embeddings and a stacking model that combines a neural network and XGBoost. In this study, we propose a simpler classifier, a neural network with a single hidden layer, combined with a richer feature representation, namely BERT sentence embeddings. The pre-trained BERT sentence embedding model extracts more information from sentences yet has a smaller file size than the pre-trained fastText model. Our best model achieves a higher F1-score than the previous models on the Ukara dataset: 0.829.
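The modelling setup described above (sentence embeddings fed to a single-hidden-layer classifier, evaluated with F1) can be sketched as follows. Random vectors stand in for BERT sentence embeddings of Ukara answers so that the example stays self-contained, and the dimensions and hyperparameters are illustrative rather than those of the study.

```python
# Single-hidden-layer classifier over precomputed sentence embeddings, scored
# with F1. Random vectors act as placeholders for BERT sentence embeddings.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 768))            # placeholder 768-dim sentence embeddings
y = rng.integers(0, 2, size=200)           # placeholder correct/incorrect labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("F1:", round(f1_score(y_test, clf.predict(X_test)), 3))
```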