Academic literature on the topic 'Automated Developmental Sentence Scoring'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Automated Developmental Sentence Scoring.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Automated Developmental Sentence Scoring"

1

Channell, Ron W. "Automated Developmental Sentence Scoring Using Computerized Profiling Software." American Journal of Speech-Language Pathology 12, no. 3 (August 2003): 369–75. http://dx.doi.org/10.1044/1058-0360(2003/082).

2

Hughes, Diana L., Marc E. Fey, and Steven H. Long. "Developmental sentence scoring." Topics in Language Disorders 12, no. 2 (February 1992): 1–12. http://dx.doi.org/10.1097/00011363-199202000-00003.

3

Miyata, Susanne, Brian MacWhinney, Kiyoshi Otomo, Hidetosi Sirai, Yuriko Oshima-Takane, Makiko Hirakawa, Yasuhiro Shirai, Masatoshi Sugiura, and Keiko Itoh. "Developmental Sentence Scoring for Japanese." First Language 33, no. 2 (March 21, 2013): 200–216. http://dx.doi.org/10.1177/0142723713479436.

4

Hughes, Diana L., Marc E. Fey, Marilyn K. Kertoy, and Nickola Wolf Nelson. "Computer-Assisted Instruction for Learning Developmental Sentence Scoring." American Journal of Speech-Language Pathology 3, no. 3 (September 1994): 89–95. http://dx.doi.org/10.1044/1058-0360.0303.89.

5

Pham, Giang, and Kerry Danahy Ebert. "Diagnostic Accuracy of Sentence Repetition and Nonword Repetition for Developmental Language Disorder in Vietnamese." Journal of Speech, Language, and Hearing Research 63, no. 5 (May 22, 2020): 1521–36. http://dx.doi.org/10.1044/2020_jslhr-19-00366.

Abstract:
Purpose Sentence repetition and nonword repetition assess different aspects of the linguistic system, but both have been proposed as potential tools to identify children with developmental language disorder (DLD). Cross-linguistic investigation of diagnostic tools for DLD contributes to an understanding of the core features of the disorder. This study evaluated the effectiveness of these tools for the Vietnamese language. Method A total of 104 kindergartners (aged 5;2–6;2 [years;months]) living in Vietnam participated, of which 94 were classified as typically developing and 10 with DLD. Vietnamese sentence repetition and nonword repetition tasks were administered and scored using multiple scoring systems. Sensitivity, specificity, and likelihood ratios were calculated to assess the ability of these tasks to identify DLD. Results All scoring systems on both tasks achieved adequate to excellent sensitivity or specificity, but not both. Binary scoring of sentence repetition achieved a perfect negative likelihood ratio, and binary scoring of nonword repetition approached a highly informative positive likelihood ratio. More detailed scoring systems for both tasks achieved moderately informative values for both negative and positive likelihood ratios. Conclusions Both sentence repetition and nonword repetition are valuable tools for identifying DLD in monolingual speakers of Vietnamese. Scoring systems that consider number of errors and are relatively simple (i.e., error scoring of sentence repetition and syllables scoring of nonword repetition) may be the most efficient and effective for identifying DLD. Further work to develop and refine these tasks can contribute to cross-linguistic knowledge of DLD as well as to clinical practice.
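The sensitivity, specificity, and likelihood-ratio calculations described in this abstract follow standard diagnostic-accuracy formulas; a minimal sketch, using hypothetical counts rather than the study's data:

```python
# Illustrative sketch (not from the study): sensitivity, specificity,
# and likelihood ratios derived from a 2x2 diagnostic confusion matrix.

def diagnostic_stats(tp, fn, tn, fp):
    """Return sensitivity, specificity, LR+ and LR- for a binary test."""
    sensitivity = tp / (tp + fn)              # proportion of DLD cases flagged
    specificity = tn / (tn + fp)              # proportion of TD children passed
    lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio
    return sensitivity, specificity, lr_pos, lr_neg

# Hypothetical screening outcome for 104 children (10 DLD, 94 TD)
sens, spec, lr_pos, lr_neg = diagnostic_stats(tp=9, fn=1, tn=84, fp=10)
print(round(sens, 2), round(spec, 2), round(lr_pos, 2), round(lr_neg, 2))
```

By convention, LR+ above roughly 10 and LR- below roughly 0.1 are considered highly informative, which is the standard the abstract's "perfect" and "highly informative" wording refers to.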
6

Kumar, K. Chandra, and Sudhakar Nagalla. "Artificial Intelligence and Sentence Scoring: for Automated Summary Creation from Large-scale Documents." Journal of Advanced Research in Dynamical and Control Systems 11, no. 10-SPECIAL ISSUE (October 31, 2019): 1167–79. http://dx.doi.org/10.5373/jardcs/v11sp10/20192960.

7

Williamson, David M., Isaac I. Bejar, and Anne Sax. "Automated Tools for Subject Matter Expert Evaluation of Automated Scoring." Applied Measurement in Education 17, no. 4 (October 2004): 323–57. http://dx.doi.org/10.1207/s15324818ame1704_1.

8

Deane, Paul, Thomas Quinlan, and Irene Kostin. "Automated Scoring Within a Developmental, Cognitive Model of Writing Proficiency." ETS Research Report Series 2011, no. 1 (June 2011): i–93. http://dx.doi.org/10.1002/j.2333-8504.2011.tb02252.x.

9

Holdgrafer, Gary. "Comparison of two Methods for Scoring Syntactic Complexity." Perceptual and Motor Skills 81, no. 2 (October 1995): 498. http://dx.doi.org/10.1177/003151259508100227.

Abstract:
Summary scores for Developmental Sentence Scoring and the Index of Productive Syntax were obtained from the language samples of 29 preterm children at preschool age. A moderate correlation was obtained between these two measures of syntactic complexity. Only Index of Productive Syntax scores distinguished the language abilities of 19 neurologically normal children from 10 suspect children.
10

Balogh, Jennifer, Jared Bernstein, Jian Cheng, Alistair Van Moere, Brent Townshend, and Masanori Suzuki. "Validation of Automated Scoring of Oral Reading." Educational and Psychological Measurement 72, no. 3 (November 28, 2011): 435–52. http://dx.doi.org/10.1177/0013164411412590.

Abstract:
A two-part experiment is presented that validates a new measurement tool for scoring oral reading ability. Data collected by the U.S. government in a large-scale literacy assessment of adults were analyzed by a system called VersaReader that uses automatic speech recognition and speech processing technologies to score oral reading fluency. In the first part of the experiment, human raters rated oral reading performances to establish a criterion measure for comparisons with the machine scores. The goal was to measure the reliability of ratings from human raters and to determine whether or not the human raters biased their ratings in favor of or against three groups of readers: Spanish speakers, African Americans, and all other native English speakers. The result of the experiment showed that ratings from skilled human raters were extremely reliable. In addition, there was no observed scoring bias for human raters. The second part of the experiment was designed to compare the criterion human ratings with scores generated by VersaReader. Correlations between VersaReader scores and human ratings approached unity. Using G-Theory, the results showed that machine scores were almost identical to scores from human raters. Finally, the results revealed no bias in the machine scores. Implications for large-scale assessments are discussed.
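The machine-versus-human comparison described above rests on score correlations; a minimal Pearson-correlation sketch with fabricated scores (not the study's data, and the real study also used G-Theory):

```python
# Illustrative sketch: Pearson correlation between machine scores and
# human ratings, the comparison described in the abstract above.
# The score lists are invented for demonstration.

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

human = [2.0, 3.5, 4.0, 1.5, 5.0]      # hypothetical human ratings
machine = [2.1, 3.4, 4.2, 1.6, 4.9]    # hypothetical machine scores
print(round(pearson_r(human, machine), 3))
```

A correlation "approaching unity," as the abstract puts it, means values like the one above, near 1.0.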

Dissertations / Theses on the topic "Automated Developmental Sentence Scoring"

1

Judson, Carrie Ann. "Accuracy of Automated Developmental Sentence Scoring Software." Diss., Brigham Young University, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1448.pdf.

2

Janis, Sarah Elizabeth. "A Comparison of Manual and Automated Grammatical Precoding on the Accuracy of Automated Developmental Sentence Scoring." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/5892.

Abstract:
Developmental Sentence Scoring (DSS) is a standardized language sample analysis procedure that evaluates and scores a child's use of standard American-English grammatical rules within complete sentences. Automated DSS programs have the potential to increase the efficiency and reduce the amount of time required for DSS analysis. The present study examines the accuracy of one automated DSS software program, DSSA 2.0, compared to manual DSS scoring on previously collected language samples from 30 children between the ages of 2;5 and 7;11 (years;months). Additionally, this study seeks to determine the source of error in the automated score by comparing DSSA 2.0 analysis given manually versus automatically assigned grammatical tag input. The overall accuracy of DSSA 2.0 was 86%; the accuracy of individual grammatical category-point value scores varied greatly. No statistically significant difference was found between the two DSSA 2.0 input conditions (manual vs. automated tags), suggesting that the underlying grammatical tagging is not the primary source of error in DSSA 2.0 analysis.
3

Chamberlain, Laurie Lynne. "Mean Length of Utterance and Developmental Sentence Scoring in the Analysis of Children's Language Samples." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/5966.

Abstract:
Developmental Sentence Scoring (DSS) is a standardized language sample analysis procedure that uses complete sentences to evaluate and score a child’s use of standard American-English grammatical rules. Automated DSS software can potentially increase efficiency and decrease the time needed for DSS analysis. This study examines the accuracy of one automated DSS software program, DSSA Version 2.0, compared to manual DSS scoring on previously collected language samples from 30 children between the ages of 2;5 and 7;11 (years;months). The overall accuracy of DSSA 2.0 was 86%. Additionally, the present study sought to determine the relationship between DSS, DSSA Version 2.0, the mean length of utterance (MLU), and age. MLU is a measure of linguistic ability in children, and is a widely used indicator of language impairment. This study found that MLU and DSS are both strongly correlated with age and these correlations are statistically significant, r = .605, p < .001 and r = .723, p < .001, respectively. In addition, MLU and DSSA were also strongly correlated with age and these correlations were statistically significant, r = .605, p < .001 and r = .669, p < .001, respectively. The correlation between MLU and DSS was high and statistically significant, r = .873, p < .001, indicating that the correlation between MLU and DSS is not simply an artifact of both measures being correlated with age. Furthermore, the correlation between MLU and DSSA was high, r = .794, suggesting that the correlation between MLU and DSSA is not simply an artifact of both variables being correlated with age. Lastly, the relationship between DSS and age while controlling for MLU was moderate, but still statistically significant, r = .501, p = .006. Therefore, DSS appears to add information beyond MLU.
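MLU itself is straightforward to compute; a minimal sketch, assuming utterances already segmented into morphemes (the example utterances are invented, and real MLU counting follows Brown's conventions for what counts as a morpheme):

```python
# Minimal MLU sketch: mean number of morphemes per utterance.
# Each utterance below is pre-segmented into morphemes by hand;
# bound morphemes are shown with a leading hyphen for readability.

def mlu(utterances):
    """Mean length of utterance in morphemes."""
    total = sum(len(u) for u in utterances)
    return total / len(utterances)

sample = [
    ["doggie", "run"],                       # 2 morphemes
    ["I", "want", "cookie"],                 # 3 morphemes
    ["he", "play", "-ing", "out", "-side"],  # 5 morphemes
]
print(mlu(sample))  # 10 morphemes / 3 utterances
```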
4

Millet, Deborah. "Automated Grammatical Tagging of Language Samples from Children with and without Language Impairment." BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/1139.

Abstract:
Grammatical classification ("tagging") of words in language samples is a component of syntactic analysis for both clinical and research purposes. Previous studies have shown that probability-based software can be used to tag samples from adults and typically-developing children with high (about 95%) accuracy. The present study found that similar accuracy can be obtained in tagging samples from school-aged children with and without language impairment if the software uses tri-gram rather than bi-gram probabilities and large corpora are used to obtain probability information to train the tagging software.
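The bi-gram versus tri-gram distinction in this abstract concerns how much tag context the model conditions on; a toy maximum-likelihood sketch over an invented mini-corpus (real taggers add smoothing, lexical probabilities, and decoding over whole sentences):

```python
# Toy sketch: a tri-gram tagger conditions each tag on the two preceding
# tags, P(t_i | t_{i-2}, t_{i-1}), instead of the single preceding tag a
# bi-gram model uses. The tagged mini-corpus is invented.
from collections import Counter

tagged = [["DET", "NOUN", "VERB"], ["DET", "ADJ", "NOUN"],
          ["DET", "NOUN", "VERB"], ["PRON", "VERB", "DET", "NOUN"]]

tri = Counter()  # counts of (t_{i-2}, t_{i-1}, t_i)
ctx = Counter()  # counts of the two-tag context (t_{i-2}, t_{i-1})
for tags in tagged:
    padded = ["<s>", "<s>"] + tags
    for i in range(2, len(padded)):
        tri[tuple(padded[i - 2:i + 1])] += 1
        ctx[tuple(padded[i - 2:i])] += 1

def p_trigram(t2, t1, t):
    """Maximum-likelihood estimate of P(t | t2, t1)."""
    return tri[(t2, t1, t)] / ctx[(t2, t1)]

print(p_trigram("<s>", "DET", "NOUN"))  # NOUN follows a sentence-initial DET in 2 of 3 cases
```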
5

Callan, Peggy Ann. "Developmental sentence scoring sample size comparison." PDXScholar, 1990. https://pdxscholar.library.pdx.edu/open_access_etds/4170.

Abstract:
In 1971, Lee and Canter developed a systematic tool for assessing children's expressive language: Developmental Sentence Scoring (DSS). It provides normative data against which a child's delayed or disordered language development can be compared with the normal language of children the same age. A specific scoring system is used to analyze children's use of standard English grammatical rules from a tape-recorded sample of their spontaneous speech during conversation with a clinician. The corpus of sentences for the DSS is obtained from a sample of 50 complete, different, consecutive, intelligible, non-echolalic sentences elicited from a child in conversation with an adult using stimulus materials in which the child is interested. There is limited research on the reliability of language samples smaller and larger than 50 utterances for DSS analysis. The purpose of this study was to determine if there is a significant difference among the scores obtained from language samples of 25, 50, and 75 utterances when using the DSS procedure for children aged 6.0 to 6.6 years. Twelve children, selected on the basis of chronological age, normal receptive vocabulary skills, normal hearing, and a monolingual background, were chosen as subjects.
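The DSS corpus-selection rule described here can be sketched as a simple in-order filter; the utterance flags and field names below are illustrative, not from the DSS manual:

```python
# Hypothetical sketch of DSS corpus selection: from a transcript, keep the
# first `target` complete, different (non-repeated), intelligible,
# non-echolalic utterances, scanning in order. The transcript is invented.

def select_dss_corpus(utterances, target=50):
    seen = set()
    corpus = []
    for u in utterances:
        if (u["complete"] and u["intelligible"] and not u["echolalic"]
                and u["text"] not in seen):  # "different": no repeats
            seen.add(u["text"])
            corpus.append(u["text"])
            if len(corpus) == target:
                break
    return corpus

transcript = [
    {"text": "me go", "complete": True, "intelligible": True, "echolalic": False},
    {"text": "xxx", "complete": True, "intelligible": False, "echolalic": False},
    {"text": "me go", "complete": True, "intelligible": True, "echolalic": False},
    {"text": "want juice", "complete": False, "intelligible": True, "echolalic": False},
    {"text": "I want juice", "complete": True, "intelligible": True, "echolalic": False},
]
print(select_dss_corpus(transcript, target=2))
```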
6

Seal, Amy. "Scoring Sentences Developmentally: An Analog of Developmental Sentence Scoring." Diss., Brigham Young University, 2001. http://contentdm.lib.byu.edu/ETD/image/etd12.pdf.

7

Seal, Amy. "Scoring Sentences Developmentally: An Analog of Developmental Sentence Scoring." BYU ScholarsArchive, 2002. https://scholarsarchive.byu.edu/etd/1141.

Abstract:
A variety of tools have been developed to assist in the quantification and analysis of naturalistic language samples. In recent years, computer technology has been employed in language sample analysis. This study compares a new automated index, Scoring Sentences Developmentally (SSD), to two existing measures. Eighty samples from three corpora were manually analyzed using DSS and MLU and then processed by the automated software. Results show all three indices to be highly correlated, with correlations ranging from .62 to .98. The high correlations among scores support further investigation of the psychometric characteristics of the SSD software to determine its clinical validity and reliability. Results of this study suggest that SSD has the potential to complement other analysis procedures in assessing the language development of young children.
8

Dong, Cheryl Diane. "A comparative study of three language sampling methods using developmental sentence scoring." PDXScholar, 1986. https://pdxscholar.library.pdx.edu/open_access_etds/3589.

Abstract:
The present study sought to determine the effect different stimulus material has on the language elicited from children. Its purpose was to determine whether a significant difference existed among language samples elicited three different ways when analyzed using DSS. Eighteen children between the ages of 3.6 and 5.6 years were chosen to participate in the study. All of the children had normal hearing, normal receptive vocabulary skills, and no demonstrated or suspected physical or social delays. Three language samples, each elicited by either toys, pictures, or stories, were obtained from each child. For each sample, a corpus of 50 utterances was selected for analysis and analyzed according to the DSS procedure as described by Lee and Canter (1971).
9

Miniard, Angela Christine. "Construction of a Scoring Manual for the Sentence Stem “A Good Boss—” for the Sentence Completion Test Integral (SCTi-MAP)." Cleveland, Ohio: Cleveland State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=csu1242662653.

Note:
Thesis (M.Ed.), Cleveland State University, 2009. Includes bibliographical references (pp. 101–105). Available online via the OhioLINK ETD Center; also available in print.
10

Moore, Allen Travis. "Applying the Developmental Path of English Negation to the Automated Scoring of Learner Essays." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/6835.

Abstract:
The resources required to have humans score extended written response items in English language learner (ELL) contexts have caused automated essay scoring (AES) to emerge as a desired alternative. However, these systems often rely heavily on indirect proxies of writing quality such as word, sentence, and essay lengths because of their strong correlation to scores (Vajjala, 2017). This has led to concern about the validity of the features used to establish the predictive accuracy of AES systems (Attali, 2007; Weigle, 2013). Reliance on construct-irrelevant features in ELL contexts also forfeits the opportunity to provide meaningful diagnostic feedback to test-takers or provide the second language acquisition (SLA) field with real insights (C.-F. E. Chen & Cheng, 2008). This thesis seeks to improve the validity and reliability of an AES system developed for ELL essays by employing a new set of features based on the acquisition order of English negation. Modest improvements were made to a baseline AES system's accuracy, showing the possibility and importance of engineering features relevant to the construct being assessed in ELL essays. In addition to these findings, a novel ordering of the sequence of English negation acquisition not previously described in SLA research emerged.

Conference papers on the topic "Automated Developmental Sentence Scoring"

1

Efat, Md Iftekharul Alam, Mohammad Ibrahim, and Humayun Kayesh. "Automated Bangla text summarization by sentence scoring and ranking." In 2013 2nd International Conference on Informatics, Electronics and Vision (ICIEV). IEEE, 2013. http://dx.doi.org/10.1109/iciev.2013.6572686.

2

Chandro, Porimol, Md Faizul Huq Arif, Md Mahbubur Rahman, Md Saeed Siddik, Mohammad Sayeedur Rahman, and Md Abdur Rahman. "Automated Bengali Document Summarization by Collaborating Individual Word & Sentence Scoring." In 2018 21st International Conference of Computer and Information Technology (ICCIT). IEEE, 2018. http://dx.doi.org/10.1109/iccitechn.2018.8631926.

3

Song, Wei, Ziyao Song, Lizhen Liu, and Ruiji Fu. "Hierarchical Multi-task Learning for Organization Evaluation of Argumentative Student Essays." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/536.

Abstract:
Organization evaluation is an important dimension of automated essay scoring. This paper focuses on discourse element (i.e., functions of sentences and paragraphs) based organization evaluation. Existing approaches mostly separate discourse element identification and organization evaluation. In contrast, we propose a neural hierarchical multi-task learning approach for jointly optimizing sentence and paragraph level discourse element identification and organization evaluation. We represent the organization as a grid to simulate the visual layout of an essay and integrate discourse elements at multiple linguistic levels. Experimental results show that the multi-task learning based organization evaluation can achieve significant improvements compared with existing work and pipeline baselines. Multiple level discourse element identification also benefits from multi-task learning through mutual enhancement.

Reports on the topic "Automated Developmental Sentence Scoring"

1

Valenciano, Marilyn. Developmental sentence scoring sample size comparison. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.3108.

2

Callan, Peggy. Developmental sentence scoring sample size comparison. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6053.

3

McCluskey, Kathryn. Developmental sentence scoring : a comparative study conducted in Portland, Oregon. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.5250.

4

Dong, Cheryl. A comparative study of three language sampling methods using developmental sentence scoring. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.5473.

5

Tilden-Browning, Stacy. A comparative study of the developmental sentence scoring normative data obtained in Canby, Oregon, and the Midwest, for children between the ages of 6.0 and 6.11 years. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.5402.

6

McNutt, Eileen. A comparative study of the developmental sentence scoring normative data obtained in Portland, Oregon, and the Midwest, for children between the ages of 5.0 and 5.11 years. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.5431.
