To see the other types of publications on this topic, follow the link: Automated language sample analysis.

Dissertations / Theses on the topic 'Automated language sample analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 48 dissertations / theses for your research on the topic 'Automated language sample analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and the bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Manning, Britney Richey. "Automated Identification of Noun Clauses in Clinical Language Samples." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/2197.

Full text
Abstract:
The identification of complex grammatical structures including noun clauses is of clinical importance because differences in the use of these structures have been found between individuals with and without language impairment. In recent years, computer software has been used to assist in analyzing clinical language samples. However, this software has been unable to accurately identify complex syntactic structures such as noun clauses. The present study investigated the accuracy of new software, called Cx, in identifying finite wh- and that-noun clauses. Two sets of language samples were used. One set included 10 children with language impairment, 10 age-matched peers, and 10 language-matched peers. The second set included 40 adults with mental retardation. Levels of agreement between computerized and manual analysis were similar for both sets of language samples; Kappa levels were high for wh-noun clauses and very low for that-noun clauses.
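The levels of agreement reported here are Cohen's kappa values. For background only (this is not the Cx software described in the thesis), point-by-point agreement between manual and automated clause codes can be summarized roughly as follows; the utterance codes and function name are illustrative:

```python
from collections import Counter

def cohens_kappa(manual, automated):
    """Cohen's kappa for two parallel lists of categorical codes."""
    assert len(manual) == len(automated)
    n = len(manual)
    observed = sum(m == a for m, a in zip(manual, automated)) / n
    # Chance agreement is estimated from each coder's marginal label distribution.
    pm, pa = Counter(manual), Counter(automated)
    expected = sum((pm[label] / n) * (pa[label] / n) for label in set(pm) | set(pa))
    return (observed - expected) / (1 - expected)

# Hypothetical utterance-level codes: 1 = contains a wh-noun clause, 0 = does not.
manual_codes = [1, 0, 0, 1, 0, 1, 0, 0]
automated_codes = [1, 0, 0, 1, 0, 0, 0, 0]
print(cohens_kappa(manual_codes, automated_codes))  # about 0.71
```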
APA, Harvard, Vancouver, ISO, and other styles
2

Clark, Jessica Celeste. "Automated Identification of Adverbial Clauses in Child Language Samples." Diss., Brigham Young University, 2009. http://contentdm.lib.byu.edu/ETD/image/etd2803.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Winiecke, Rachel Christine. "Precoding and the Accuracy of Automated Analysis of Child Language Samples." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5867.

Full text
Abstract:
Language sample analysis is accepted as the gold standard in child language assessment. Unfortunately it is often viewed as too time consuming for the practicing clinician. Over the last 15 years a great deal of research has been invested in the automated analysis of child language samples to make the process more time efficient. One step in the analysis process may be precoding the sample, as is used in the Systematic Analysis of Language Transcripts (SALT) software. However, a claim has been made (MacWhinney, 2008) that such precoding in fact leads to lower accuracy because of manual coding errors. No data on this issue have been published. The current research measured the accuracy of language samples analyzed with and without SALT precoding. This study also compared the accuracy of current software to an older version called GramCats (Channell & Johnson 1999). The results presented support the use of precoding schemes such as SALT and suggest that the accuracy of automated analysis has improved over time.
APA, Harvard, Vancouver, ISO, and other styles
4

Minch, Stacy Lynn. "Validity of Seven Syntactic Analyses Performed by the Computerized Profiling Software." Diss., Brigham Young University, 2009. http://contentdm.lib.byu.edu/ETD/image/etd2956.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chamberlain, Laurie Lynne. "Mean Length of Utterance and Developmental Sentence Scoring in the Analysis of Children's Language Samples." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/5966.

Full text
Abstract:
Developmental Sentence Scoring (DSS) is a standardized language sample analysis procedure that uses complete sentences to evaluate and score a child's use of standard American-English grammatical rules. Automated DSS software can potentially increase efficiency and decrease the time needed for DSS analysis. This study examines the accuracy of one automated DSS software program, DSSA Version 2.0, compared to manual DSS scoring on previously collected language samples from 30 children between the ages of 2;5 and 7;11 (years;months). The overall accuracy of DSSA 2.0 was 86%. Additionally, the present study sought to determine the relationship between DSS, DSSA Version 2.0, the mean length of utterance (MLU), and age. MLU is a measure of linguistic ability in children, and is a widely used indicator of language impairment. This study found that MLU and DSS are both strongly correlated with age and these correlations are statistically significant, r = .605, p < .001 and r = .723, p < .001, respectively. In addition, MLU and DSSA were also strongly correlated with age and these correlations were statistically significant, r = .605, p < .001 and r = .669, p < .001, respectively. The correlation between MLU and DSS was high and statistically significant, r = .873, p < .001, indicating that the correlation between MLU and DSS is not simply an artifact of both measures being correlated with age. Furthermore, the correlation between MLU and DSSA was high, r = .794, suggesting that the correlation between MLU and DSSA is not simply an artifact of both variables being correlated with age. Lastly, the relationship between DSS and age while controlling for MLU was moderate, but still statistically significant, r = .501, p = .006. Therefore, DSS appears to add information beyond MLU.
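The final result quoted above (DSS and age while controlling for MLU) is a partial correlation. A minimal sketch of both statistics, using invented values rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_r(x, y, z):
    """Correlation of x and y with the influence of z partialled out."""
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Invented example values (not the study's data): DSS score, age in months, MLU.
dss = [5.1, 6.0, 7.2, 8.4, 9.1, 10.3]
age = [30, 42, 54, 66, 78, 90]
mlu = [2.4, 3.1, 3.8, 4.5, 5.0, 5.6]
print(pearson_r(dss, age), partial_r(dss, age, mlu))
```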
APA, Harvard, Vancouver, ISO, and other styles
6

Michaelis, Hali Anne. "Automated Identification of Relative Clauses in Child Language Samples." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/1997.

Full text
Abstract:
Previously existing computer analysis programs have been unable to correctly identify many complex syntactic structures thus requiring further manual analysis by the clinician. Complex structures, including the relative clause, are of interest in child language samples due to the difference in development between children with and without language impairment. The purpose of this study was to assess the comparability of results from a new automated program, Cx, to results from manual identification of relative clauses. On language samples from 10 children with language impairment (LI), 10 language matched peers (LA), and 10 chronologically age matched peers (CA), a computerized analysis based on probabilities of sequences of grammatical markers agreed with a manual analysis with a Kappa of 0.88.
APA, Harvard, Vancouver, ISO, and other styles
7

Ehlert, Erika E. "Automated Identification of Relative Clauses in Child Language Samples." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3615.

Full text
Abstract:
Relative clauses are grammatical constructions that are of relevance in both typical and impaired language development. Thus, the accurate identification of these structures in child language samples is clinically important. In recent years, computer software has been used to assist in the automated analysis of clinical language samples. However, this software has had only limited success when attempting to identify relative clauses. The present study explores the development and clinical importance of relative clauses and investigates the accuracy of the software used for automated identification of these structures. Two separate collections of language samples were used. The first collection included 10 children with language impairment, ranging in age from 7;6 to 11;1 (years;months), 10 age-matched peers, and 10 language-matched peers. A second collection contained 30 children considered to have typical speech and language skills and who ranged in age from 2;6 to 7;11. Language samples were manually coded for the presence of relative clauses (including those containing a relative pronoun, those without a relative pronoun and reduced relative clauses). These samples were then tagged using computer software and finally tabulated and compared for accuracy. ANACOVA revealed a significant difference in the frequency of relative clauses containing a relative pronoun but not for those without a relative pronoun nor for reduced relative clauses. None of the structures were significantly correlated with age; however, frequencies of both relative clauses with and without relative pronouns were correlated with mean length of utterance. Kappa levels revealed that agreement between manual and automated coding was relatively high for each relative clause type and highest for relative clauses containing relative pronouns.
APA, Harvard, Vancouver, ISO, and other styles
8

Janis, Sarah Elizabeth. "A Comparison of Manual and Automated Grammatical Precoding on the Accuracy of Automated Developmental Sentence Scoring." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/5892.

Full text
Abstract:
Developmental Sentence Scoring (DSS) is a standardized language sample analysis procedure that evaluates and scores a child's use of standard American-English grammatical rules within complete sentences. Automated DSS programs have the potential to increase the efficiency and reduce the amount of time required for DSS analysis. The present study examines the accuracy of one automated DSS software program, DSSA 2.0, compared to manual DSS scoring on previously collected language samples from 30 children between the ages of 2;5 and 7;11 (years;months). Additionally, this study seeks to determine the source of error in the automated score by comparing DSSA 2.0 analysis given manually versus automatically assigned grammatical tag input. The overall accuracy of DSSA 2.0 was 86%; the accuracy of individual grammatical category-point value scores varied greatly. No statistically significant difference was found between the two DSSA 2.0 input conditions (manual vs. automated tags), suggesting that the underlying grammatical tagging is not the primary source of error in DSSA 2.0 analysis.
APA, Harvard, Vancouver, ISO, and other styles
9

Redd, Nicole. "Automated Grammatical Analysis of Language Samples from Spanish-Speaking Children Learning English." BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/410.

Full text
Abstract:
Research has demonstrated that automated grammatical tagging is fast and accurate for both English and Spanish child language, but there has been no research done regarding its accuracy with bilingual children. The present study examined this topic using English and Spanish language samples taken from 254 children living in the United States. The subjects included school-aged children enrolled in public schools in the United States in grades 2, 3, or 5. The present study found high automated grammatical tagging accuracy scores for both English (M = 96.4%) and Spanish (M = 96.8%). The study suggests that automated grammatical analysis has potential to be a valuable tool for clinicians in the analysis of the language of bilingual children.
APA, Harvard, Vancouver, ISO, and other styles
10

Millet, Deborah. "Automated Grammatical Tagging of Language Samples from Children with and without Language Impairment." BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/1139.

Full text
Abstract:
Grammatical classification ("tagging") of words in language samples is a component of syntactic analysis for both clinical and research purposes. Previous studies have shown that probability-based software can be used to tag samples from adults and typically-developing children with high (about 95%) accuracy. The present study found that similar accuracy can be obtained in tagging samples from school-aged children with and without language impairment if the software uses tri-gram rather than bi-gram probabilities and large corpora are used to obtain probability information to train the tagging software.
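For context on the bi-gram versus tri-gram distinction: a tri-gram tagger conditions each tag on the two preceding tags rather than one. The sketch below only illustrates how such probabilities might be estimated from a tagged training corpus; it is not the software evaluated in the thesis, and the corpus is invented:

```python
from collections import defaultdict

def trigram_tag_probs(tagged_corpus):
    """Estimate P(tag_i | tag_{i-2}, tag_{i-1}) from sentences of (word, tag) pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in tagged_corpus:
        tags = ["<s>", "<s>"] + [tag for _, tag in sentence]
        for i in range(2, len(tags)):
            counts[(tags[i - 2], tags[i - 1])][tags[i]] += 1
    return {context: {t: c / sum(nxt.values()) for t, c in nxt.items()}
            for context, nxt in counts.items()}

# Tiny invented sample; real training would use large tagged corpora.
corpus = [[("the", "DET"), ("dog", "N"), ("runs", "V")],
          [("the", "DET"), ("cat", "N"), ("sleeps", "V")]]
print(trigram_tag_probs(corpus)[("<s>", "DET")])  # {'N': 1.0}
```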
APA, Harvard, Vancouver, ISO, and other styles
11

Hasting, Anne M. "Accuracy of Automated Analysis of Language Samples from Persons with Deafness or Hearing Impairment." BYU ScholarsArchive, 2008. https://scholarsarchive.byu.edu/etd/1334.

Full text
Abstract:
Developmental Sentence Scoring (DSS) and the Language Assessment, Remediation, and Screening Procedure (LARSP) are among the more common analyses for syntax and morphology, and automated versions of these analyses have been shown to be effective. This study measured the accuracy of automated DSS and LARSP on the written English output of six prelingually deaf young adults, ranging in age from 18 to 32 years. The samples were analyzed using the DSS and LARSP programs on Computerized Profiling; manual analysis was then performed on the samples. Point-by-point accuracy for DSS and for each level of LARSP was reported. Characteristics of the participants' language at the clause, phrase, and word levels were described and discussed, including the implications for clinicians working with this population.
APA, Harvard, Vancouver, ISO, and other styles
12

Ying, Lishi. "An automated direct sample insertion-inductively coupled plasma spectrometer for environmental sample analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq39610.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Dugendre, Denys A. R. "Integration and operational strategy of a flexible automated system for sample analysis." Thesis, Middlesex University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.568478.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Tanuan, Meyer C. "Automated Analysis of Unified Modeling Language (UML) Specifications." Thesis, University of Waterloo, 2001. http://hdl.handle.net/10012/1140.

Full text
Abstract:
The Unified Modeling Language (UML) is a standard language adopted by the Object Management Group (OMG) for writing object-oriented (OO) descriptions of software systems. UML allows the analyst to add class-level and system-level constraints. However, UML does not describe how to check the correctness of these constraints. Recent studies have shown that Symbolic Model Checking can effectively verify large software specifications. In this thesis, we investigate how to use model checking to verify constraints of UML specifications. We describe the process of specifying, translating and verifying UML specifications for an elevator example. We use the Cadence Symbolic Model Verifier (SMV) to verify the system properties. We demonstrate how to write a UML specification that can be easily translated to SMV. We propose a set of rules and guidelines to translate UML specifications to SMV, and then use these to translate a non-trivial UML elevator specification to SMV. We look at errors detected throughout the specification, translation and verification process, to see how well they reveal errors, ambiguities and omissions in the user requirements.
APA, Harvard, Vancouver, ISO, and other styles
15

Hughes, Andrea Nielson. "Automated Grammatical Tagging of Clinical Language Samples with and Without SALT Coding." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5889.

Full text
Abstract:
Language samples are naturalistic sources of information that supersede many of the limitations found in standardized test administration. Although language samples have clinical utility, they are often time intensive. Despite the usefulness of language samples in evaluation and treatment, clinicians may not perform language sample analyses due to the necessary time commitment. Researchers have developed language sample analysis software that automates this process. Coding schemes such as that used by the Systematic Analysis of Language Transcripts (SALT) software were developed to provide more information regarding appropriate grammatical tag selection. The usefulness of SALT precoding in aiding automated grammatical tagging accuracy was evaluated in this study. Results indicate consistent, overall improvement over an earlier version of the software at the tag level. The software was adept at coding samples from both developmentally normal and language impaired children. No significant differences between tagging accuracy of SALT coded versus non-SALT coded samples were found. As the accuracy of automated tagging software advances, the clinical usefulness of automated grammatical analyses improves, and thus the benefits of time savings may be realized.
APA, Harvard, Vancouver, ISO, and other styles
16

Strycharz, Theodore M. "Analysis of Defense Language Institute automated student questionnaire data." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA319856.

Full text
Abstract:
Thesis (M.S. in Operations Research), Naval Postgraduate School, September 1996.
Thesis advisor(s): H.J. Larson. Includes bibliographical references (p. 39). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
17

Judson, Carrie Ann. "Accuracy of Automated Developmental Sentence Scoring Software." Diss., Brigham Young University, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1448.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Toscano, Jacqueline. "A comparison of language sample elicitation methods for dual language learners." Master's thesis, Temple University Libraries, 2017. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/467819.

Full text
Abstract:
Communication Sciences
M.A.
Language sample analysis has come to be considered the “gold standard” approach for cross-cultural language assessment. Speech-language pathologists assessing individuals of multicultural or multilinguistic backgrounds have been recommended to utilize this approach in these evaluations (e.g., Pearson, Jackson, & Wu, 2014; Heilmann & Westerveld, 2013). Language samples can be elicited with a variety of different tasks, and selection of a specific method by SLPs is often a major part of the assessment process. The present study aims to facilitate the selection of sample elicitation methods by identifying the method that elicits a maximal performance of language abilities and variation in children’s oral language samples. Analyses were performed on Play, Tell, and Retell methods across 178 total samples and it was found that Retell elicited higher measures of syntactic complexity (i.e., TTR, SI, MLUw) than Play as well as a higher TTR (i.e., lexical diversity) and SI (i.e., clausal density) than Tell; however, no difference was found between Tell and Retell for MLUw (i.e., syntactic complexity/productivity), nor was there a difference found between Tell and Play for TTR. Additionally, it was found that the two narrative methods elicited higher DDM (i.e., frequency of dialectal variation) than the Play method. No significant difference was found between Tell and Retell for DDM. Implications for the continued use of language samples for assessment of speech and language are discussed.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
19

Mooney, Aine M. "Language Sample Collection and Analysis in People Who Use AAC: A New Approach." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1554294907619342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Linke, Elizabeth A. "Assessing the Usage Ratings of an Automated Language Intervention." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354295856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Stumpf, Fabian [Verfasser], and Roland [Akademischer Betreuer] Zengerle. "Automated microfluidic nucleic acid analysis for single-cell and sample-to-answer applications / Fabian Stumpf ; Betreuer: Roland Zengerle." Freiburg : Albert-Ludwigs-Universität Freiburg, 2017. http://d-nb.info/1126922102/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Sommers, Alexander Mitchell. "EXPLORING PSEUDO-TOPIC-MODELING FOR CREATING AUTOMATED DISTANT-ANNOTATION SYSTEMS." OpenSIUC, 2021. https://opensiuc.lib.siu.edu/theses/2862.

Full text
Abstract:
We explore the use of a Latent Dirichlet Allocation (LDA)-imitating pseudo-topic-model, based on our original relevance metric, as a tool to facilitate distant annotation of short (often one to two sentences or fewer) documents. Our exploration manifests as annotating tweets for emotions, this being the current use-case of interest to us, but we believe the method could be extended to any multi-class labeling task of documents of similar length. Tweets are gathered via the Twitter API using "track" terms thought likely to capture tweets with a greater chance of exhibiting each emotional class, 3,000 tweets for each of 26 topics anticipated to elicit emotional discourse. Our pseudo-topic-model is used to produce relevance-ranked vocabularies for each corpus of tweets and these are used to distribute emotional annotations to those tweets not manually annotated, magnifying the number of annotated tweets by a factor of 29. The vector labels the annotators produce for the topics are cascaded out to the tweets via three different schemes which are compared for performance by proxy through the competition of bidirectional-LSTMs trained using the tweets labeled at a distance. An SVM and two emotionally annotated vocabularies are also tested on each task to provide context and comparison.
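The relevance metric itself is original to the thesis and is not reproduced here. As generic context only, a relevance-ranked vocabulary for a topic corpus can be built by scoring terms against a background corpus; the function and the toy tweets below are illustrative assumptions, not the author's method:

```python
import math
from collections import Counter

def relevance_ranked_vocab(topic_docs, background_docs, top_k=20):
    """Rank terms by smoothed log-ratio of topic frequency to background frequency."""
    topic = Counter(w for doc in topic_docs for w in doc.lower().split())
    background = Counter(w for doc in background_docs for w in doc.lower().split())
    t_total, b_total = sum(topic.values()), sum(background.values())
    score = {w: math.log(((c + 1) / (t_total + 1)) /
                         ((background[w] + 1) / (b_total + 1)))
             for w, c in topic.items()}
    return sorted(score, key=score.get, reverse=True)[:top_k]

# Invented tweets, standing in for a corpus gathered with emotion "track" terms.
joy_tweets = ["so happy today", "happy happy joy", "what a great day"]
all_tweets = joy_tweets + ["traffic was awful", "so tired of this rain"]
print(relevance_ranked_vocab(joy_tweets, all_tweets, top_k=5))
```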
APA, Harvard, Vancouver, ISO, and other styles
23

Kremser, Andreas [Verfasser], and Torsten Claus [Akademischer Betreuer] Schmidt. "Advances in automated sample preparation for gas chromatography : solid-phase microextraction, headspace-analysis, solid-phase extraction / Andreas Kremser ; Betreuer: Torsten Claus Schmidt." Duisburg, 2016. http://d-nb.info/1116941864/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Arendse, Danille. "Evaluating the structural equivalence of the English and isiXhosa versions of the Woodcock Munoz language survey on matched sample groups." Thesis, University of the Western Cape, 2009. http://hdl.handle.net/11394/3156.

Full text
Abstract:
The diversity embodying South Africa has emphasized the importance and influence of language in education and thus the additive bilingual programme is being implemented in the Eastern Cape by the ABLE project in order to realize the South African Language in education policy (LEiP). In accordance with this, the Woodcock Munoz Language Survey (which specializes in measuring cognitive academic language proficiency) was chosen as one of the instruments to evaluate the language outcomes of the programme and was adapted into South African English and isiXhosa. The current study was a subset of the ABLE project, and was located within the bigger project dealing with the translation of the WMLS into isiXhosa and the successive research on the equivalence of the two language versions. This study evaluated the structural equivalence of the English and isiXhosa versions of the WMLS on matched sample groups (n = 150 in each language group). Thus secondary data analysis (SDA) was conducted by analyzing the data in SPSS as well as CEFA (Comprehensive Exploratory Factor Analysis). The original data set was purposively sampled according to set selection criteria and consists of English and isiXhosa first language learners. The study sought to confirm previous research by cross-validating the results of structural equivalence on two subscales, namely the Verbal Analogies (VA) and Letter-Word Identification (LWI) subscale. The research design reflects psychometric test theory and is therefore located in a bias and equivalence theoretical framework. The results of the exploratory factor analysis found that one can only accept structural equivalence in the first factor identified in the VA subscale, while structural equivalence was found in the factor for the LWI subscale. The use of scatter-plots to validate the results of the exploratory factor analysis indicated that one can tentatively accept these results. The study thus contributed to the literature on the translation of the WMLS, and the adaptation of language tests into the indigenous languages of South Africa, as well as additive bilingual programmes.
Magister Artium (Psychology) - MA(Psych)
APA, Harvard, Vancouver, ISO, and other styles
25

Parashar, Ayush S. "Representation and Interpretation of Manual and Non-Manual Information for Automated American Sign Language Recognition." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Singer, David. "The effect of instruction in computerized language sample analysis on the knowledge and comfort level of graduate student clinicians." Thesis, California State University, Long Beach, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=1523068.

Full text
Abstract:

This thesis describes a preexperimental, within-subject, pretest-posttest design used to measure the impact of an in-service training about computerized language sample analysis (CLSA) on the knowledge, comfort level, and implementation practices of 21 graduate students in Communicative Disorders enrolled at California State University, Long Beach. Qualitative and quantitative data were collected through three surveys: one delivered during clinical practicum didactic sessions prior to the training, one on the day of the training, and one survey delivered 12 weeks post-training after the graduate student clinicians had an opportunity to use the computer program they learned about in the training. Results indicated that CLSA knowledge, comfort level and likelihood of implementation increased slightly immediately following the training, but were found to decline over time due to lack of exposure and practice. However, these results were not statistically significant. Findings are discussed as they relate to the current speech-language pathology literature, and possible avenues for further research into this area are explored.

APA, Harvard, Vancouver, ISO, and other styles
27

Anders, Lisa Mae. "Lab on a chip rare cell isolation platform with dielectrophoretic smart sample focusing, automated whole cell tracking analysis script, and a bioinspired on-chip electroactive polymer micropump." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/49614.

Full text
Abstract:
Dielectrophoresis (DEP), an electrokinetic force, is the motion of a polarizable particle in a non-uniform electric field. Contactless DEP (cDEP) is a recently developed cell sorting and isolation technique that uses the DEP force by capacitively coupling the electrodes across the channel. The cDEP platform sorts cells based on intrinsic biophysical properties, is inexpensive, maintains a sterile environment by using disposable chips, is a rapid process with minimal sample preparation, and allows for immediate downstream recovery. This platform is highly competitive compared to other cell sorting techniques and is one of the only platforms to sort cells based on phenotype, allowing for the isolation of unique cell populations not possible in other systems. The original purpose of this work was to determine differences in the bioelectrical fingerprint between several critical cancer types. Results demonstrate a difference between Tumor Initiating Cells, Multiple Drug Resistant Cells, and their bulk populations for experiments conducted on three prostate cancer cell lines and treated and untreated MOSE cells. However, three significant issues confounded these experiments and challenged the use of the cDEP platform. The purpose of this work then became the development of solutions to these barriers and presenting a more commercializable cDEP platform. An improved analysis script was first developed that performs whole cell detection and cell tracking with an accuracy of 93.5%. Second, a loading system for doing smart sample handling, specifically cell focusing, was developed using a new in-house system and validated. Experimental results validated the model and showed that cells were successfully focused into a tight band in the middle of the channel. Finally, a proof of concept for an on-chip micropump is presented and achieved 4.5% in-plane deformation. When bonded over a microchannel, fluid flow was induced and measured. These solutions present a stronger, more versatile cDEP platform and make for a more competitive commercial product. However, these solutions are not just limited to the cDEP platform and may be applicable to multitudes of other microfluidic devices and applications.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
28

Sunil, Kamalakar FNU. "Automatically Generating Tests from Natural Language Descriptions of Software Behavior." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23907.

Full text
Abstract:
Behavior-Driven Development (BDD) is an emerging agile development approach where all stakeholders (including developers and customers) work together to write user stories in structured natural language to capture a software application's functionality in terms of required "behaviors". Developers then manually write "glue" code so that these scenarios can be executed as software tests. This glue code represents individual steps within unit and acceptance test cases, and tools exist that automate the mapping from scenario descriptions to manually written code steps (typically using regular expressions). Instead of requiring programmers to write manual glue code, this thesis investigates a practical approach to convert natural language scenario descriptions into executable software tests fully automatically. To show feasibility, we developed a tool called Kirby that uses natural language processing techniques, code information extraction and probabilistic matching to automatically generate executable software tests from structured English scenario descriptions. Kirby relieves the developer from the laborious work of writing code for the individual steps described in scenarios, so that both developers and customers can focus on the scenarios as pure behavior descriptions (understandable to all, not just programmers). Results from assessing the performance and accuracy of this technique are presented.
Master of Science
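The 'glue' referred to above is conventionally a regular-expression mapping from scenario steps to step functions, as in Cucumber-style tools. A hand-rolled sketch of that convention (not Kirby's implementation; the steps and functions are invented):

```python
import re

STEP_DEFINITIONS = []

def step(pattern):
    """Register a glue function for scenario steps matching a regular expression."""
    def register(func):
        STEP_DEFINITIONS.append((re.compile(pattern), func))
        return func
    return register

@step(r'the user enters "(.+)" into the search box')
def enter_search_term(term):
    print(f"typing: {term}")

@step(r"(\d+) results are shown")
def check_result_count(count):
    print(f"asserting result count == {count}")

def run_step(text):
    """Dispatch one natural-language step to its matching glue function."""
    for pattern, func in STEP_DEFINITIONS:
        match = pattern.search(text)
        if match:
            return func(*match.groups())
    raise LookupError(f"no glue code for step: {text}")

run_step('the user enters "noun clauses" into the search box')
run_step("10 results are shown")
```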
APA, Harvard, Vancouver, ISO, and other styles
29

Goncalves, Joao Rafael Landeiro De sousa. "Impact analysis in description logic ontologies." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/impact-analysis-in-description-logic-ontologies(87ee476a-c690-44b5-bd4c-b9afbdf7a0a0).html.

Full text
Abstract:
With the growing popularity of the Web Ontology Language (OWL) as a logic-based ontology language, as well as advancements in the language itself, the need for more sophisticated and up-to-date ontology engineering services increases as well. While, for instance, there is active focus on new reasoners and optimisations, other services fall short of advancing at the same rate (it suffices to compare the number of freely-available reasoners with ontology editors). In particular, very little is understood about how ontologies evolve over time, and how reasoners’ performance varies as the input changes. Given the evolving nature of ontologies, detecting and presenting changes (via a so-called diff) between them is an essential engineering service, especially for version control systems or to support change analysis. In this thesis we address the diff problem for description logic (DL) based ontologies, specifically OWL 2 DL ontologies based on the SROIQ DL. The outcomes are novel algorithms employing both syntactic and semantic techniques to, firstly, detect axiom changes, and what terms had their meaning affected between ontologies, secondly, categorise their impact (for example, determining that an axiom is a stronger version of another), and finally, align changes appropriately, i.e., align source and target of axiom changes (so the stronger axiom with the weaker one, from our example), and axioms with the terms they affect. Subsequently, we present a theory of reasoner performance heterogeneity, based on field observations related to reasoner performance variability phenomena. Our hypothesis is that there exist two kinds of performance behaviour: an ontology/reasoner combination can be performance-homogeneous or performance-heterogeneous. Finally, we verify that performance-heterogeneous reasoner/ontology combinations contain small, performance-degrading sets of axioms, which we call hot spots. We devise a performance hot spot finding technique, and show that hot spots provide a promising basis for engineering efficient reasoners.
APA, Harvard, Vancouver, ISO, and other styles
30

Mulalo, Mpilo. "Validation of the students’ life satisfaction scale among a sample of children in south africa: multi-group analysis across three language groups." University of the Western Cape, 2020. http://hdl.handle.net/11394/7572.

Full text
Abstract:
Magister Artium (Psychology) - MA(Psych)
While research into children’s subjective well-being (SWB) has advanced over the past decade, there is a paucity of cross-cultural research, particularly in South Africa. Moreover, while the adaptation and validation of instruments in English and Afrikaans are evident, other language groups have not received much attention. This study aimed to provide structural validation of the Students’ Life Satisfaction Scale across a sample of children in South Africa using multi-group analysis across three language groups (Setswana, Xitsonga, and Tshivenda). Within this process, the study aimed to use multi-group confirmatory factor analysis (MGCFA) to compare the structural validity and measurement invariance of the three language groups. Finally, the study aimed to determine the convergent validity of the three language groups of the SLSS by regressing them onto the single-item Overall Life Satisfaction Scale (OLS). The study uses data from Wave 3 of the South African Children’s Worlds Study and included a sample of 625 children across the language groups (Setswana: n = 187; Sesotho: n = 170; and Tshivenda: n = 268). For the overall pooled sample an excellent fit was obtained for a single-factor model, including one error-covariance. Standardised regression weights of the items ranged between .43 and .73. MGCFA revealed an acceptable fit for the configural model (unconstrained loadings); however, metric (constrained loadings) and scalar invariance (constrained loadings and intercepts) was not tenable. However, through the application of partial constraints metric invariance was tenable when Item 5 (I like my life) was freely estimated, while scalar invariance was tenable when Item 1 (I enjoy my life) and Item 5 (I like my life) were freely estimated. The results suggest that the Items: My life is going well; I have a good life; The things in my life are excellent; and I am happy with my life, are comparable by correlations, regression coefficients, and latent mean scores across the three language groups. Convergent validity using the OLS was obtained for the pooled sample and across the language groups. The key contribution of the study is establishing that the Setswana, Sesotho, and Tshivenda translated and adapted versions of the SLSS are valid for use within the South African context to measure children’s SWB, and that they can be grouped together in an overall pooled sample.
APA, Harvard, Vancouver, ISO, and other styles
31

Herz, Alexander [Verfasser], Helmut [Akademischer Betreuer] Seidl, and Sebastian [Akademischer Betreuer] Hack. "Programming Language Design, Analysis and Implementation for Automated and Effective Program Parallelization / Alexander Herz. Betreuer: Helmut Seidl. Gutachter: Helmut Seidl ; Sebastian Hack." München : Universitätsbibliothek der TU München, 2015. http://d-nb.info/1079654941/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Birston, Paul. "Jesus' powerful use of language for effective preaching: a sample rhetorical analysis of his hyperbole for the judging hypocrite (Matthew 7:1-5)." Theological Research Exchange Network (TREN), 2007. http://www.tren.com/search.cfm?p018-0110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Heinz, Peter Josef. "Towards enhanced, authentic second language reading comprehension assessment, research, and theory building: the development and analysis of an automated recall protocol scoring system." The Ohio State University, 1993. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1244829655.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Salov, Aleksandar. "Towards automated learning from software development issues : Analyzing open source project repositories using natural language processing and machine learning techniques." Thesis, Linnéuniversitetet, Institutionen för medieteknik (ME), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-66834.

Full text
Abstract:
This thesis presents an in-depth investigation on the subject of how natural language processing and machine learning techniques can be utilized in order to perform a comprehensive analysis of programming issues found in different open source project repositories hosted on GitHub. The research is focused on examining issues gathered from a number of JavaScript repositories based on their user generated textual description. The primary goal of the study is to explore how natural language processing and machine learning methods can facilitate the process of identifying and categorizing distinct issue types. Furthermore, the research goes one step further and investigates how these same techniques can support users in searching for potential solutions to these issues. For this purpose, an initial proof-of-concept implementation is developed, which collects over 30 000 JavaScript issues from over 100 GitHub repositories. Then, the system extracts the titles of the issues, cleans and processes the data, before supplying it to an unsupervised clustering model which tries to uncover any discernible similarities and patterns within the examined dataset. What is more, the main system is supplemented by a dedicated web application prototype, which enables users to utilize the underlying machine learning model in order to find solutions to their programming related issues. Furthermore, the developed implementation is meticulously evaluated through a number of measures. First of all, the trained clustering model is assessed by two independent groups of external reviewers - one group of fellow researchers and another group of practitioners in the software industry, so as to determine whether the resulting categories contain distinct types of issues. Moreover, in order to find out if the system can facilitate the search for issue solutions, the web application prototype is tested in a series of user sessions with participants who are not only representative of the main target group which can benefit most from such a system, but who also have a mixture of both practical and theoretical backgrounds. The results of this research demonstrate that the proposed solution can effectively categorize issues according to their type, solely based on the user generated free-text title. This provides strong evidence that natural language processing and machine learning techniques can be utilized for analyzing issues and automating the overall learning process. However, the study was unable to conclusively determine whether these same methods can aid the search for issue solutions. Nevertheless, the thesis provides a detailed account of how this problem was addressed and can therefore serve as the basis for future research.
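As a rough illustration of the kind of pipeline described (free-text issue titles vectorized and handed to an unsupervised clustering model), the sketch below uses scikit-learn; the thesis does not necessarily use these libraries, and the issue titles are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented issue titles standing in for titles collected from GitHub repositories.
titles = [
    "TypeError: cannot read property 'map' of undefined",
    "Promise never resolves when the request times out",
    "Docs: clarify installation steps for Windows",
    "Memory leak after repeated re-render",
    "Add TypeScript type definitions",
    "Build fails with node 8 on CI",
]

features = TfidfVectorizer(stop_words="english").fit_transform(titles)
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
for title, label in zip(titles, model.labels_):
    print(label, title)
```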
APA, Harvard, Vancouver, ISO, and other styles
35

Paterson, Kimberly Laurel Ms. "TSPOONS: Tracking Salience Profiles Of Online News Stories." DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1222.

Full text
Abstract:
News space is a relatively nebulous term that describes the general discourse concerning events that affect the populace. Past research has focused on qualitatively analyzing news space in an attempt to answer big questions about how the populace relates to the news and how they respond to it. We want to ask when do stories begin? What stories stand out among the noise? In order to answer the big questions about news space, we need to track the course of individual stories in the news. By analyzing the specific articles that comprise stories, we can synthesize the information gained from several stories to see a more complete picture of the discourse. The individual articles, the groups of articles that become stories, and the overall themes that connect stories together all complete the narrative about what is happening in society. TSPOONS provides a framework for analyzing news stories and answering two main questions: what were the important stories during some time frame and what were the important stories involving some topic. Drawing technical news stories from Techmeme.com, TSPOONS generates profiles of each news story, quantitatively measuring the importance, or salience, of news stories as well as quantifying the impact of these stories over time.
APA, Harvard, Vancouver, ISO, and other styles
36

Dyremark, Johanna, and Caroline Mayer. "Bedömning av elevuppsatser genom maskininlärning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-262041.

Full text
Abstract:
Today, a large part of a teacher's workload consists of essay scoring, and there is considerable variability between teachers' grades. This report examines what accuracy can be achieved with an automated essay scoring system for Swedish. Three machine learning models for classification are trained and tested with 5-fold cross-validation on essays from Swedish national tests: Linear Discriminant Analysis, K-Nearest Neighbour and Random Forest. Essays are classified based on 31 language- and structure-related attributes such as token-based length measures, similarity to texts of different levels of formality, and use of grammar. The results show a maximal quadratic weighted kappa value of 0.4829 and a grading identical to the expert assessment in 57.53% of all tests. These results were achieved by a model based on Linear Discriminant Analysis, which showed higher inter-rater reliability with expert grading than a local teacher. Despite the ongoing digitalization of the Swedish educational system, a number of obstacles stand in the way of fully automated essay scoring, such as users' attitudes, ethical issues, and the current techniques' difficulties in understanding semantics. Nevertheless, partial integration of automatic essay scoring has the potential to effectively identify essays suitable for double grading, which can increase the consistency of large-scale tests at low cost.
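Quadratic weighted kappa, the agreement statistic reported above, penalizes disagreements by the squared distance between grade levels. A generic implementation for context (not the thesis code; the grades are invented):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_classes):
    """Quadratic weighted kappa between two integer ratings in [0, n_classes)."""
    observed = np.zeros((n_classes, n_classes))
    for a, b in zip(rater_a, rater_b):
        observed[a, b] += 1
    observed /= observed.sum()
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    i, j = np.indices((n_classes, n_classes))
    weights = (i - j) ** 2 / (n_classes - 1) ** 2
    return 1 - (weights * observed).sum() / (weights * expected).sum()

# Invented essay grades on a four-point scale: expert rater versus model.
expert = [0, 1, 2, 3, 2, 1, 0, 3]
model = [0, 1, 1, 3, 2, 2, 0, 2]
print(quadratic_weighted_kappa(expert, model, n_classes=4))
```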
APA, Harvard, Vancouver, ISO, and other styles
37

Seal, Amy. "Scoring Sentences Developmentally: An Analog of Developmental Sentence Scoring." BYU ScholarsArchive, 2002. https://scholarsarchive.byu.edu/etd/1141.

Full text
Abstract:
A variety of tools have been developed to assist in the quantification and analysis of naturalistic language samples. In recent years, computer technology has been employed in language sample analysis. This study compares a new automated index, Scoring Sentences Developmentally (SSD), to two existing measures. Eighty samples from three corpora were manually analyzed using DSS and MLU and then processed by the automated software. Results show all three indices to be highly correlated, with correlations ranging from .62 to .98. The high correlations among scores support further investigation of the psychometric characteristics of the SSD software to determine its clinical validity and reliability. Results of this study suggest that SSD has the potential to complement other analysis procedures in assessing the language development of young children.
APA, Harvard, Vancouver, ISO, and other styles
38

Silveira, Gabriela. "Narrativas produzidas por indivíduos afásicos e indivíduos cognitivamente sadios: análise computadorizada de macro e micro estrutura." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/5/5170/tde-01112018-101055/.

Full text
Abstract:
INTRODUCTION: Analysis of aphasic discourse provides important information about the phonological, morphological, syntactic, semantic and pragmatic aspects of the language of patients who have suffered a stroke. Discourse evaluation, along with other methods, can contribute to tracking the evolution of language and communication in aphasic patients; however, manual analysis is laborious and error-prone. OBJECTIVES: (1) to analyze, using computerized technologies, macro- and microstructural aspects of the discourse of cognitively healthy individuals, individuals with Broca's aphasia, and individuals with anomic aphasia; (2) to explore discourse as an indicator of the evolution of aphasia; (3) to analyze the contribution of single photon emission computed tomography (SPECT) to verifying the correlation between behavioral and neuroimaging evolution data. METHOD: Two groups of patients were studied: GA1, consisting of eight individuals with Broca's and anomic aphasia, analyzed longitudinally from the sub-acute phase of the lesion and after three and six months; GA2, composed of 15 individuals with Broca's and anomic aphasia with varying times since stroke onset; and GC, consisting of 30 cognitively healthy participants. Computerized technologies were explored for the analysis of metrics related to the micro- and macrostructure of discourses elicited from the Cinderella story and the Cookie Theft picture. RESULTS: Comparing GC and GA2 on discourse macrostructure, the GA2 aphasic participants differed significantly from GC in the total number of propositions produced; for the microstructure, seven metrics differentiated the two groups. There was a significant difference in macro- and microstructure between the discourses of participants with Broca's aphasia and those with anomic aphasia. Differences in macro- and microstructure measures were observed in GA1 as time since injury advanced. In GA1, the comparison between the sub-acute phase and six months post-stroke revealed macrostructural differences, namely an increase in the number of propositions in the orientation block and in total propositions. Regarding the microstructure, the initial measures of syllables per content word, incidence of nouns and incidence of content words differed after six months of intervention. The incidence of words missing from the dictionary was significantly lower three months after the stroke. The Cinderella story provided more complete microstructure data than the Cookie Theft picture. SPECT findings did not change over time and did not reflect the evolution of the aphasia. CONCLUSION: The discourse produced from the Cinderella story and the Cookie Theft picture generated material for macrostructure and microstructure analysis of cognitively healthy and aphasic individuals, made it possible to quantify and qualify the evolution of language across different phases of stroke recovery, and distinguished the behavior of healthy participants from that of participants with Broca's and anomic aphasia on macro- and microstructure aspects. The computerized tools facilitated analysis of the microstructure data but were not applicable to the macrostructure, showing that tool adjustments are needed for the analysis of patients' discourse. SPECT data did not reflect the behavioral improvement in the language of the aphasic participants.
APA, Harvard, Vancouver, ISO, and other styles
39

Copple, Blake Robert. "Development of a fully automated rapid irradiated sample transport system for neutron activation analysis." Thesis, 2014. http://hdl.handle.net/2152/28282.

Full text
Abstract:
The need for trace, minor and main element analysis becomes more prevalent each year with an ever-expanding variety of applications. Neutron Activation Analysis (NAA) is an attractive non-destructive analysis tool that can be utilized on small samples regardless of what physical state the material is in. The analysis process, however, typically requires researchers to physically handle a radioactive sample in order to transport the sample to detection systems for data gathering. The purpose of this project was to design a Fully Automated Rapid Irradiated Sample Transit (FARIST) system that could deliver samples into a reactor core and then transfer them to a detector for analysis with zero human interaction. The system would be designed to hold up to 30 samples prior to analysis with the irradiation, decay, and counting times programmed in initially so that once analysis was initiated, no user interaction was required for the next 29 samples. The last requirement of the system was that it supports cyclic NAA. This work discusses the science and history behind NAA as well as the design, construction, installation, and testing of the new FARIST system.
APA, Harvard, Vancouver, ISO, and other styles
40

Hsu, Chia-En, and 許嘉恩. "The Design and Implementation of an Automated Platform to Verify Trading Programs’ In-Sample Out-of-Sample Robustness Using Walk-Forward Analysis." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/2u875g.

Full text
Abstract:
Master's thesis
National Central University
Department of Information Management
Academic year 106 (ROC calendar)
In recent years, automated order placement through program trading has become common in stock and futures markets. However, it is difficult to avoid curve-fitting when a trading strategy is back-tested, because historical data are easily over-used. Past research on trading strategies has lacked a complete process for verifying their effectiveness and stability; most related work only discusses changes in strategy parameters, comparisons between strategies, or the selection of different symbols, and these analyses offer little help for actual trading. In practice, walk-forward analysis must be used to evaluate strategy performance continuously within each window in order to avoid over-optimization. To address these problems, this research designs a complete strategy verification process and implements a platform based on it. The platform can work with common program trading platforms (e.g., MultiCharts, AmiBroker) to automatically verify the effectiveness and stability of a strategy and to help investors find suitable parameters before live trading.
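Walk-forward analysis, as described above, repeatedly optimizes a strategy on an in-sample window and then evaluates the chosen parameters on the adjacent out-of-sample window. A schematic sketch of the window rolling, with placeholder optimize and backtest callables rather than any real trading strategy:

```python
def walk_forward_windows(n_bars, in_sample, out_of_sample):
    """Yield (in-sample, out-of-sample) index ranges for walk-forward analysis."""
    start = 0
    while start + in_sample + out_of_sample <= n_bars:
        yield (range(start, start + in_sample),
               range(start + in_sample, start + in_sample + out_of_sample))
        start += out_of_sample  # roll forward by one out-of-sample step

def walk_forward(prices, optimize, backtest, in_sample=500, out_of_sample=100):
    """Fit parameters on each in-sample window, then evaluate them out-of-sample."""
    results = []
    for ins, outs in walk_forward_windows(len(prices), in_sample, out_of_sample):
        params = optimize([prices[i] for i in ins])          # fit on history only
        results.append(backtest([prices[i] for i in outs], params))
    return results  # concatenated out-of-sample results approximate live robustness

# Toy demonstration with stand-in callables (a real platform would plug in a strategy).
prices = list(range(1200))
profits = walk_forward(prices, optimize=lambda p: {"ma": 20},
                       backtest=lambda p, params: 0.0)
print(len(profits))  # number of out-of-sample windows evaluated
```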
APA, Harvard, Vancouver, ISO, and other styles
41

Tsai, I.-Fang, and 蔡宜芳. "A Study of Chinese Language Sample Analysis for 3-5 Years Old Children." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/20595608719839220513.

Full text
Abstract:
Master's thesis
Taipei Municipal University of Education
Master's Program in Communication Disorders
Academic year 97 (ROC calendar)
The purpose of this study was to develop a Chinese language sample analysis procedure for clinical language assessment with established reliability and validity. The two major aspects of language sample analysis were semantics and syntax, and the study explored indexes of semantic and syntactic development. Participants were one hundred and forty-two children aged 3 to 5 years from three public nurseries in Taipei City. The language sample collection procedure covered four contexts: conversation (about school life and home life respectively), story-retelling, and free play. The language samples were transcribed and analyzed with CHILDES (Child Language Data Exchange System), using the CHAT transcription conventions and the CLAN programs. Independent-samples t-tests and Pearson product-moment correlation coefficients were conducted to examine construct validity and concurrent validity. The results revealed that the reliability of MLU-c (mean length of utterance in characters) was better than that of MLU-w (MLU in words) because Chinese word segmentation is not clearly defined. In addition, MLU-c and MLU-w were extremely highly correlated. These findings suggest that MLU-c may be the more practical language index in clinical settings. According to the results, MLU-c, MLU-w, the mean length of the longest utterances in characters (MLU5-c), the mean length of the longest utterances in words (MLU5-w), lexical diversity (D), the quantity of preposition-conjunction function words (PC), and the corrected type-token ratio (CTTR) are valid indexes of language development. In addition, this study found no significant relationship between the semantic and syntactic indexes; that is, the two aspects measure different language abilities. Therefore, multiple aspects of language sample analysis are necessary to describe a child's language ability objectively and completely.
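For reference, the main indexes compared in this study can be computed directly once utterances are transcribed and (for the word-based measures) segmented: MLU-c is mean utterance length in characters, MLU-w in words, and CTTR is the type count corrected for sample size. A sketch with invented, pre-segmented utterances:

```python
import math

def mlu_c(utterances):
    """Mean length of utterance in characters (spaces ignored)."""
    return sum(len(u.replace(" ", "")) for u in utterances) / len(utterances)

def mlu_w(utterances):
    """Mean length of utterance in (pre-segmented) words."""
    return sum(len(u.split()) for u in utterances) / len(utterances)

def cttr(utterances):
    """Corrected type-token ratio: types / sqrt(2 * tokens)."""
    tokens = [w for u in utterances for w in u.split()]
    return len(set(tokens)) / math.sqrt(2 * len(tokens))

# Invented, already word-segmented child utterances.
sample = ["我 要 吃 蘋果", "媽媽 在 家", "我 喜歡 小狗"]
print(mlu_c(sample), mlu_w(sample), cttr(sample))
```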
APA, Harvard, Vancouver, ISO, and other styles
42

Gruzd, Anatoliy A., and Caroline Haythornthwaite. "Automated Discovery and Analysis of Social Networks from Threaded Discussions." 2008. http://hdl.handle.net/10150/105081.

Full text
Abstract:
To gain greater insight into the operation of online social networks, we applied Natural Language Processing (NLP) techniques to text-based communication to identify and describe underlying social structures in online communities. This paper presents our approach and preliminary evaluation for content-based, automated discovery of social networks. Our research question is: What syntactic and semantic features of postings in threaded discussions help uncover explicit and implicit ties between network members, and which provide a reliable estimate of the strengths of interpersonal ties among the network members? To evaluate our automated procedures, we compare the results from the NLP processes with social networks built from basic who-to-whom data, and a sample of hand-coded data derived from a close reading of the text. For our test case, and as part of ongoing research on networked learning, we used the archive of threaded discussions collected over eight iterations of an online graduate class. We first associate personal names and nicknames mentioned in the postings with class participants. Next we analyze the context in which each name occurs in the postings to determine whether or not there is an interpersonal tie between the sender of the posting and a person mentioned in it. Because information exchange is a key factor in the operation and success of a learning community, we estimate and assign weights to the ties by measuring the amount of information exchanged between each pair of nodes; information in this case is operationalized as counts of important concept terms in the postings as derived through the NLP analyses. Finally, we compare the resulting network(s) against those derived from other means, including basic who-to-whom data derived from posting sequences (e.g., whose postings follow whose). In this comparison we evaluate what is gained in understanding network processes by our more elaborate analyses.
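The tie-weighting step can be caricatured in a few lines: whenever a posting names another participant, weight that tie by the number of concept terms the posting contains. The Python sketch below uses invented postings and a hand-picked concept list; the actual system relies on much fuller NLP processing.

from collections import Counter

# Invented postings: (sender, participant mentioned by name or None, text).
postings = [
    ("alice", "bob",   "Bob, the scaffolding idea you raised fits our design task."),
    ("bob",   "alice", "Alice, agreed, scaffolding and feedback loops both matter here."),
    ("carol", None,    "Has anyone compared feedback loops across the two readings?"),
]

concept_terms = {"scaffolding", "feedback", "design"}   # stand-in for NLP-derived concepts

def tie_weights(posts, concepts):
    """Weight each sender -> mentioned-person tie by the concept terms in the posting."""
    weights = Counter()
    for sender, mentioned, text in posts:
        if mentioned is None:
            continue                                    # no explicit interpersonal tie
        words = {w.strip(".,?!").lower() for w in text.split()}
        weights[(sender, mentioned)] += len(words & concepts)
    return weights

print(tie_weights(postings, concept_terms))             # {('alice', 'bob'): 2, ('bob', 'alice'): 2}

The resulting weighted edge list can then be compared against a network built purely from who-replies-to-whom sequences, which is the comparison the paper describes.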
APA, Harvard, Vancouver, ISO, and other styles
43

Cerveira, João Miguel dos Santos. "Automated Metrics System to Support Software Development Process with Natural Language Assistant." Master's thesis, 2017. http://hdl.handle.net/10316/83083.

Full text
Abstract:
Master's dissertation in Informatics Engineering presented to the Faculdade de Ciências e Tecnologia
Whitesmith is a software development and product consulting company that uses a variety of monitoring tools to support its product development process. For this method to work well, several data repositories covering all development planning and monitoring must be maintained, and this information must be stored in tools that are easy to reach and quick to understand. To meet this need, a number of tools able to store and manipulate information have appeared on the market to aid software development. As the company has grown, a large amount of information has become distributed across these tools, so analyzing the development stage of a given project requires searching for the information and entering it manually. This created the need for a solution that not only collects all the information but also analyzes the development status of every project. To avoid adding friction to the development process, the solution must require minimal human-computer interaction, with the entire process automated. The only interaction requested by the company was the integration of a natural language assistant into the communication platform used by all members, in order to improve the usability of information collection; this communication flows in both directions depending on the metric in question.
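The overall shape of such a system, periodic collection from several tool APIs into a shared metrics store plus a chat-triggered query, can be sketched as follows. Every collector, metric name, and chat phrase here is a hypothetical placeholder rather than Whitesmith's actual implementation.

# Illustrative architecture sketch: collectors feed a shared metrics store,
# and a chat assistant answers simple queries from it. All names are made up.

metrics_store = {}          # project -> {metric_name: value}

def collect_issue_tracker(project):
    # A real collector would call the issue tracker's HTTP API here.
    return {"open_issues": 12, "closed_last_week": 7}

def collect_ci_server(project):
    # A real collector would query the CI server instead of returning constants.
    return {"failed_builds": 1, "avg_build_minutes": 6.4}

def refresh(project):
    """Run every collector for a project and merge the resulting metrics."""
    merged = {}
    for collector in (collect_issue_tracker, collect_ci_server):
        merged.update(collector(project))
    metrics_store[project] = merged

def assistant_reply(message):
    """Tiny keyword-based stand-in for the natural language assistant."""
    text = message.lower()
    for project, metrics in metrics_store.items():
        if project in text:
            if "build" in text:
                return f"{project}: {metrics['failed_builds']} failed builds, avg {metrics['avg_build_minutes']} min."
            return f"{project}: {metrics['open_issues']} open issues."
    return "Which project do you mean?"

refresh("atlas")
print(assistant_reply("How are builds on atlas doing?"))

The design point is that collection and analysis run on a schedule with no human input, and the chat assistant is only a thin query layer over the already-aggregated metrics.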
APA, Harvard, Vancouver, ISO, and other styles
44

Zhang, Lei. "DASE: Document-Assisted Symbolic Execution for Improving Automated Test Generation." Thesis, 2014. http://hdl.handle.net/10012/8532.

Full text
Abstract:
Software testing is crucial for uncovering software defects and ensuring software reliability. Symbolic execution has been utilized for automatic test generation to improve testing effectiveness. However, existing test generation techniques based on symbolic execution fail to take full advantage of programs’ rich amount of documentation specifying their input constraints, which can further enhance the effectiveness of test generation. In this paper we present a general approach, Document-Assisted Symbolic Execution (DASE), to improve automated test generation and bug detection. DASE leverages natural language processing techniques and heuristics to analyze programs’ readily available documentation and extract input constraints. The input constraints are then used as pruning criteria; inputs far from being valid are trimmed off. In this way, DASE guides symbolic execution to focus on those inputs that are semantically more important. We evaluated DASE on 88 programs from 5 mature real-world software suites: GNU Coreutils, GNU findutils, GNU grep, GNU Binutils, and elftoolchain. Compared to symbolic execution without input constraints, DASE increases line coverage, branch coverage, and call coverage by 5.27–22.10%, 5.83–21.25% and 2.81–21.43% respectively. In addition, DASE detected 13 previously unknown bugs, 6 of which have already been confirmed by the developers.
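The documentation-derived constraints can be pictured roughly as follows: harvest the documented command-line options from a program's help text and use them to prune implausible inputs. This Python sketch is only a schematic illustration; DASE itself drives symbolic execution and uses richer NLP than a single regular expression, and the help text below is invented.

import re

# Invented --help excerpt; DASE itself parses manual pages and option tables
# of Coreutils-style programs.
help_text = """
Usage: mytool [OPTION]... FILE
  -a, --all            include hidden entries
  -n, --number=COUNT   limit output to COUNT lines
      --verbose        explain what is being done
"""

def extract_options(text):
    """Collect the documented short and long options as a crude input constraint."""
    options = set()
    for line in text.splitlines():
        options.update(re.findall(r"(?<!\S)(--?[A-Za-z][\w-]*)", line))
    return options

valid_options = extract_options(help_text)

def is_plausible(arg):
    """Prune candidate inputs whose option part is not documented."""
    return not arg.startswith("-") or arg.split("=")[0] in valid_options

print(valid_options)                                          # the five documented options (set order may vary)
print(is_plausible("--number=5"), is_plausible("--bogus"))    # True False

Filtering out undocumented options in this way concentrates the test generator's effort on inputs the program will actually parse past its argument-handling code, which is the intuition behind DASE's coverage gains.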
APA, Harvard, Vancouver, ISO, and other styles
45

Arends, Danille. "Evaluating the structural equivalence of the English and isiXhosa versions of the Woodcock Munoz Language Survey on matched sample groups." Thesis, 2009. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_8768_1360926289.

Full text
Abstract:

The diversity embodied in South Africa has emphasized the importance and influence of language in education, and thus the additive bilingual programme is being implemented in the Eastern Cape by the ABLE project in order to realize the South African Language in Education Policy (LEiP). In accordance with this, the Woodcock Munoz Language Survey (which specializes in measuring cognitive academic language proficiency) was chosen as one of the instruments to evaluate the language outcomes of the programme and was adapted into South African English and isiXhosa. The current study was a subset of the ABLE project, located within the bigger project dealing with the translation of the WMLS into isiXhosa and the subsequent research on the equivalence of the two language versions. This study evaluated the structural equivalence of the English and isiXhosa versions of the WMLS on matched sample groups (n = 150 in each language group). Secondary data analysis (SDA) was conducted by analyzing the data in SPSS as well as CEFA (Comprehensive Exploratory Factor Analysis). The original data set was purposively sampled according to set selection criteria and consisted of English and isiXhosa first-language learners. The study sought to confirm previous research by cross-validating the results of structural equivalence on two subscales, namely the Verbal Analogies (VA) and Letter-Word Identification (LWI) subscales. The research design reflects psychometric test theory and is therefore located in a bias and equivalence theoretical framework. The exploratory factor analysis showed that structural equivalence can be accepted only for the first factor identified in the VA subscale, while structural equivalence was found for the factor in the LWI subscale. Scatter-plots used to validate the exploratory factor analysis indicated that these results can be tentatively accepted. The study thus contributed to the literature on the translation of the WMLS and the adaptation of language tests into the indigenous languages of South Africa, as well as to additive bilingual programmes.
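Structural equivalence in this framework is usually judged by comparing factor loadings across the two language groups, for instance with Tucker's congruence coefficient. The Python sketch below uses invented loadings purely to illustrate that comparison; the study itself ran its exploratory factor analyses in SPSS and CEFA.

import math

# Invented factor loadings for the same items in the two language versions.
loadings_english  = [0.71, 0.65, 0.58, 0.62, 0.55]
loadings_isixhosa = [0.68, 0.60, 0.61, 0.57, 0.50]

def tucker_congruence(x, y):
    """Tucker's phi; values of roughly .95 or higher are commonly read as factor equivalence."""
    numerator = sum(a * b for a, b in zip(x, y))
    denominator = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return numerator / denominator

print(round(tucker_congruence(loadings_english, loadings_isixhosa), 3))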

APA, Harvard, Vancouver, ISO, and other styles
46

Erturk, Gamze. "The influence of interpersonal behaviors and social categories on language use in virtual teams." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-5627.

Full text
Abstract:
As an increasing number of organizations use virtual teams, communication scholars have started to pay more attention to these relatively new forms of work. Past studies explored interpersonal dynamics (e.g., trust, attraction) and group dynamics (e.g., conformity, subgrouping) in virtual teams. Despite the documented effects of interpersonal behaviors and social categories on virtual group dynamics, there is a substantial gap in our understanding of how these two factors influence language use in virtual teams. To shed light on this neglected area of research, this dissertation examined how teammates' interpersonal behaviors and social categories affected language use in virtual team collaborations. A total of 164 participants interacted in four-person teams using a synchronous chat program. The age of participants ranged from 18 to 24; 58% of participants were female and 42% were male. Participants used Windows Live Messenger to complete Straus and McGrath's (1994) decision-making task. Upon completing the task, participants filled out social attraction and social identification scales to be used for manipulation checks. Decision-making sessions for each group were saved, and the Linguistic Inquiry and Word Count program (LIWC) was used to examine language use. Linguistic style accommodation was measured using the language style matching (LSM) metric, which measures the degree to which group members use similar language patterns; it was calculated by averaging the absolute difference scores for nine function-word categories generated by LIWC. Similarly, linguistic markers such as word counts, negations, assents, and pronouns were acquired through the LIWC output. The results suggested that having a dissenting member in the group was associated with higher linguistic style accommodation than having an assenting member. This result contradicted the assumptions of communication accommodation theory (Giles, Mulac, Bradac, & Johnson, 1987), yet provided evidence for the validity of minority influence theory (Moscovici, Lage, & Naffrechoux, 1969) in virtual teams. Unexpectedly, there was no significant effect of social categories on linguistic style accommodation. The results also showed that negative behaviors were strongly associated with increased word counts, negations, and second-person singular pronouns, whereas positive behaviors were associated with increased use of assents, tentative language, and first-person plural and singular pronouns.
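A common formulation of LSM scores each function-word category as one minus the normalized absolute difference between two speakers' usage percentages, then averages across categories. The Python sketch below uses invented LIWC-style percentages and follows that common formulation rather than reproducing the dissertation's exact computation.

# Invented LIWC-style percentages for two speakers across nine function-word categories.
speaker_a = {"personal_pronouns": 12.1, "impersonal_pronouns": 5.4, "articles": 6.8,
             "prepositions": 13.5, "auxiliary_verbs": 9.2, "adverbs": 4.9,
             "conjunctions": 6.0, "negations": 1.1, "quantifiers": 2.3}
speaker_b = {"personal_pronouns": 10.4, "impersonal_pronouns": 4.8, "articles": 7.5,
             "prepositions": 12.2, "auxiliary_verbs": 8.1, "adverbs": 5.7,
             "conjunctions": 5.2, "negations": 1.6, "quantifiers": 2.0}

def lsm(a, b):
    """Average category-level matching: 1 - |a - b| / (a + b + small constant)."""
    scores = [1 - abs(a[cat] - b[cat]) / (a[cat] + b[cat] + 0.0001) for cat in a]
    return sum(scores) / len(scores)

print(round(lsm(speaker_a, speaker_b), 3))   # closer to 1 means stronger style matching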
APA, Harvard, Vancouver, ISO, and other styles
47

WANG, YA-LING, and 王雅玲. "A Comparative Analysis of Subject Consciousness in the Texts of First- and Second-Grade Language Teaching Materials: Taiwan's Kang Xuan 'National Language' and China's People's Education Press 'Language' Editions as Samples." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/04480775307143017932.

Full text
Abstract:
Master's thesis
National Hsinchu University of Education
Department of Languages and Literature, master's program
Academic year 95 (2006-2007)
In both Taiwan and China, language teaching materials are compiled under a "one curriculum outline, many textbook versions" policy, and the content of language textbooks has tended to become more diverse and pluralistic. This study takes the elementary language textbooks published by Kang Xuan in Taiwan and by the People's Education Press in mainland China as its main objects of study. It analyzes and compares the subject consciousness expressed in the text content, covering home and family education, political consciousness education, science and environmental education, gender equality education, moral education, aesthetic education, multicultural education, and so on. Using both qualitative and quantitative content analysis, it examines in depth the similarities and differences in how the two sides of the Taiwan Strait compile the text content of their language teaching materials.   Reliability and validity were established using standard reliability measures from content analysis and content-related validity. The goals of the research are to examine the curriculum and teaching goals of the first- and second-grade teaching materials; to analyze which types of subject consciousness each lesson contains and how many lessons reflect each type; to determine what percentage of the total teaching material for each grade these units occupy; and to explore how each kind of subject consciousness is presented and ordered. The study then discusses the similarities and differences between the Taiwanese and Chinese editions in how subject consciousness is incorporated into the language teaching materials, in order to provide a reference for content selection in future textbook compilation and to promote the sound, integrated development of language education.
APA, Harvard, Vancouver, ISO, and other styles
48

"Analysis and Decision-Making with Social Media." Doctoral diss., 2019. http://hdl.handle.net/2286/R.I.54830.

Full text
Abstract:
The rapid advancements of technology have greatly extended the ubiquitous nature of smartphones acting as a gateway to numerous social media applications. This brings an immense convenience to the users of these applications wishing to stay connected to other individuals by sharing their statuses and posting their opinions, experiences, suggestions, etc., on online social networks (OSNs). Exploring and analyzing this data has great potential to enable deep and fine-grained insights into the behavior, emotions, and language of individuals in a society. This dissertation focuses on utilizing these online social footprints to pursue two main threads: 1) Analysis, studying the behavior of individuals online (content analysis), and 2) Synthesis, building models that influence the behavior of individuals offline (incomplete action models for decision-making). A large percentage of posts shared online are in an unrestricted natural language format that is meant for human consumption. One of the demanding problems in this context is to develop approaches that automatically extract important insights from this incessant, massive data pool. Efforts in this direction emphasize mining or extracting the wealth of latent information in the data from multiple OSNs independently. The first thread of this dissertation focuses on analytics to investigate the differentiated content-sharing behavior of individuals. The second thread attempts to build decision-making systems using social media data. The results emphasize the importance of considering multiple data types when interpreting the content shared on OSNs. They highlight the unique ways in which the data, and the patterns extracted from text-based and visual-based platforms, complement and contrast each other in terms of content. The research demonstrates that, in many ways, results obtained by focusing on either only the textual or only the visual elements of content shared online can lead to biased insights. It also shows the power of sequential sets of patterns with precedence relationships, and of collaboration between humans and automated planners.
Dissertation/Thesis
Doctoral Dissertation Computer Science 2019
APA, Harvard, Vancouver, ISO, and other styles
