
Dissertations / Theses on the topic 'Slosson Intelligence Test – Validity'


Consult the top 17 dissertations / theses for your research on the topic 'Slosson Intelligence Test – Validity.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Hernandez, Colleen H. (Colleen Head). "Comparability of WPPSI-R and Slosson Tests as a Function of the Child's Ethnicity." Thesis, University of North Texas, 1989. https://digital.library.unt.edu/ark:/67531/metadc501229/.

Full text
Abstract:
The purpose of this study was two-fold. First, it compared children's performance on the WPPSI-R with their performance on the Slosson Intelligence Test. Second, it explored the comparability of minority and non-minority students' scores on the WPPSI-R. Seventy-five children between 3 and 7 years of age were administered the WPPSI-R and Slosson. Of this sample, 25 children were White, 25 were Black, and 25 were Mexican American. Low but significant correlations were found between WPPSI-R and Slosson scores. The Vocabulary subscale of the WPPSI-R correlated most highly with the Slosson test scores, while the Geometric Design subscale correlated lowest. Further analyses indicated that White children obtained significantly higher scores on the WPPSI-R than both Black and Mexican American children.
APA, Harvard, Vancouver, ISO, and other styles
2

Gard, Barbara Kathleen. "Analysis of item characteristics of the Slosson Intelligence Test for British Columbia school children." Thesis, University of British Columbia, 1986. http://hdl.handle.net/2429/26474.

Full text
Abstract:
This study investigated item characteristics which may affect the validity of the Slosson Intelligence Test (SIT) when used with school children in British Columbia. The SIT was developed as a quick, easily administered individual measure of intelligence, designed to correlate highly with the Stanford-Binet Intelligence Scale as its anchor test. Use of the SIT has become widespread, but little technical information is available to support this use. To examine the internal psychometric properties of the SIT for British Columbia schoolchildren, SIT responses were collected from 319 children (163 males, 156 females) in three age groups (7 1/2, 9 1/2, and 11 1/2 years). These data were subjected to a variety of item analysis procedures. Indices were produced for item difficulty, item discrimination (item-total test score correlations), rank correlation between empirically determined item difficulties and item order given in the test, test homogeneity, and item-pair homogeneity. Results of the item analyses suggest that the SIT does not function appropriately when used with British Columbia school children. Two-thirds of the item difficulty indices were found to be outside the desired range; one-third of the items did not discriminate effectively; and many items are not in correct order of difficulty in administration of the SIT. The thesis discusses effects of these findings on the test's internal consistency, criterion validity, and technical utilization. Factors which may underlie the shift in item difficulties are also discussed.
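The item-analysis indices this abstract describes are straightforward to compute. The sketch below is purely illustrative (synthetic responses, not the thesis's SIT data): item difficulty as the proportion answering correctly, discrimination as the corrected item-total correlation, and a Spearman rank correlation between empirical difficulty and administered item order.

```python
# Illustrative sketch, not the study's code or data: classical item-analysis
# indices on a synthetic 0/1 response matrix.
import numpy as np

rng = np.random.default_rng(0)
n_examinees, n_items = 319, 20          # examinee count mirrors the study; item count is arbitrary
ability = rng.normal(size=(n_examinees, 1))
item_easiness = np.linspace(1.5, -1.5, n_items)   # items get harder, on average, with position
responses = (ability + item_easiness + rng.normal(size=(n_examinees, n_items)) > 0).astype(int)

# Item difficulty: proportion of examinees answering correctly (higher = easier).
difficulty = responses.mean(axis=0)

# Discrimination: correlation of each item with the total score excluding that item.
totals = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
    for j in range(n_items)
])

# Spearman rank correlation between empirical hardness and administration order:
# if items are ordered correctly, harder items come later and this is near +1.
def spearman(x, y):
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

order_agreement = spearman(-difficulty, np.arange(n_items))
print(difficulty.round(2), discrimination.round(2), round(order_agreement, 2))
```

With these indices in hand, the study's criteria (difficulty within a desired range, adequate discrimination, monotone difficulty order) can be checked item by item.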
Education, Faculty of
Educational and Counselling Psychology, and Special Education (ECPS), Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
3

Church, Rex W. "An investigation of the value of the Peabody picture vocabulary test-revised and the Slosson intelligence test as screening instruments for the fourth edition of the Stanford-Binet intelligence scale." Virtual Press, 1986. http://liblink.bsu.edu/uhtbin/catkey/467365.

Full text
Abstract:
The Peabody Picture Vocabulary Test-Revised (PPVT-R) and Slosson Intelligence Test (SIT) were designed, at least in part, to provide a quick estimate of scores which might be obtained on the Stanford-Binet Intelligence Scale, Form L-M, without requiring extensive technical training of the examiner. Both the PPVT-R and SIT are frequently used as screening instruments to identify children for possible placement in special education programs, remedial reading groups, speech and language therapy, gifted programs, or "tracks." This study investigated the value of the PPVT-R and SIT as screening instruments for the Fourth Edition Stanford-Binet. Fifty students, grades kindergarten through fifth, were randomly selected to participate in the study. All subjects were involved in regular education at least part-time. Subjects were administered the PPVT-R, SIT, and Fourth Edition Binet by a single licensed school psychologist. The administration order of the instruments was randomized. Participants were tested on consecutive school days (10) until all subjects had been administered the three instruments. Correlation coefficients were determined for the Standard Score of the PPVT-R and each Standard Age Score of the Binet (four area scores and one total test score), as well as for the SIT IQ score and each Standard Age Score of the Binet. All correlations were positive and significant beyond the p < .01 level except between the PPVT-R and Binet Quantitative Reasoning. Analyses of variance were used to determine mean differences of scores obtained on the three instruments. Significant differences (p < .05) were found between scores on the PPVT-R and Abstract/Visual Reasoning, SIT and Verbal Reasoning, SIT and Short-Term Memory, SIT and Abstract/Visual Reasoning, and SIT and Total Test Composite. Results indicated that, in general, the SIT is a better predictor of Fourth Edition Binet scores than the PPVT-R, although it frequently yielded significantly different scores.
It was concluded that neither the PPVT-R nor the SIT should be used as a substitute for more comprehensive measures of intellectual functioning, and that caution should be used when interpreting their results. Much more research is needed to clarify the diagnostic value of the Fourth Edition Stanford-Binet as a psychometric instrument.
APA, Harvard, Vancouver, ISO, and other styles
4

Parmar, Rene S. (Rene Sumangala). "Cross-Cultural Validity of the Test of Non-Verbal Intelligence." Thesis, University of North Texas, 1988. https://digital.library.unt.edu/ark:/67531/metadc332395/.

Full text
Abstract:
The purpose of this study was to investigate the extent to which a non-verbal test of intelligence, the Test of Non-Verbal Intelligence (TONI), may be used for assessing intellectual abilities of children in India. This investigation is considered important since current instruments used in India were developed several years ago and do not adequately reflect present standards of performance. Further, current instruments do not demonstrate adequate validity, as procedures for development and cultural transport frequently did not adhere to recommended guidelines for such practice. Data were collected from 91 normally achieving and 18 mentally retarded Indian children, currently enrolled in elementary schools. Data from an American comparison group were procured from the authors of the TONI. Subjects were matched on age, grade, and area of residence, and were from comparable socioeconomic backgrounds. The literature review covers the theoretical framework supporting cross-cultural measurement of intellectual ability, major instruments developed for cross-cultural use, non-verbal measures of intellectual ability in India, and issues in cross-cultural research, along with recommended methodology for test transport. Major findings are: (a) the factor scales derived from the Indian and American normally achieving groups indicate significant differences; (b) items 1, 3, 5, 8, 10, and 22 are biased against the Indian group, though overall item characteristic curves are not significantly different; (c) mean raw scores on the TONI are significantly different between second and third grade Indian subjects; and (d) mean TONI Quotients are significantly different between normally achieving and mentally retarded Indian subjects. It is evident that deletion of biased items and rescaling would be necessary for the TONI to be valid in the Indian context.
However, because it does discriminate between subjects at different levels of ability, adaptation for use in India is justified. It may prove to be a more current and parsimonious method of assessing intellectual abilities in Indian children than instruments presently in use.
APA, Harvard, Vancouver, ISO, and other styles
5

Richardson, Erin. "Reliability and Validity of the Universal Nonverbal Intelligence Test for Children with Hearing Impairments." TopSCHOLAR®, 1995. http://digitalcommons.wku.edu/theses/921.

Full text
Abstract:
This researcher investigated the reliability and validity of the Universal Nonverbal Intelligence Test (UNIT) for a hearing-impaired population. The subjects consisted of 15 hearing-impaired children between the ages of five and eight who were enrolled in special education programs for the hearing impaired. Three-week test-retest reliability coefficients were moderate to high for all subtests (.65 to .89) and high for all scales and the total score (.88 to .96). Intercorrelations support the structure of the UNIT in that subtests demonstrated high correlations with the scale they were purported to represent. Concurrent validity was assessed with the Naglieri Draw-A-Person (DAP) during the first testing session. The UNIT and the DAP demonstrated correlations within the moderate to high range (.60 to .77) between the scales and total score of the UNIT and the three drawings and the total of the DAP. Results are discussed relevant to other measures utilized with hearing-impaired populations. The most important implication is that the UNIT appears to be a promising instrument for assessing intellectual abilities in children with hearing impairments.
APA, Harvard, Vancouver, ISO, and other styles
6

Morgan, Kimberly E. "The validity of intelligence tests using the Cattell-Horn-Carroll model of intelligence with a preschool population." Virtual Press, 2008. http://liblink.bsu.edu/uhtbin/catkey/1389688.

Full text
Abstract:
Individual differences in human intellectual abilities and the measurement of those differences have been of great interest to the field of school psychology. As such, different theoretical perspectives and corresponding test batteries have evolved over the years as a way to explain and measure these abilities. A growing interest in the field of school psychology has been to use more than one intelligence test in a "cross-battery" assessment in hopes of measuring a wider range (or a more in-depth but selective range) of cognitive abilities. Additionally, interest in assessing intelligence began to focus on preschool-aged children because of initiatives to intervene early with at-risk children. The purpose of this study was to examine the Stanford-Binet Intelligence Scales, Fifth Edition (SB-V) and Kaufman Assessment Battery for Children, Second Edition (KABC-II) in relation to the Cattell-Horn-Carroll (CHC) theory of intelligence using a population of 200 preschool children. Confirmatory factor analyses (CFAs) were conducted with these two tests individually as well as in conjunction with one another. Different variations of the CHC model were examined to determine which provided the best representation of the underlying CHC constructs measured by these tests. Results of the CFAs with the SB-V revealed that it was best interpreted from a two-stratum model, whereas results with the KABC-II indicated that the three-stratum CHC model was the best overall design. Finally, results from the joint CFA did not provide support for a cross-battery assessment with these two particular tests.
Department of Educational Psychology
APA, Harvard, Vancouver, ISO, and other styles
7

Gambrell, James Lamar. "Effects of age and schooling on 22 ability and achievement tests." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/2498.

Full text
Abstract:
Although much educational research has investigated the relative effectiveness of different educational interventions and policies, little is known about the absolute net benefits of K-12 schooling independent of growth due to chronological age and out-of-school experience. The nearly universal policy of age tracking in schools makes this a difficult topic to investigate. However, a quasi-experimental regression discontinuity design can be used to separate observed test score differences between grades into independent age and schooling components, yielding an estimate of the net effects of school exposure at each grade level. In this study, a multilevel version of this design was applied to scores on 22 common ability and achievement tests from two major standardized test batteries. The ability battery contained 9 measures of Verbal, Quantitative, and Figural reasoning. The achievement battery contained 13 measures in the areas of Language, Mathematics, Reading, Social Studies, Science, and Sources of Information. The analysis was based on a sample of over 20,000 students selected from a longitudinal database collected by a large U.S. parochial school system. The theory of fluid (Gf) and crystallized (Gc) intelligence predicts that these tests will show systematically different levels of sensitivity to schooling. Indeed, the achievement (Gc) tests were found to be three times more sensitive to schooling than they were to aging (one-year effect sizes of .41 versus .15), whereas the ability (Gf) tests were equally influenced by age (.18) and schooling (.19). Nonetheless, the schooling effect on most Gf tests was substantial, especially when the compounding over a typical school career is considered. This replicates the results of previous investigations of age and schooling using regression discontinuity methods and once again contradicts common interpretations of fluid ability. Different measures of a construct often exhibited varying levels of school sensitivity.
Those tests that were less sensitive to schooling generally required reading, reasoning, transfer, synthesis, or translation; posed a wider range of questions; and/or presented problems in an unfamiliar format. Quantitative reasoning tests showed more sensitivity to schooling than figural reasoning tests, while verbal reasoning tests occupied a middle ground between the two. Schooling had the most impact on basic arithmetic skills and mathematical concepts, and a significantly weaker impact on the solution of math word problems. School-related gains on isolated language skills were much larger than gains on solving grammar problems in context. The weakest schooling impact overall was on reading comprehension where effects were no larger than those on verbal ability measures. An interesting dichotomy was found between spelling and paper folding (a measure of figural and spatial reasoning). Spelling skills showed robust schooling effects but a consistently negative age slope, a puzzling result which indicates that younger students in each group outperformed older students. Paper folding showed the opposite pattern, a large age effect and a small but consistently negative schooling effect. Results serve to rebut skepticism about both the impact of schooling on test scores and the validity of distinctions between ability and achievement. It is argued that the regression discontinuity design has great potential in the measurement of school effectiveness, while also offering a source of validity evidence for test developers and test users. Implications for theories of cognitive ability and future research on schooling effects are discussed.
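The identification idea behind this design can be seen in a small simulation. This is a hedged sketch with synthetic data, not the dissertation's dataset or model: a school-entry cutoff makes years of schooling a step function of age, so age varies continuously within a grade while schooling jumps by one year at the cutoff, and a regression on both terms can recover each effect separately.

```python
# Simplified simulation (assumed parameters, synthetic data) of separating
# age and schooling effects via a school-entry cutoff.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
age = rng.uniform(6.0, 12.0, n)            # age in years at testing
# Entry cutoff: children start school after turning 6, so completed years of
# schooling is a step function of age.
grade = np.floor(age - 6.0) + 1
# True effects chosen to echo the one-year effect sizes quoted above.
true_age_effect, true_school_effect = 0.15, 0.41
score = true_age_effect * age + true_school_effect * grade + rng.normal(0, 0.5, n)

# OLS of score on [1, age, grade]: within-grade age variation identifies the
# age effect; the between-grade jump identifies the schooling effect.
X = np.column_stack([np.ones(n), age, grade])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"age effect ~ {coef[1]:.2f}, schooling effect ~ {coef[2]:.2f}")
```

With enough students per grade, the estimates land close to the generating values, which is the sense in which the design "separates" the two otherwise confounded components.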
APA, Harvard, Vancouver, ISO, and other styles
8

MacCann, Carolyn Elizabeth. "New approaches to measuring emotional intelligence." University of Sydney, 2006. http://hdl.handle.net/2123/934.

Full text
Abstract:
Doctor of Philosophy (PhD)
New scoring and test construction methods for emotional intelligence (EI) are suggested as alternatives to current practice, where most tests are scored by group judgment and are in ratings-based format. Both the ratings-based format and the proportion-based scores resulting from group judgments may act as method effects, obscuring relationships between EI tests, and between EI and intelligence. In addition, scoring based on standards rather than group judgments adds clarity to the meaning of test scores. For these reasons, two new measures of emotional intelligence (EI) are constructed: (1) the Situational Test of Emotional Understanding (STEU); and (2) the Situational Test of Emotion Management (STEM). Following test construction, validity evidence is collected from four multi-variate studies. The STEU's items and a standards-based scoring system are developed according to empirically derived appraisal theory concerning the structure of emotion (Roseman, 2001). The STEM is developed as a Situational Judgment Test (SJT) with situations representing sadness, fear and anger in work life and personal life settings. Two qualitative studies form the basis for the STEM's item development: (1) content analysis of responses to semi-structured interviews with 31 psychology undergraduates and 19 community volunteers; and (2) content analysis of free responses to targeted vignettes created from these semi-structured interviews (N = 99). The STEM may be scored according to two expert panels of emotions researchers, psychologists, therapists and life coaches (N = 12 and N = 6). In the first multi-variate study (N = 207 psychology undergraduates), both STEU and STEM scores relate strongly to vocabulary test scores and moderately to Agreeableness but no other dimension from the five-factor model of personality. STEU scores predict psychology grade and an emotionally-oriented thinking style after controlling for vocabulary and personality test scores (ΔR2 = .08 and .06 respectively).
STEM scores did not predict academic achievement but did predict emotionally-oriented thinking and life satisfaction (ΔR2 = .07 and .05 for emotionally-oriented thinking and .04 for life satisfaction). In the second multi-variate study, STEU scores predict lower levels of state anxiety, and STEM scores predict lower levels of state anxiety, depression, and stress among 149 community volunteers from Sydney, Australia. In the third multi-variate study (N = 181 psychology undergraduates), Strategic EI, fluid intelligence (Gf) and crystallized intelligence (Gc) were each measured with three indicators, allowing these constructs to be assessed at the latent variable level. Nested structural equation models show that Strategic EI and Gc form separate latent factors (Δχ2(1) = 12.44, p < .001). However, these factors relate very strongly (r = .73), indicating that Strategic EI may be a primary mental ability underlying Gc. In this study, STEM scores relate to emotionally-oriented thinking but not loneliness, life satisfaction or state stress, and STEU scores do not relate to any of these. STEM scores are significantly and meaningfully higher for females (d = .80), irrespective of gender differences in verbal ability or personality, or whether expert scores are derived from male or female experts. The fourth multi-variate study (N = 118 psychology undergraduates) distinguishes an EI latent factor (indicated by scores on the STEU, STEM and two emotion recognition ability measures) from a general cognitive ability factor (indicated by three intelligence measures; Δχ2(1) = 10.49, p < .001), although again cognitive ability and EI factors were strongly related (r = .66). Again, STEM scores were significantly higher for females (d = .44) and both STEU and STEM relate to Agreeableness but not to any other dimension from the five-factor model of personality. 
Taken together, results suggest that: (1) STEU and STEM scores are reasonably reliable and valid tests of EI; (2) EI tests assess slightly different constructs to existing measures of Gc, but more likely form a new primary mental ability within Gc than an entirely separate construct; and (3) the female superiority for EI tests may prove useful for addressing adverse impact in applied settings (e.g., selection for employment, promotion or educational opportunities), particularly given that many current assessment tools result in a male advantage.
APA, Harvard, Vancouver, ISO, and other styles
9

Powers, Abigail Dormire. "The fourth edition of the Stanford-Binet intelligence scale and the Woodcock-Johnson tests of achievement : a criterion validity study." Virtual Press, 1988. http://liblink.bsu.edu/uhtbin/catkey/558350.

Full text
Abstract:
The purpose of the study was to investigate the validity of the Stanford-Binet Intelligence Scale: Fourth Edition (SB:FE) area and composite scores and Sattler's SB:FE factor scores as predictors of school performance on the Woodcock-Johnson Tests of Achievement (WJTA). The subjects were 80 Caucasian third grade students enrolled in regular education in a rural and small town school district in northeastern Indiana. The SB:FE and WJTA were administered to all students. Two canonical analyses were conducted to test the overall relationships between sets of SB:FE predictor variables and the set of WJTA criterion variables. Results indicated that the SB:FE area scores and Sattler's SB:FE factor scores were valid predictors of academic achievement at a general level. To clarify the results of the canonical analyses, a series of multiple regression analyses was conducted. Results of multiple regression with SB:FE area and composite scores indicated that the best single predictor of all WJTA scores was the SB:FE Test Composite Score. No other SB:FE variable provided a significant contribution to the regression equation for reading, math, and written language achievement over that offered by the Test Composite Score. Multiple regression analyses were also employed with Sattler's SB:FE factor scores and the WJTA scores. The optimal predictor composite for reading included the Verbal Comprehension and Memory factor scores. To predict math, the best predictor composite consisted of the Nonverbal Reasoning/Visualization and Verbal Comprehension factor scores.
The optimal predictor composite for written language included the Nonverbal Reasoning/Visualization and Memory factor scores. Results of the regression analyses indicated that, without exception, the predictor composites composed of the SB:FE area and composite scores were superior in their prediction of school performance to the predictor composites developed from Sattler's SB:FE factor scores. The regression equation containing the SB:FE Test Composite Score alone was determined to be the preferred approach for predicting WJTA scores. Use of the Test Composite Score sacrifices only a minimal degree of accuracy in the prediction of achievement and requires no additional effort to compute.
Department of Educational Psychology
APA, Harvard, Vancouver, ISO, and other styles
10

Meis, Shalena R. "Incremental validity of WISC-IV factor scores in predicting academic achievement on the WIAT-II /." View online, 2009. http://repository.eiu.edu/theses/docs/32211131559271.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Chacon, Vanessa. "The Effect of the Cut Off Rules of the Bateria Woodcock-Munoz Pruebas de Habilidad Cognitiva-Revisada on the Identification and Placement of Monolingual and Bilingual Spanish Speaking Students in Special Education: A Cross-cultural Study." Diss., The University of Arizona, 2007. http://hdl.handle.net/10150/195431.

Full text
Abstract:
This study was designed to investigate whether the Batería Woodcock-Muñoz: Pruebas de Habilidad Cognitiva-Revisada is a valid cross-cultural tool for measuring the cognitive ability of students from three Spanish-speaking groups in two different Spanish-speaking countries. One group is represented by culturally diverse bilingual Spanish-dominant students in Tucson, Arizona, since there is an overrepresentation of bilingual students receiving special education services in all school districts in this area. The second group consists of monolingual Spanish speakers from Costa Rica referred for special education. The third group consists of monolingual Spanish speakers from Costa Rica performing at grade level. This research analyzed whether Memory for Sentences, a sub-test of Short-Term Memory, and the Visual Integration and Picture Recognition sub-tests of Visual Processing in the psycho-educational Batería Woodcock-Muñoz are more difficult for the special education Spanish/bilingual population in Tucson than for the monolingual Spanish-speaking special education and grade-level individuals in Costa Rica. Item p-value differences were estimated and compared for all items in each subtest to detect whether a major difference in item difficulty order existed between Spanish-speaking groups that could be indicative of internal criteria of test bias. Results show that the item order of difficulty affects the test's established cut-off rules for both Costa Rican populations on the Memory for Sentences test, making it invalid for these populations, and that the Tucson sample group's performance is lower than that of both Costa Rican groups.
In addition, both Visual Processing subtests are invalid for all groups compared, since the item order of difficulty does not match the test item order, thus affecting the enforcement of the cut-off rules. Standardized assessments, and the intelligence trait itself, are considered the products of mathematical and statistical expressions built on test developers' own cultural views and minds, following the lines of the traditional reductionist assessment or scientific/medical models. As a result, it is concluded that bilingual populations will be at a disadvantage, because standardized assessment neither links assessment to familiar language, culturally relevant information, and experiences nor considers how the bilingual mind processes information.
APA, Harvard, Vancouver, ISO, and other styles
12

Villarreal, Carlo Arlan. "An analysis of the reliability and validity of the Naglieri Nonverbal Ability Test (NNAT) with English language Learner (ELL) Mexican American children." Texas A&M University, 2003. http://hdl.handle.net/1969.1/3850.

Full text
Abstract:
The purpose of this study was to investigate the reliability and validity of the results of the Naglieri Nonverbal Ability Test (NNAT; Naglieri, 1997a) with a sample of English Language Learner (ELL) Mexican American children and to compare the performance on the NNAT of 122 ELL Mexican American children with children from the standardization sample. The rationale for conducting this study was the need to identify culturally sensitive and technically adequate nonverbal measures of ability for the fastest growing minority group within America’s public schools today, Mexican American children. The NNAT was administered to participants with parental consent. Statistical analyses of the scores did yield positive evidence of internal consistency for the Nonverbal Ability Index (NAI) total score of the NNAT. However, when individual clusters were analyzed, Pattern Completion, Reasoning by Analogy, and Serial Reasoning did not yield positive evidence of internal consistency. Only Spatial Visualization approached the reliability standard deemed acceptable for tests of cognitive ability. The mean differences of the NNAT scores between two independent groups were also assessed in the present study. Results of the statistical analyses did not yield statistically significant differences across age and grade factors between the scores of the ELL Mexican American sample and the standardization sample. Finally, the proposed factor structure of the NNAT was compared with the factor structure found with the ELL Mexican American sample. Goodness-of-fit test statistics indicate that the proposed four-factor structure does not fit well with the data obtained from this sample of ELL Mexican American students. 
Furthermore, although the NNAT is considered to be a unidimensional test of general ability, nine factors were extracted upon analysis, providing evidence that the items on each of the four clusters do not function together as four distinct dimensions with this ELL Mexican American sample. Given that the individual clusters that collectively combine to yield the NAI total score are not based on any particular model of intelligence, interpretation of specific strengths and weaknesses should be discouraged. Finally, the NNAT’s overall score should be interpreted with caution and may best be used in conjunction with multidimensional ability and/or intelligence measures.
APA, Harvard, Vancouver, ISO, and other styles
13

Bliss, Stacy L. "Concurrent and Predictive Validity of the Universal Nonverbal Intelligence Test-Group Ability Test." 2008. http://trace.tennessee.edu/utk_graddiss/412.

Full text
Abstract:
In order to determine the concurrent and predictive validity of the Universal Nonverbal Intelligence Test-Group Ability Test (UNIT-GAT; McCallum & Bracken, in press), the UNIT-GAT and the Naglieri Nonverbal Ability Test (NNAT; Naglieri, 1997a) were administered in counter-balanced order to 93 students. In addition, 40 students were rated on the Universal Nonverbal Intelligence - Gifted Screening Scales (UNIT-GSS; McCallum & Bracken, in press). The correlation coefficient of r = .36 between the UNIT-GAT total raw score and the NNAT was statistically significant at the p < .01 level. The UNIT-GAT scale score correlations with the NNAT total ranged from r = .18 for the Symbolic Scale to r = .53 (p < .01) for the Nonsymbolic Scale. The UNIT-GAT total raw score correlations with the UNIT-GSS composite and scales ranged from r = -.06 for both the Emotional and Science scales to r = .19 for the Creative Scale. None of these correlations were statistically significant. The correlations between the scales of the UNIT-GAT and composites of the UNIT-GSS ranged from r = -.05 (UNIT-GAT Memory Scale and UNIT-GSS General Aptitudes Composite) to r = .20 (UNIT-GAT Reasoning Scale and UNIT-GSS General Aptitudes Composite). Correlations between the scales of the UNIT-GAT and the scales of the UNIT-GSS ranged from r = -.30 between the UNIT-GAT Memory Scale and UNIT-GSS Emotional Scale to r = .25 between the UNIT-GAT Nonsymbolic Scale and UNIT-GSS Creative Scale. Stepwise multiple regression analysis did not reveal any significant utility of the UNIT-GAT total raw score or the NNAT total raw score for predicting teacher ratings on the UNIT-GSS General Aptitude and Specific Academic Aptitude Composites. Implications and future directions for research are discussed.
APA, Harvard, Vancouver, ISO, and other styles
14

Van, Staden Jakobus. "The difference between psychology and engineering students on emotional intelligence : a study into the construct validity of emotional intelligence." Diss., 2001. http://hdl.handle.net/10500/1116.

Full text
Abstract:
The criterion-groups validity of emotional intelligence according to Mayer and Salovey's (1997) ability model was investigated. Specifically, psychology (n = 207) and engineering (n = 195) students were compared on the Mayer, Salovey and Caruso Emotional Intelligence Test, version 2 (MSCEIT). The primary factor structure of the MSCEIT was found to be valid, with some revisions needed in terms of the reliability and content of the MSCEIT. The second-order factor structure of the MSCEIT was partially confirmed. In terms of criterion-groups validity, psychology students were found to exhibit higher levels of the ability to manage emotions in relationships, the ability to understand emotion, and the ability to facilitate emotion. Engineering and psychology students exhibited the same level of general emotional management and the ability to accurately identify emotion. Therefore the construct validity of emotional intelligence was partially confirmed.
Psychology
M.A. (Psychology)
APA, Harvard, Vancouver, ISO, and other styles
15

Murphy, Angela. "Defining the boundaries between trait emotional intelligence and ability emotional intelligence : an assessment of the relationship between emotional intelligence and cognitive thinking styles within the occupational environment." Thesis, 2008. http://hdl.handle.net/10500/2701.

Full text
Abstract:
Emotional intelligence has attracted a considerable amount of attention over the past few years specifically with regard to the nature of the underlying construct and the reliability and validity of the psychometric tools used to measure the construct. The present study explored the reliability and validity of a trait measure of EI in relation to an ability measure in order to determine whether the tools can be considered as measuring conceptually valid constructs within an occupational environment. The study also examined the overlap with a trait measure of cognitive thinking styles to determine the potential for separating the trait and ability EI into two unique and distinguishable constructs. Participants included 308 employees from four different workforces within a diverse South African consulting firm. The results of the study identified a number of psychometric concerns regarding the structural fidelity of the instruments as well as concerns about the cultural bias evident in both measurement instruments. Evidence for the discriminant and incremental validity of the two instruments was, however, provided and recommendations are made for the reconceptualisation of trait EI as an emotional competence and ability EI as an emotional intelligence.
Psychology
D. Litt. et Phil. (Psychology)
APA, Harvard, Vancouver, ISO, and other styles
16

Mnguni, Vusumuzi Quirion. "The predictive validity of a psychological test battery for the selection of cadet pilots in a commercial airline." Thesis, 2011. http://hdl.handle.net/10500/4869.

Full text
Abstract:
Commercial airlines need to employ well-qualified pilots to run their core business, and the current supply of privately and military-trained pilots is proving inadequate. A further challenge facing the airline is having to reflect the diversity of the country in its workforce. The present study investigated the predictive validity of a psychological test battery for cadet pilots. The predictors included in the research were biographical data, ABET levels in terms of English and Matric results, as well as results from psychological tests, namely the English Literacy Skills Assessment (ELSA), Raven's Progressive Matrices (RPM), the Blox test, subtests of the Intermediate Battery (B/77), viz. Arithmetic 1 and 2 and Reading Comprehension, and the Wechsler Adult Intelligence Scale (WAIS). The objective of the research was to determine the predictive validity of the selection battery, utilizing the final flying-school results as the criterion. The results of the research were inconclusive: only some of the tests showed positive correlations with the modules of the flying-school results. Based on the regression analysis, Raven's Progressive Matrices, Blox, the Matriculation English symbol, ABET levels, and Reading Comprehension were found to have predictive power for some of the modules of the flying-school results. It is recommended that a revised profile for a commercial airline pilot be developed, and that the critical skills and competencies be identified, to enable airlines to utilize appropriate and relevant assessment tools to select prospective candidates, particularly among previously disadvantaged communities.
Industrial and Organisational Psychology
M. Com. (Industrial and Organisational Psychology)
APA, Harvard, Vancouver, ISO, and other styles
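The regression analyses reported in the abstract above estimate how well a selection test score predicts a flying-school outcome. A minimal sketch of single-predictor ordinary least squares, using hypothetical test scores and marks (none of these numbers come from the dissertation):

```python
def ols_fit(x, y):
    """Ordinary least-squares slope and intercept for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical predictor (selection test score) and criterion (flying-school mark).
scores = [10, 12, 15, 18, 20]
marks = [55, 58, 64, 70, 74]
slope, intercept = ols_fit(scores, marks)
# A positive slope indicates predictive power; a full study would also
# test the slope's significance and report R-squared.
```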
17

Mphokane, Adelaide. "The predictive validity of learning potential and English language proficiency for work performance of candidate engineers." Diss., 2014. http://hdl.handle.net/10500/14410.

Full text
Abstract:
The aim of this research was (1) to provide empirical data on the predictive validity of learning potential and English language proficiency for work performance; (2) to establish whether race and gender influence work performance; and (3) to evaluate practical utility and propose recommendations for selection purposes. The Learning Potential Computerised Adaptive Test and the English Literacy Skills Assessment were used as measuring instruments for learning potential and English language proficiency respectively. Work-performance data were obtained from the normal performance-data system of the company where the research was conducted. ANOVA results showed differences between race and gender groupings. A regression analysis confirmed the predictive validity of learning potential and English language proficiency on work performance. The Spearman rho correlation coefficient (p < 0.05) showed a significant positive correlation between the investigated variables.
Industrial & Organisational Psychology
M. A. (Industrial & Organisational Psychology)
APA, Harvard, Vancouver, ISO, and other styles
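The Spearman rho statistic reported in the abstract above is a Pearson correlation computed on ranks. A self-contained illustrative sketch with made-up learning-potential scores and performance ratings (not the study's data):

```python
def ranks(values):
    """Average ranks (1-based), with ties sharing the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over any run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    m = (n + 1) / 2  # mean rank
    num = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    den = (sum((a - m) ** 2 for a in rx) * sum((b - m) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical learning-potential scores vs. work-performance ratings.
lp = [52, 60, 45, 70, 66]
perf = [3.1, 2.8, 3.4, 4.0, 3.9]
rho = spearman_rho(lp, perf)  # 0.6 for this toy data
```

Significance at p < 0.05, as reported in the abstract, would additionally require a test against the null distribution of rho for the study's sample size.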