Academic literature on the topic 'Slosson Intelligence Test – Validity'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Slosson Intelligence Test – Validity.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Slosson Intelligence Test – Validity"

1

Alston, Reginald J. "A Concurrent Validity Study of the APTICOM's General Intelligence Scale: A Pilot Investigation." Journal of Applied Rehabilitation Counseling 21, no. 1 (March 1, 1990): 32–34. http://dx.doi.org/10.1891/0047-2220.21.1.32.

Full text
Abstract:
The general intelligence scale of the APTICOM computer-assisted vocational evaluation system was investigated for concurrent validity, using the Slosson Intelligence Test as the criterion. Fifteen university students with disabilities served as subjects in this pilot study. It was found that the APTICOM's intelligence scale is significantly correlated with the Slosson Intelligence Test. Implications for rehabilitation research and practice are discussed.
APA, Harvard, Vancouver, ISO, and other styles
2

Prewett, Peter N., and Diane B. Fowler. "Predictive validity of the Slosson Intelligence Test with the WISC-R and the WRAT-R level 1." Psychology in the Schools 29, no. 1 (January 1992): 17–21. http://dx.doi.org/10.1002/1520-6807(199201)29:1<17::aid-pits2310290104>3.0.co;2-e.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bell, Nancy L., Marggi Rucker, A. J. Finch, and Joanne Alexander. "Concurrent validity of the Slosson full-range intelligence test: Comparison with the Wechsler intelligence scale for children–third edition and the Woodcock Johnson tests of achievement–revised." Psychology in the Schools 39, no. 1 (2002): 31–38. http://dx.doi.org/10.1002/pits.10002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Erford, Bradley T., and Donald B. Hofler. "Technical Analysis of the Slosson Written Expression Test." Psychological Reports 94, no. 3 (June 2004): 915–25. http://dx.doi.org/10.2466/pr0.94.3.915-925.

Full text
Abstract:
The Slosson Written Expression Test was designed to assess students ages 8–17 years at risk for difficulties in written expression. Scores from three independent samples were used to evaluate the test's reliability and validity for measuring students' written expression. Test-retest reliability of the SWET subscales ranged from .80 to .94 (n = 151), and .95 for the Written Expression Total Standard Scores. The median alternate-form reliability for students' Written Expression Total Standard Scores was .81 across the three forms. Scores on the Slosson test yielded concurrent validity coefficients (n = 143) of .60 with scores from the Woodcock-Johnson: Tests of Achievement–Third Edition Broad Written Language Domain and .49 with scores on the Test of Written Language–Third Edition Spontaneous Writing Quotient. Exploratory factor analytic procedures suggested the Slosson test comprises two dimensions, Writing Mechanics and Writing Maturity (47.1% and 20.1% variance accounted for, respectively). In general, the Slosson Written Expression Test presents with sufficient technical characteristics to be considered a useful written expression screening test.
APA, Harvard, Vancouver, ISO, and other styles
5

Karnes, Frances A., James E. Whorton, Billie Bob Currie, and Steven W. Cantrall. "Correlations of Scores on the WISC—R, Stanford-Binet, the Slosson Intelligence Test, and the Developing Cognitive Abilities Test for Intellectually Gifted Youth." Psychological Reports 58, no. 3 (June 1986): 887–89. http://dx.doi.org/10.2466/pr0.1986.58.3.887.

Full text
Abstract:
For a sample of 173 intellectually gifted students, percentiles from the Developing Cognitive Abilities Test were correlated with IQs from the Wechsler Intelligence Scale for Children—Revised, Stanford-Binet, and Slosson Intelligence Test—Revised. Although the coefficients of the WISC—R and Slosson with the DCAT tended to be significant, they were too low to have practical meaning and those with Stanford-Binet IQs were nonsignificant.
APA, Harvard, Vancouver, ISO, and other styles
6

Brown, Ted, and Carolyn Unsworth. "Evaluating Construct Validity of the Slosson Visual-Motor Performance Test Using the Rasch Measurement Model." Perceptual and Motor Skills 108, no. 2 (April 2009): 367–82. http://dx.doi.org/10.2466/pms.108.2.367-382.

Full text
Abstract:
The aim of this study was to evaluate the construct validity of the Slosson Visual-Motor Performance Test by applying the Rasch Measurement Model to evaluate the test's scalability, dimensionality, differential item functioning based on sex, and hierarchical ordering. Participants were 400 children ages 5 to 12 years, recruited from six schools in Melbourne, Victoria, Australia. The Slosson Visual-Motor Performance Test requires a child to copy 14 different geometric designs three times each for a total of 42 scale items. Children completed the test under the supervision of an occupational therapist. Overall, 13 of 42 of the test items exhibited poor measurement properties. As nearly one-third of the scale items were problematic, the Slosson Visual-Motor Performance Test in its current form is not recommended for clinical use.
APA, Harvard, Vancouver, ISO, and other styles
7

Williams, Thomas O., Ronald C. Eaves, Suzanne Woods-Groves, and Gina Mariano. "Stability of Scores for the Slosson Full-Range Intelligence Test." Psychological Reports 101, no. 1 (August 2007): 135–40. http://dx.doi.org/10.2466/pr0.101.1.135-140.

Full text
Abstract:
The test-retest stability of the Slosson Full-Range Intelligence Test by Algozzine, Eaves, Mann, and Vance was investigated with test scores from a sample of 103 students. With a mean interval of 13.7 mo. and different examiners for each of the two test administrations, the test-retest reliability coefficients for the Full-Range IQ, Verbal Reasoning, Abstract Reasoning, Quantitative Reasoning, and Memory were .93, .85, .80, .80, and .83, respectively. Mean differences from the test-retest scores were not statistically significantly different for any of the scales. Results suggest that Slosson scores are stable over time even when different examiners administer the test.
APA, Harvard, Vancouver, ISO, and other styles
8

Sattler, Jerome M., Dene E. Hilson, and Theron M. Covin. "Comparison of Slosson Intelligence Test—Revised Norms and Peabody Picture Vocabulary Test—Revised with Black Headstart Children." Perceptual and Motor Skills 60, no. 3 (June 1985): 705–6. http://dx.doi.org/10.2466/pms.1985.60.3.705.

Full text
Abstract:
Slosson Intelligence Test IQs (revised norms) and Peabody Picture Vocabulary Test—Revised (PPVT-R, Form L) standard scores for 100 black rural Headstart children were correlated and then compared by use of a one-way design for repeated measures. Although the correlation of .48 between the two tests was significant, Slosson IQs (M = 100.27, SD = 14.82) were significantly higher than PPVT-R scores (M = 74.80, SD = 14.23). These results suggest that the two instruments are not equivalent. There is a need for further research with these two instruments with black and with white children.
APA, Harvard, Vancouver, ISO, and other styles
9

Myers, Meyer, John C. Brantley, Lisbet Nielsen, Gary Cowan, and Cynthia Howard. "Software Review: Slosson Intelligence Test Computer Report (SIT-CR)." Journal of Psychoeducational Assessment 8, no. 4 (December 1990): 556–57. http://dx.doi.org/10.1177/073428299000800413.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Williams, Thomas O. "Stability of Scores for the Slosson Full-Range Intelligence Test." Psychological Reports 101, no. 5 (2007): 135–40. http://dx.doi.org/10.2466/pr0.101.5.135-140.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Slosson Intelligence Test – Validity"

1

Hernandez, Colleen H. (Colleen Head). "Comparability of WPPSI-R and Slosson Tests as a Function of the Child's Ethnicity." Thesis, University of North Texas, 1989. https://digital.library.unt.edu/ark:/67531/metadc501229/.

Full text
Abstract:
The purpose of this study was two-fold. First, this study compared the performance of children on the WPPSI-R with their performance on the Slosson Intelligence Test. Secondly, this study explored the comparability of minority and non-minority students' scores on the WPPSI-R. Seventy-five children between 3 and 7 years of age were administered the WPPSI-R and Slosson. Of this sample, 25 children were White, 25 children were Black, and 25 children were Mexican American. Low but significant correlations were found between WPPSI-R and Slosson scores. The Vocabulary subscale of the WPPSI-R correlated highest, while the Geometric Design subscale correlated the lowest with the Slosson test scores. Further analyses indicated that White children obtained significantly higher scores on the WPPSI-R than both Black and Mexican American children.
APA, Harvard, Vancouver, ISO, and other styles
2

Gard, Barbara Kathleen. "Analysis of item characteristics of the Slosson Intelligence Test for British Columbia school children." Thesis, University of British Columbia, 1986. http://hdl.handle.net/2429/26474.

Full text
Abstract:
This study investigated item characteristics which may affect the validity of the Slosson Intelligence Test (SIT) when used with school children in British Columbia. The SIT was developed as a quick, easily administered individual measure of intelligence intended to correlate highly with the Stanford-Binet Intelligence Scale as an anchor test. Use of the SIT has become widespread, but little technical information is available to support this. To examine the internal psychometric properties of the SIT for British Columbia schoolchildren, SIT responses were collected from 319 children (163 males, 156 females) in three age groups (7 1/2, 9 1/2, and 11 1/2 years). These data were subjected to a variety of item analysis procedures. Indices were produced for: item difficulty, item discrimination (item-total test score correlations), rank correlation between empirically determined item difficulties and item order given in the test, test homogeneity, and item-pair homogeneity. Results of the item analyses suggest that the SIT does not function appropriately when used with British Columbia school children. Two-thirds of the item difficulty indices were found to be outside the desired range; one-third of the items did not discriminate effectively; and many items are not in the correct order of difficulty in the administration of the SIT. The thesis discusses effects of these findings on the test's internal consistency, criterion validity, and technical utilization. Factors which may underlie the shift in item difficulties are also discussed.
Faculty of Education; Department of Educational and Counselling Psychology, and Special Education (ECPS); Graduate
APA, Harvard, Vancouver, ISO, and other styles
3

Church, Rex W. "An investigation of the value of the Peabody picture vocabulary test-revised and the Slosson intelligence test as screening instruments for the fourth edition of the Stanford-Binet intelligence scale." Virtual Press, 1986. http://liblink.bsu.edu/uhtbin/catkey/467365.

Full text
Abstract:
The Peabody Picture Vocabulary Test-Revised (PPVT-R) and Slosson Intelligence Test (SIT) were designed, at least in part, to provide a quick estimate of scores which might be obtained on the Stanford-Binet Intelligence Scale, Form L-M, without requiring extensive technical training by the examiner. Both the PPVT-R and SIT are frequently used as screening instruments to identify children for possible placement in special education programs, remedial reading groups, speech and language therapy, gifted programs, or "tracks." This study investigated the value of the PPVT-R and SIT as screening instruments for the Fourth Edition Stanford-Binet. Fifty students, grades kindergarten through fifth, were randomly selected to participate in the study. All subjects were involved in regular education at least part-time. Subjects were administered the PPVT-R, SIT, and Fourth Edition Binet by a single licensed school psychologist. The administration order of the instruments was randomized. Participants were tested on consecutive school days (10) until all subjects had been administered the three instruments. Correlation coefficients were determined for the Standard Score of the PPVT-R and each Standard Age Score of the Binet (four area scores and one total test score), as well as for the SIT IQ score and each Standard Age Score of the Binet. All correlations were positive and significant beyond the p < .01 level except between the PPVT-R and Binet Quantitative Reasoning. Analyses of Variance were used to determine mean differences of scores obtained on the three instruments. Significant differences (p < .05) were found between scores on the PPVT-R and Abstract/Visual Reasoning, SIT and Verbal Reasoning, SIT and Short-Term Memory, SIT and Abstract/Visual Reasoning, and SIT and Total Test Composite. Results indicated that, in general, the SIT is a better predictor of Fourth Edition Binet scores than the PPVT-R; however, it frequently yielded significantly different scores. It was concluded that neither the PPVT-R nor the SIT should be used as a substitute for more comprehensive measures of intellectual functioning, and caution should be used when interpreting their results. Much more research is needed to clarify the diagnostic value of the Fourth Edition Stanford-Binet as a psychometric instrument.
APA, Harvard, Vancouver, ISO, and other styles
4

Parmar, Rene S. (Rene Sumangala). "Cross-Cultural Validity of the Test of Non-Verbal Intelligence." Thesis, University of North Texas, 1988. https://digital.library.unt.edu/ark:/67531/metadc332395/.

Full text
Abstract:
The purpose of this study was to investigate the extent to which a non-verbal test of intelligence, the Test of Non-Verbal Intelligence (TONI), may be used for assessing intellectual abilities of children in India. This investigation is considered important since current instruments used in India were developed several years ago and do not adequately reflect present standards of performance. Further, current instruments do not demonstrate adequate validity, as procedures for development and cultural transport were frequently not in adherence to recommended guidelines for such practice. Data were collected from 91 normally achieving and 18 mentally retarded Indian children, currently enrolled in elementary schools. Data from an American comparison group were procured from the authors of the TONI. Subjects were matched on age, grade, and area of residence. Subjects were also from comparative socioeconomic backgrounds. Literature review of the theoretical framework supporting cross-cultural measurement of intellectual ability, a summary of major instruments developed for cross-cultural use, non-verbal measures of intellectual ability in India, and issues in cross-cultural research are discussed, with recommended methodology for test transport. Major findings are: (a) the factor scales derived from the Indian and American normally achieving groups indicate significant differences; (b) items 1, 3, 5, 8, 10, and 22 are biased against the Indian group, though overall item characteristic curves are not significantly different; (c) mean raw scores on the TONI are significantly different between second and third grade Indian subjects; and (d) mean TONI Quotients are significantly different between normally achieving and mentally retarded Indian subjects. It is evident that deletion of biased items and rescaling would be necessary for the TONI to be valid in the Indian context. However, because it does discriminate between subjects at different levels of ability, adaptation for use in India is justified. It may prove to be a more current and parsimonious method of assessing intellectual abilities in Indian children than instruments presently in use.
APA, Harvard, Vancouver, ISO, and other styles
5

Richardson, Erin. "Reliability and Validity of the Universal Nonverbal Intelligence Test for Children with Hearing Impairments." TopSCHOLAR®, 1995. http://digitalcommons.wku.edu/theses/921.

Full text
Abstract:
This researcher investigated the reliability and validity of the Universal Nonverbal Intelligence Test (UNIT) for a hearing-impaired population. The subjects consisted of 15 hearing-impaired children between the ages of five and eight who are enrolled in special education programs for the hearing-impaired. Three-week test-retest reliability coefficients were moderate to high for all subtests (.65 to .89) and high for all scales and the total score (.88 to .96). Intracorrelations support the structure of the UNIT in that subtests demonstrated high correlations with the scale they were purported to represent. Concurrent validity was assessed with the Naglieri Draw-A-Person (DAP) during the first testing session. The UNIT and the DAP demonstrated correlations within the moderate to high range (.60 to .77) between the scales and total score of the UNIT and the three drawings and the total of the DAP. Results are discussed relevant to other measures utilized with hearing-impaired populations. The most important implication is that the UNIT appears to be a promising instrument for assessing intellectual abilities in children with hearing impairments.
APA, Harvard, Vancouver, ISO, and other styles
6

Morgan, Kimberly E. "The validity of intelligence tests using the Cattell-Horn-Carroll model of intelligence with a preschool population." Virtual Press, 2008. http://liblink.bsu.edu/uhtbin/catkey/1389688.

Full text
Abstract:
Individual differences in human intellectual abilities and the measurement of those differences have been of great interest to the field of school psychology. As such, different theoretical perspectives and corresponding test batteries have evolved over the years as a way to explain and measure these abilities. A growing interest in the field of school psychology has been to use more than one intelligence test in a "cross-battery" assessment in hopes of measuring a wider range (or a more in-depth but selective range) of cognitive abilities. Additionally, interest in assessing intelligence began to focus on preschool-aged children because of initiatives to intervene early with at-risk children. The purpose of this study was to examine the Stanford-Binet Intelligence Scales, Fifth Edition (SB-V) and Kaufman Assessment Battery for Children, Second Edition (KABC-II) in relation to the Cattell-Horn-Carroll (CHC) theory of intelligence using a population of 200 preschool children. Confirmatory factor analyses (CFAs) were conducted with these two tests individually as well as in conjunction with one another. Different variations of the CHC model were examined to determine which provided the best representation of the underlying CHC constructs measured by these tests. Results of the CFAs with the SB-V revealed that it was best interpreted from a two-stratum model, although results with the KABC-II indicated that the three-stratum CHC model was the best overall design. Finally, results from the joint CFA did not provide support for a cross-battery assessment with these two particular tests.
Department of Educational Psychology
APA, Harvard, Vancouver, ISO, and other styles
7

Gambrell, James Lamar. "Effects of age and schooling on 22 ability and achievement tests." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/2498.

Full text
Abstract:
Although much educational research has investigated the relative effectiveness of different educational interventions and policies, little is known about the absolute net benefits of K-12 schooling independent of growth due to chronological age and out-of-school experience. The nearly universal policy of age tracking in schools makes this a difficult topic to investigate. However, a quasi-experimental regression discontinuity design can be used to separate observed test score differences between grades into independent age and schooling components, yielding an estimate of the net effects of school exposure at each grade level. In this study, a multilevel version of this design was applied to scores on 22 common ability and achievement tests from two major standardized test batteries. The ability battery contained 9 measures of Verbal, Quantitative, and Figural reasoning. The achievement battery contained 13 measures in the areas of Language, Mathematics, Reading, Social Studies, Science, and Sources of Information. The analysis was based on a sample of over 20,000 students selected from a longitudinal database collected by a large U.S. parochial school system. The theory of fluid (Gf) and crystallized (Gc) intelligence predicts that these tests will show systematically different levels of sensitivity to schooling. Indeed, the achievement (Gc) tests were found to be three times more sensitive to schooling than they were to aging (one-year effect sizes of .41 versus .15), whereas the ability (Gf) tests were equally influenced by age (.18) and schooling (.19). Nonetheless, the schooling effect on most Gf tests was substantial, especially when the compounding over a typical school career is considered. This replicates the results of previous investigations of age and schooling using regression discontinuity methods and once again contradicts common interpretations of fluid ability. Different measures of a construct often exhibited varying levels of school sensitivity. Those tests that were less sensitive to schooling generally required reading, reasoning, transfer, synthesis, or translation; posed a wider range of questions; and/or presented problems in an unfamiliar format. Quantitative reasoning tests showed more sensitivity to schooling than figural reasoning tests, while verbal reasoning tests occupied a middle ground between the two. Schooling had the most impact on basic arithmetic skills and mathematical concepts, and a significantly weaker impact on the solution of math word problems. School-related gains on isolated language skills were much larger than gains on solving grammar problems in context. The weakest schooling impact overall was on reading comprehension where effects were no larger than those on verbal ability measures. An interesting dichotomy was found between spelling and paper folding (a measure of figural and spatial reasoning). Spelling skills showed robust schooling effects but a consistently negative age slope, a puzzling result which indicates that younger students in each group outperformed older students. Paper folding showed the opposite pattern, a large age effect and a small but consistently negative schooling effect. Results serve to rebut skepticism about both the impact of schooling on test scores and the validity of distinctions between ability and achievement.
It is argued that the regression discontinuity design has great potential in the measurement of school effectiveness, while also offering a source of validity evidence for test developers and test users. Implications for theories of cognitive ability and future research on schooling effects are discussed.
APA, Harvard, Vancouver, ISO, and other styles
8

MacCann, Carolyn Elizabeth. "New approaches to measuring emotional intelligence." University of Sydney, 2006. http://hdl.handle.net/2123/934.

Full text
Abstract:
New scoring and test construction methods for emotional intelligence (EI) are suggested as alternatives for current practice, where most tests are scored by group judgment and are in ratings-based format. Both the ratings-based format and the proportion-based scores resulting from group judgments may act as method effects, obscuring relationships between EI tests, and between EI and intelligence. In addition, scoring based on standards rather than group judgments adds clarity to the meaning of test scores. For these reasons, two new measures of emotional intelligence (EI) are constructed: (1) the Situational Test of Emotional Understanding (STEU); and (2) the Situational Test of Emotion Management (STEM). Following test construction, validity evidence is collected from four multi-variate studies. The STEU’s items and a standards-based scoring system are developed according to empirically derived appraisal theory concerning the structure of emotion [Roseman, 2001]. The STEM is developed as a Situational Judgment Test (SJT) with situations representing sadness, fear and anger in work life and personal life settings. Two qualitative studies form the basis for the STEM’s item development: (1) content analysis of responses to semi-structured interviews with 31 psychology undergraduates and 19 community volunteers; and (2) content analysis of free responses to targeted vignettes created from these semi-structured interviews (N = 99). The STEM may be scored according to two expert panels of emotions researchers, psychologists, therapists and life coaches (N = 12 and N = 6). In the first multi-variate study (N = 207 psychology undergraduates), both STEU and STEM scores relate strongly to vocabulary test scores and moderately to Agreeableness but no other dimension from the five-factor model of personality. STEU scores predict psychology grade and an emotionally-oriented thinking style after controlling vocabulary and personality test scores (ΔR2 = .08 and .06 respectively). STEM scores did not predict academic achievement but did predict emotionally-oriented thinking and life satisfaction (ΔR2 = .07 and .05 for emotionally-oriented thinking and .04 for life satisfaction). In the second multi-variate study, STEU scores predict lower levels of state anxiety, and STEM scores predict lower levels of state anxiety, depression, and stress among 149 community volunteers from Sydney, Australia. In the third multi-variate study (N = 181 psychology undergraduates), Strategic EI, fluid intelligence (Gf) and crystallized intelligence (Gc) were each measured with three indicators, allowing these constructs to be assessed at the latent variable level. Nested structural equation models show that Strategic EI and Gc form separate latent factors (Δχ2(1) = 12.44, p < .001). However, these factors relate very strongly (r = .73), indicating that Strategic EI may be a primary mental ability underlying Gc. In this study, STEM scores relate to emotionally-oriented thinking but not loneliness, life satisfaction or state stress, and STEU scores do not relate to any of these. STEM scores are significantly and meaningfully higher for females (d = .80), irrespective of gender differences in verbal ability or personality, or whether expert scores are derived from male or female experts.
The fourth multi-variate study (N = 118 psychology undergraduates) distinguishes an EI latent factor (indicated by scores on the STEU, STEM and two emotion recognition ability measures) from a general cognitive ability factor (indicated by three intelligence measures; Δχ2(1) = 10.49, p < .001), although again cognitive ability and EI factors were strongly related (r = .66). Again, STEM scores were significantly higher for females (d = .44) and both STEU and STEM relate to Agreeableness but not to any other dimension from the five-factor model of personality. Taken together, results suggest that: (1) STEU and STEM scores are reasonably reliable and valid tests of EI; (2) EI tests assess slightly different constructs to existing measures of Gc, but more likely form a new primary mental ability within Gc than an entirely separate construct; and (3) the female superiority for EI tests may prove useful for addressing adverse impact in applied settings (e.g., selection for employment, promotion or educational opportunities), particularly given that many current assessment tools result in a male advantage.
APA, Harvard, Vancouver, ISO, and other styles
9

Powers, Abigail Dormire. "The fourth edition of the Stanford-Binet intelligence scale and the Woodcock-Johnson tests of achievement : a criterion validity study." Virtual Press, 1988. http://liblink.bsu.edu/uhtbin/catkey/558350.

Full text
Abstract:
The purpose of the study was to investigate the validity of the Stanford-Binet Intelligence Scale: Fourth Edition (SB:FE) area and composite scores and Sattler's SB:FE factor scores as predictors of school performance on the Woodcock-Johnson Tests of Achievement (WJTA). The subjects were 80 Caucasian third-grade students enrolled in regular education in a rural and small-town school district in northeastern Indiana. The SB:FE and WJTA were administered to all students. Two canonical analyses were conducted to test the overall relationships between sets of SB:FE predictor variables and the set of WJTA criterion variables. Results indicated that the SB:FE area scores and Sattler's SB:FE factor scores were valid predictors of academic achievement at a general level. To clarify the results of the canonical analyses, a series of multiple regression analyses were conducted. Results of multiple regression with SB:FE area and composite scores indicated that the best single predictor of all WJTA scores was the SB:FE Test Composite Score. No other SB:FE variable provided a significant contribution to the regression equation for reading, math, and written language achievement over that offered by the Test Composite Score. Multiple regression analyses were also employed with Sattler's SB:FE factor scores and the WJTA scores. The optimal predictor composite for reading included the Verbal Comprehension and Memory factor scores. To predict math, the best predictor composite consisted of the Nonverbal Reasoning/Visualization and Verbal Comprehension factor scores. The optimal predictor composite for written language included the Nonverbal Reasoning/Visualization and Memory factor scores. Results of the regression analyses indicated that, without exception, the predictor composites composed of the SB:FE area and composite scores were superior in their prediction of school performance to the predictor composites developed from Sattler's SB:FE factor scores. The regression equation containing the SB:FE Test Composite Score alone was determined to be the preferred approach for predicting WJTA scores. Use of the Test Composite Score sacrifices only a minimal degree of accuracy in the prediction of achievement and requires no additional effort to compute.
Department of Educational Psychology
APA, Harvard, Vancouver, ISO, and other styles
10

Meis, Shalena R. "Incremental validity of WISC-IV factor scores in predicting academic achievement on the WIAT-II /." View online, 2009. http://repository.eiu.edu/theses/docs/32211131559271.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Slosson Intelligence Test – Validity"

1

Slosson, Richard L. Slosson Intelligence Test (SIT-R) for children and adults. East Aurora, N.Y.: Slosson Educational Publications, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Slosson, Richard L. Slosson Intelligence Test (SIT-R) for children and adults. East Aurora, N.Y.: Slosson Educational Publications, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Jensen, John A. Slosson Intelligence Test (SIT) for children and adults: Expanded norms tables application and development. East Aurora, N.Y.: Slosson Educational Publications, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Slosson, Richard L. Slosson Intelligence Test (Sort). Pro ed, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Slosson Intelligence Test for Children and Adults Sit/Sit-R1 (2 Books and Test Sheets). Slosson Educational Pubns, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

The Stanford-Binet intelligence scale: A validity study. 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tomsic. Comparative Analysis of the Slosson Intelligence Test Using Old and New Norms for Gifted Selection. Univ of Oregon Pr, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Slosson Intelligence Test – Validity"

1

"GENECES: A Rationale for the Construct Validation of Theories and Tests of Intelligence." In Test Validity, 81–96. Routledge, 2013. http://dx.doi.org/10.4324/9780203056905-14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

"Predictive Validity Wechsler Intelligence Scale for Children." In Comprehending Test Manuals, 24–26. Routledge, 2016. http://dx.doi.org/10.4324/9781315266695-11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

"Test-Taking Style, Personality Traits, and Psychometric Validity." In Intelligence and Personality, 301–13. Psychology Press, 2012. http://dx.doi.org/10.4324/9781410604415-43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Skaik, Huda Alami, and Roslina Othman. "Determinants of Knowledge Sharing Behaviour among Academics in United Arab Emirates." In Business Intelligence, 1402–18. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9562-7.ch068.

Full text
Abstract:
The main objectives of this research are to (i) investigate the practice of knowledge sharing among academics, and (ii) examine the relationship between knowledge sharing behaviour and its predictors based on the Theory of Planned Behaviour. Data were collected through an online survey using a questionnaire from academics in public universities. Using SPSS and PLS-SEM, the data analysis process involved (i) analysis of descriptive statistics to evaluate knowledge sharing practice, (ii) assessment of the measurement model to evaluate item reliability and validity, and (iii) assessment of the structural model to evaluate its validity, path coefficients, and test the hypotheses. The results showed a great extent of knowledge sharing practice. They proved that academics' knowledge sharing behaviour is significantly influenced by intention, which is influenced by attitude, subjective norms, and self-efficacy. Contrary to the theory, the results showed that controllability does not influence intention.
APA, Harvard, Vancouver, ISO, and other styles
5

Benisz, Mark, John O. Willis, and Ron Dumont. "Abuses and Misuses of Intelligence Tests: Facts and Misconceptions." In Pseudoscience. The MIT Press, 2018. http://dx.doi.org/10.7551/mitpress/9780262037426.003.0016.

Full text
Abstract:
Although the term IQ is widely used in popular culture, the true definition of intelligence and how it is measured are widely misunderstood. We provide an overview of how the construct of intelligence and its measurement have evolved over the past century. Several of the most popular theories of intelligence as well as the controversy over the genetic basis of intelligence are reviewed. We also discuss some of the historical and contemporary misuses of intelligence test scores, including some pseudoscientific applications of those scores. Some of the claims of brain training companies are debunked, as is the validity of online IQ tests.
APA, Harvard, Vancouver, ISO, and other styles
6

Paul, Sharon J. "Identifying Ways We Learn." In Art & Science in the Choral Rehearsal, 87–114. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190863760.003.0005.

Full text
Abstract:
This chapter examines the validity and relevance of two frequently discussed educational theories: Multiple Intelligence Theory and Learning Styles. Howard Gardner’s Multiple Intelligence Theory encourages educators to look beyond the standard IQ test as a single measurement of a student’s potential. Rather, he encourages educators to look at students more holistically as defined by eight different intelligences. The chapter continues by explaining that scientific studies do not support the commonly held belief that students learn best through their preferred learning style. Instead, research demonstrates that information learned through multiple sensory entry points will have more triggers for recollection, thus increasing chances for recall. The author shares a variety of exercises created to take advantage of this brain principle in the choral rehearsal. This chapter further explores the brain’s affinity as a pattern-seeking device to respond to structure, and ways to use that affinity as an aid to learning.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Slosson Intelligence Test – Validity"

1

Brito da Silva, Leonardo Enzo, and Donald C. Wunsch. "Validity index-based vigilance test in adaptive resonance theory neural networks." In 2017 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2017. http://dx.doi.org/10.1109/ssci.2017.8285206.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

He, Hailun, Shuang Li, and Jinbao Song. "Validity Test of the Eigenfunction Expansion Method in the Transient Wave Propagation Simulation." In Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007). IEEE, 2007. http://dx.doi.org/10.1109/snpd.2007.563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

He, Hailun, Shuang Li, and Jinbao Song. "Validity Test of the Eigenfunction Expansion Method in the Transient Wave Propagation Simulation." In Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007). IEEE, 2007. http://dx.doi.org/10.1109/snpd.2007.96.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cai, Zinuo, Jianyong Yuan, Yang Hua, Tao Song, Hao Wang, Zhengui Xue, Ningxin Hu, et al. "Themis: A Fair Evaluation Platform for Computer Vision Competitions." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/83.

Full text
Abstract:
It has become increasingly thorny for computer vision competitions to preserve fairness when participants intentionally fine-tune their models against the test datasets to improve their performance. To mitigate such unfairness, competition organizers restrict the training and evaluation process of participants' models. However, such restrictions introduce massive computation overheads for organizers and potential intellectual property leakage for participants. Thus, we propose Themis, a framework that trains a noise generator jointly with organizers and participants to prevent intentional fine-tuning by protecting test datasets from surreptitious manual labeling. Specifically, with the carefully designed noise generator, Themis adds noise to perturb test sets without twisting the performance ranking of participants' models. We evaluate the validity of Themis with a wide spectrum of real-world models and datasets. Our experimental results show that Themis effectively enforces competition fairness by precluding manual labeling of test sets and preserving the performance ranking of participants' models.
APA, Harvard, Vancouver, ISO, and other styles
5

Serrano Elena, Antonio. "METAHEURISTIC ANALYSIS IN REVERSE LOGISTICS OF WASTE." In CIT2016. Congreso de Ingeniería del Transporte. Valencia: Universitat Politècnica València, 2016. http://dx.doi.org/10.4995/cit2016.2016.3163.

Full text
Abstract:
This paper focuses on the use of search metaheuristic techniques on a dynamic and deterministic model to analyze and solve cost optimization and location problems in reverse logistics, within the field of municipal waste management in Málaga (Spain). In this work we have selected two metaheuristic techniques of current research relevance to test the validity of the proposed approach: the Genetic Algorithm (GA), notable for its international presence, and Particle Swarm Optimization (PSO), which works with swarm intelligence. These metaheuristic techniques will be used to solve cost optimization and location problems for MSW recovery facilities (transfer centers and treatment plants).
APA, Harvard, Vancouver, ISO, and other styles
6

Watson, Matt, Jeremy Sheldon, Sanket Amin, Hyungdae Lee, Carl Byington, and Michael Begin. "A Comprehensive High Frequency Vibration Monitoring System for Incipient Fault Detection and Isolation of Gears, Bearings and Shafts/Couplings in Turbine Engines and Accessories." In ASME Turbo Expo 2007: Power for Land, Sea, and Air. ASMEDC, 2007. http://dx.doi.org/10.1115/gt2007-27660.

Full text
Abstract:
The authors have developed a comprehensive, high frequency (1–100 kHz) vibration monitoring system for incipient fault detection of critical rotating components within engines, drive trains, and generators. The high frequency system collects and analyzes vibration data to estimate the current condition of rotary components; detects and isolates anomalous behavior to a particular bearing, gear, shaft or coupling; and assesses the severity of the fault in the isolated faulty component. The system uses either single/multiple accelerometers, mounted on externally accessible locations, or non-contact vibration monitoring sensors to collect data. While there are published instances of vibration monitoring algorithms for bearing or gear fault detection, there are no comprehensive techniques that provide incipient fault detection and isolation in complex machinery with multiple rotary and drive train components. The authors' techniques provide an algorithm-driven system that fulfills this need. The concept at the core of high frequency vibration monitoring for incipient fault detection is the ability of high frequency regions of the signal to transmit information related to component failures during the fault inception stage. Unlike high frequency regions, the lower frequency regions of vibration data have a high machinery noise floor that often masks the incipient fault signature. The low frequency signal reacts to the fault only when fault levels are high enough for the signal to rise over the machinery noise floor. The developed vibration monitoring system therefore utilizes high frequency vibration data to provide a quantitative assessment of the current health of each component. The system sequentially ascertains sensor validity, extracts multiple statistical, time, and frequency domain features from broadband data, fuses these features, and acts upon this information to isolate faults in a particular gear, bearing, or shaft. The techniques are based on concepts like mechanical transmissibility of structures and sensors, statistical signal processing, demodulation, time synchronous averaging, artificial intelligence, failure modes, and faulty vs. healthy vibration behavior for rotating components. The system exploits common aspects of vibration monitoring algorithms, as applicable to all of the monitored components, to reduce algorithm complexity and computational cost. To isolate anomalous behavior to a particular gear, bearing, shaft, or coupling, the system uses design information and knowledge of the degradation process in these components. This system can function with Commercial Off-The-Shelf (COTS) data acquisition and processing systems or can be adapted to aircraft on-board hardware. The authors have successfully tested this system on a wide variety of test stands and aircraft engine test cells through seeded fault and fault progression tests, as described herein. Verification and Validation (V&V) of the algorithms is also addressed.
APA, Harvard, Vancouver, ISO, and other styles