Journal articles on the topic 'Wonderlic personnel test'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 23 journal articles for your research on the topic 'Wonderlic personnel test.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Matthews, T. Darin, and Kerry S. Lassiter. "What Does the Wonderlic Personnel Test Measure?" Psychological Reports 100, no. 3 (June 2007): 707–12. http://dx.doi.org/10.2466/pr0.100.3.707-712.

Abstract:
The present investigation examined the concurrent validity of the Wonderlic Personnel Test and the Woodcock-Johnson–Revised (WJ–R) Tests of Cognitive Ability, which were administered to 37 college students, 27 women and 10 men, who ranged in age from 18 to 54 years (M = 27.1, SD = 8.7). Analysis yielded significant correlation coefficients between the Wonderlic Total score and the score for the WJ–R Broad Cognitive Ability Standard Battery (r = .55) and the Comprehensive Knowledge score (r = .34). Performance on the Wonderlic was not significantly correlated with fluid reasoning skills (r = .26) but was most strongly associated with overall intellectual functioning, as measured by the Woodcock-Johnson Standard Battery IQ score. While scores on the Wonderlic were more strongly associated with crystallized than fluid reasoning abilities, the Wonderlic test scores did not clearly show convergent and divergent validity evidence across these two broad domains of cognitive ability.
2

McKelvie, Stuart J. "Validity and Reliability Findings for an Experimental Short Form of the Wonderlic Personnel Test in an Academic Setting." Psychological Reports 75, no. 2 (October 1994): 907–10. http://dx.doi.org/10.2466/pr0.1994.75.2.907.

Abstract:
For 225 undergraduates, scores on an untimed experimental short form of the Wonderlic Personnel Test correlated .285 with grades. Three-week predictive validity against the Wonderlic test was .545 (n = 86). Three-week test-retest and alternate-form reliabilities for the short form were .658 and .722 (ns = 86, 85), respectively. The untimed short form is not judged to be superior to the original Wonderlic test in an academic setting.
3

Tarigan, Medianta, and Fadillah Fadillah. "Analisa Item Response Theory Wonderlic Personnel Test (WPT)." Jurnal Pengukuran Psikologi dan Pendidikan Indonesia (JP3I) 8, no. 1 (November 25, 2019): 37–45. http://dx.doi.org/10.15408/jp3i.v8i1.10819.

Abstract:
The Wonderlic Personnel Test (WPT) is a psychological instrument that measures individual cognitive ability through the level of learning ability, understanding of instructions, and problem solving. In this study, WPT items were tested using the Item Response Theory (IRT) method. There were 374 participating subjects, and the results showed that 31 items fit the model, while 19 items were misfit. According to the IRT 2PL model analysis, the mean examinee ability was -0.01 (SD = 1.19), the mean difficulty (b) was 0.48 (SD = 2.58), and the mean discrimination (a) was 0.62 (SD = 0.38). The WPT thus appears to contain misfit items that do not measure the same dimension. These statistical results are in line with the characteristics of the WPT, which is built from three abilities to measure intelligence.
4

Kusdiyati, Sulisworo. "STUDI KORELASI WPT (WONDERLIC PERSONNEL TEST) DAN IST (INTELLIGENZ STRUCTUR TEST)." Psympathic : Jurnal Ilmiah Psikologi 3, no. 1 (February 27, 2018): 59–76. http://dx.doi.org/10.15575/psy.v3i1.2177.

Abstract:
Psychological tests are used in high schools, universities, and new-employee assessment to help stakeholders with selection and placement. These tests generally include measures of intellectual capability, personality tests, and performance tests. Numerous tests can be used for this purpose, and some, such as the IST-70 and the WPT, are quite popular among assessors. However, the intelligence scores produced by the IST-70 and the WPT are known to differ: IQ scores from the WPT are generally 1-5 points lower than those from the IST. The question is which of the two tests yields a valid result. This correlational study was conducted to gather empirical data on the s-factor and g-factor in the IST, the correlation between the WPT and the IST, and the differences between the two tests. The analyses examined (1) correlations between IST subtests, (2) correlations of IST subtests with the WPT, (3) IQ scores from the WPT and the IST, and (4) differences between the IQ scores produced by the WPT and the IST. It was concluded that the IST measures both the s-factor and the g-factor, while the WPT measures only the g-factor.
5

McKelvie, Stuart J. "The Wonderlic Personnel Test: Reliability and Validity in an Academic Setting." Psychological Reports 65, no. 1 (August 1989): 161–62. http://dx.doi.org/10.2466/pr0.1989.65.1.161.

Abstract:
Based on a total of 290 undergraduates, the split-half reliability of the Wonderlic Personnel Test was .87 and the Pearson correlation between test score and mean grade was .21. Implications are presented for the use of this test in an academic setting.
6

Frisch, Michael B., and Norman S. Jessop. "Improving WAIS—R Estimates with the Shipley-Hartford and Wonderlic Personnel Tests: Need to Control for Reading Ability." Psychological Reports 65, no. 3 (December 1989): 923–28. http://dx.doi.org/10.2466/pr0.1989.65.3.923.

Abstract:
The present study attempted to evaluate the effectiveness of the Shipley-Hartford and Wonderlic in the prediction of Full Scale WAIS—R IQs while controlling for subjects' reading ability. 34 psychiatric patients from the Waco VA Medical Center who had attained at least a sixth-grade reading level on the basis of the Wide Range Achievement Test—Revised were administered the WAIS—R, Shipley-Hartford, and Wonderlic in random order. Significant correlations were found between Full Scale WAIS—R IQs and age-corrected WAIS—R equivalent scores for the Shipley-Hartford and the Wonderlic. The results of these and other analyses supported the view that both the Shipley-Hartford and Wonderlic are accurate in making gross estimates of Full Scale WAIS—R IQs. The results are discussed in the context of previous research, using both clinical and nonclinical samples. Clinical implications and suggestions for research, including further methodological refinements, are also discussed.
7

Furnham, Adrian. "Sex, IQ, and Emotional Intelligence." Psychological Reports 105, no. 3_suppl (December 2009): 1092–94. http://dx.doi.org/10.2466/pr0.105.f.1092-1094.

Abstract:
150 young bankers estimated their IQ (Academic/Cognitive Intelligence) and EQ (Emotional Intelligence) before taking an IQ test. Pearson correlations were r = .40 and .41 between IQ test (Wonderlic Personnel Test) scores (M = 32.8) and IQ estimates (M = 27.9) and EQ estimates, respectively. Women's mean self-estimated IQ was significantly lower than men's.
8

Rosenstein, Rebecca, and Albert S. Glickman. "Type Size and Performance of the Elderly on the Wonderlic Personnel Test." Journal of Applied Gerontology 13, no. 2 (June 1994): 185–92. http://dx.doi.org/10.1177/073346489401300206.

9

Verpaelst, Celissa C., and Lionel G. Standing. "Demand Characteristics of Music Affect Performance on the Wonderlic Personnel Test of Intelligence." Perceptual and Motor Skills 104, no. 1 (February 2007): 153–54. http://dx.doi.org/10.2466/pms.104.1.153-154.

10

Dodrill, Carl B., and Molly H. Warner. "Further studies of the Wonderlic Personnel Test as a brief measure of intelligence." Journal of Consulting and Clinical Psychology 56, no. 1 (1988): 145–47. http://dx.doi.org/10.1037/0022-006x.56.1.145.

11

Blickle, Gerhard, Jochen Kramer, and Jan Mierke. "Telephone-Administered Intelligence Testing for Research in Work and Organizational Psychology." European Journal of Psychological Assessment 26, no. 3 (January 2010): 154–61. http://dx.doi.org/10.1027/1015-5759/a000022.

Abstract:
In a 2 × 2 experimental study, we used the Wonderlic Personnel Test (WPT) to assess the quality of intelligence testing by telephone with a sample of 210 individuals active in the world of work and compared it both inter- and intraindividually with intelligence testing by face-to-face administration. The population median (rxx = .88) of ordinary face-to-face Wonderlic test-retest reliabilities fit the present data. The pattern of relationships between the WPT and tests of verbal and emotional intelligence was equal in both modalities. The WPT showed high convergence with verbal intelligence and was orthogonal to emotional intelligence. In both experimental groups, WPT scores were positively related to the level of formal education and occupational attainment. Strengths and limitations of the study are discussed. We conclude that, given cooperative test takers, intelligence testing by telephone is a promising alternative to traditional forms of intelligence testing in work and organizational psychological research.
12

Pesta, Bryan J., Sharon Bertsch, Peter J. Poznanski, and William H. Bommer. "Sex differences on elementary cognitive tasks despite no differences on the Wonderlic Personnel Test." Personality and Individual Differences 45, no. 5 (October 2008): 429–31. http://dx.doi.org/10.1016/j.paid.2008.05.028.

13

Edinger, Jack D., Robert H. Shipley, C. Edward Watkins, and Elliott B. Hammett. "Validity of the Wonderlic Personnel Test as a brief IQ measure in psychiatric patients." Journal of Consulting and Clinical Psychology 53, no. 6 (1985): 937–39. http://dx.doi.org/10.1037/0022-006x.53.6.937.

14

Kennedy, Robert S., Dennis R. Baltzley, Janet J. Turnage, and Marshall B. Jones. "Factor Analysis and Predictive Validity of Microcomputer-Based Tests." Perceptual and Motor Skills 69, no. 3_suppl (December 1989): 1059–74. http://dx.doi.org/10.2466/pms.1989.69.3f.1059.

Abstract:
11 tests were selected from two microcomputer-based performance test batteries because previously these tests exhibited rapid stability (< 10 min. of practice) and high retest reliability efficiencies (r > 0.707 for each 3 min. of testing). The battery was administered three times to each of 108 college students (48 men and 60 women) and a factor analysis was performed. Two of the three identified factors appear to be related to information processing (“encoding” and “throughput/decoding”), and the third was named an “output/speed” factor. The spatial, memory, and verbal tests loaded on the “encoding” factor and included Grammatical Reasoning, Pattern Comparison, Continuous Recall, and Matrix Rotation. The “throughput/decoding” tests included perceptual/numerical tests like Math Processing, Code Substitution, and Pattern Comparison. The output speed factor was identified by Tapping and Reaction Time tests. The Wonderlic Personnel Test was group administered before the first and after the last administration of the performance tests. The multiple Rs in the total sample between combined Wonderlic as a criterion and less than 5 min. of microcomputer testing on Grammatical Reasoning and Math Processing as predictors ranged between 0.41 and 0.52 on the three test administrations. Based on these results, the authors recommend a core battery which, if time permits, would consist of two tests from each factor. Such a battery is now known to permit stable, reliable, and efficient assessment.
15

Kennedy, Robert S., Dennis R. Baltzley, Janet J. Turnage, and Marshall B. Jones. "Factor Analysis and Predictive Validity of Microcomputer-Based Tests." Perceptual and Motor Skills 69, no. 3-2 (December 1989): 1059–74. http://dx.doi.org/10.1177/00315125890693-201.

16

Hawkins, Keith A., Stephen V. Faraone, John R. Pepple, Larry J. Seidman, et al. "WAIS–R validation of the Wonderlic Personnel Test as a brief intelligence measure in a psychiatric sample." Psychological Assessment 2, no. 2 (1990): 198–201. http://dx.doi.org/10.1037/1040-3590.2.2.198.

17

Bertsch, Sharon, and Bryan J. Pesta. "The Wonderlic Personnel Test and elementary cognitive tasks as predictors of religious sectarianism, scriptural acceptance and religious questioning." Intelligence 37, no. 3 (May 2009): 231–37. http://dx.doi.org/10.1016/j.intell.2008.10.003.

18

Saltzman, J., E. Strauss, M. Hunter, and F. Spellacy. "Validity of the Wonderlic Personnel Test as a Brief Measure of Intelligence in Individuals Referred for Evaluation of Head Injury." Archives of Clinical Neuropsychology 13, no. 7 (October 1998): 611–16. http://dx.doi.org/10.1016/s0887-6177(97)00077-2.

19

Saltzman, J., E. Strauss, M. Hunter, and F. Spellacy. "Validity of the Wonderlic Personnel Test as a Brief Measure of Intelligence in Individuals Referred for Evaluation of Head Injury." Archives of Clinical Neuropsychology 13, no. 7 (October 1, 1998): 611–16. http://dx.doi.org/10.1093/arclin/13.7.611.

20

Whetzel, Deborah L., and Michael A. McDaniel. "Reliability of Validity Generalization Data Bases." Psychological Reports 63, no. 1 (August 1988): 131–34. http://dx.doi.org/10.2466/pr0.1988.63.1.131.

Abstract:
This paper addresses the usefulness of reporting coder reliability in validity generalization studies. The Principles for the Validation and Use of Personnel Selection Instruments of the Society for Industrial and Organizational Psychology state that, given the results of meta-analytic studies, validities generalize far more than previously believed; however, users of validity generalization results are required to report the reliability of data entering validity generalization analyses. In response to this concern, reliability coefficients were computed on the validity and sample size between two studies (i.e., data bases) of the Wonderlic Personnel Test and the Otis Test of General Mental Ability. These variables, validity and sample size, were investigated because they are the crucial components in validity generalization analysis. Results indicated that the correlation between the validities of the two studies was .99 and the correlation between the sample sizes of the two studies was 1.00. To illustrate further the reliability of coding in validity generalization research, separate meta-analyses were conducted on the validity of these tests on each of the two data bases. When correcting only for sampling error, the results indicated that the separate meta-analyses yielded identical results, M = .24, SD = .09. These results show that concerns about the reliability of validity generalization data bases are unwarranted and that independent investigators coding the same data record the same values and obtain the same results.
21

Blackwell, Terry L. "Test Review: Wonderlic, E. F. (1999). Wonderlic Personnel Test.™ Libertyville, IL: Wonderlic, Inc. Paper and pencil version (includes tests, user's manual, and scoring key): $95 for 25 tests, $180 for 100 tests; PC version (includes testing and scoring software and user's manual): $95 for 25 tests, $180 for 100 tests." Rehabilitation Counseling Bulletin 44, no. 3 (April 2001): 184–85. http://dx.doi.org/10.1177/003435520104400313.

22

Salinsky, Martin C., Daniel Storzbach, Carl B. Dodrill, and Laurence M. Binder. "Test–retest bias, reliability, and regression equations for neuropsychological measures repeated over a 12–16-week period." Journal of the International Neuropsychological Society 7, no. 5 (July 2001): 597–605. http://dx.doi.org/10.1017/s1355617701755075.

Abstract:
The interpretation of neurobehavioral change over time requires knowledge of the test–retest characteristics of the measures. Without this information it is not possible to distinguish a true change (i.e., one reflecting the occurrence or resolution of an intervening process) from one occurring on the basis of chance or systematic bias. We tested a group of 72 healthy young to middle-aged adults twice over a 12-to-16-week interval in order to observe the change in scores over time when there was no known intervention. The test battery consisted of seven commonly used cognitive measures and the Profile of Mood States (POMS). Test–retest regression equations were calculated for each measure using initial performance, age, education, and a measure of general intellectual function (Wonderlic Personnel Test) as regressors. Test–retest correlations ranged from .39 (POMS Fatigue) to .89 (Digit Symbol). Cognitive measures generally yielded higher correlations than did the POMS. Univariate regressions based only on initial performance adequately predicted retest performance for the majority of measures. Age and education had a relatively minor influence. Practice effects and regression to the mean were common. These test–retest regression equations can be used to predict retest scores when there has been no known intervention. They can also be used to generate statistical statements regarding the significance of change in an individual's performance over a 12-to-16-week interval. (JINS, 2001, 7, 597–605.)
23

Hiebler-Ragger, M., C. M. Perchtold-Stefan, H. F. Unterrainer, J. Fuchshuber, K. Koschutnig, L. Nausner, H. P. Kapfhammer, I. Papousek, E. M. Weiss, and A. Fink. "Lower cognitive reappraisal capacity is related to impairments in attachment and personality structure in poly-drug use: an fMRI study." Brain Imaging and Behavior, November 21, 2020. http://dx.doi.org/10.1007/s11682-020-00414-3.

Abstract:
Insecure attachment, impaired personality structure, and impaired emotion regulation figure prominently in substance use disorders. While negative emotions can trigger drug use and relapse, cognitive reappraisal may reduce emotional strain by promoting changes in perspective. In the present study, we explored behavioral and neural correlates of cognitive reappraisal in poly-drug use disorder by testing individuals' capability to generate cognitive reappraisals for aversive events (Reappraisal Inventiveness Test). 18 inpatients with poly-drug use disorder and 16 controls completed the Adult Attachment Scale, the Emotion Regulation Questionnaire, the Brief Symptom Inventory, the Wonderlic Personnel Test, and the Operationalized Psychodynamic Diagnosis Structure Questionnaire, as well as two versions of the Reappraisal Inventiveness Test (during fMRI and outside the lab). Compared to controls, poly-drug inpatients reported impaired personality structure, attachment, and emotion regulation abilities. In the Reappraisal Inventiveness Test, poly-drug inpatients were less flexible and fluent in generating reappraisals for anger-eliciting situations. Corresponding to previous brain imaging evidence, cognitive reappraisal efforts of both groups were reflected in activation of left frontal regions, particularly left superior and middle frontal gyri and left supplemental motor areas. However, no group differences in neural activation patterns emerged. This suggests that despite cognitive reappraisal impairments on a behavioral level, neural reflections of these deficits in poly-drug use disorder might be more complex.