Historically, projective methods, as a class, have been the target of extensive criticism in the scholarly literature (e.g., Lilienfeld et al., 2000). Although the Rorschach has been the prime target of this barrage of condescending scholarly attacks, reviews of the extant survey-based literature on test usage in both practice and doctoral-level training have provided evidence that most of the primary projective tests (the Rorschach, thematic tests, drawing techniques) continue to be included in the assessment tool-kit of mental health professionals, including psychologists (Frauenhoffer et al., 1998; Piotrowski, 2015, 2016, 2018). One recent report, an analysis of published studies over a 30-year period, found that sentence completion tests (SCTs) continue to be embraced by child clinicians and school psychologists for both educational and mental health assessment purposes (Piotrowski, 2017).
Thus, I found the findings reported by Benson et al. (2019) most interesting, but some of their conclusions a bit unfounded. These investigators conducted a survey of assessment practices based on a national sample of practicing school psychologists (most were members of the National Association of School Psychologists (NASP) or nationally certified school psychologists) and compared the results to five prior NASP assessment surveys. The findings indicated that 97% of respondents use broad-band behavior rating scales, 87% use adaptive rating instruments, and 85% focus on evaluations related to academic achievement. Moreover, 30% of professional time in school psychology practice is spent on "writing and formatting reports for individual assessments" (p. 37). These findings, although contrary to the testing practices reflected in prior surveys, are not really surprising or unexpected, given emerging trends in the assessment field and the development of new testing instruments (perennially introduced by both clinician-researchers and test publishers).
What was rather perplexing is the statement by Benson et al. that "Today, few projective tests of any kind are used" (p. 43) by school psychologists. Based on Benson et al.'s own data (presented in their Table 5): 26% of their sample use sentence completion; 11% use the Bender-Gestalt; 16% the House-Tree-Person; 16% the Draw-A-Person; and 15% the Kinetic Family Drawings. Moreover, all of these projective tests (PT) ranked among the top 100 testing instruments across a large pool of rating scales, interview formats, academic and achievement tests, adaptive measures, brief mental health scales, and broad-band personality tests. Admittedly, many of these PT were reported to be used only a handful of times each month, but the fact that they currently rank among a myriad of popular tests is a testament to their clinical, diagnostic, and academic value in both child and adolescent assessment. Indeed, several projective methods were ranked quite high in all five of the earlier NASP assessment surveys cited by Benson et al., and these PT are still among the top 100 tests noted in both Tables 5 and 6. Moreover, projective tests must vie for attention against a perennial stream of newly introduced tests and measures. Furthermore, given that thousands of tests and measures are referenced in the Buros Mental Measurements Yearbook, it appears rather commendable that a number of projective tests make Benson et al.'s list, after all these decades.
In this regard, several studies not cited by Benson et al. are worthy of mention. First, Kennedy et al. (1994), in a national survey of school psychologists, found that a variety of assessment measures, including some projective tests, were quite popular in practice. Another study, based on a sample of 116 NASP members, reported that school psychologists preferred tests with robust psychometric properties, but respondents also acknowledged using projective tests in practice (see Whitney, 2011).
As Benson et al. found that sentence completion tests (SCT) were the most relied-upon projective method in their sample (ranked #52 overall), I would like to focus on this particular assessment measure as a case in point. Although usage of SCT by mental health practitioners has diminished in recent years (Piotrowski, 2017), Delphi poll findings, quite surprisingly, point to the general acceptance of SCT in terms of clinical credibility (see Norcross et al., 2006). Moreover, Koonce (2007), in a survey of NASP members, found that 30% relied on SCT (ranked 9th) in the assessment of ADHD.
Since Benson et al. proffered few conclusions from their findings with regard to graduate training in assessment, I would like to shed some light on this issue. Test usage in practice is partly a function of exposure during instruction and supervision (and depth of coverage) in training programs. Undoubtedly, lack of faculty interest in projective methods translates into disinterest in the next generation of school psychologists, despite suggestions in the recent literature promoting the teaching of PT in the school psychology curriculum (Dana, 2007; Flanagan & Esquivel, 2006; Hughes et al., 2010).
An examination of the recent literature offers some perspective on the status of SCT in professional training settings. To illustrate, Table 1 shows the emphasis on instruction with SCT across several survey-based studies of graduate-level programs, particularly internship sites, in the mental health field. These aggregated findings point to a high degree of acceptance with regard to the clinical utility of SCT by academic faculty and internship training directors.
Table 1. Recent Survey Data on SCT Use in Graduate-level Professional Training

| Study | Method | Findings |
| --- | --- | --- |
| Neukrug et al. (2013) | Survey data from 210 counselor educators across the U.S. examining graduate-level coverage of assessment instruments by instructors | 70% of instructors reported a teaching emphasis on the Rotter ISB (ranked 36th); the Forer SCT was noted by 27%. |
| Ready et al. (2016) | 236 internship directors' views on pre-doctoral academic training in testing and assessment | SCTs were endorsed by about 25% of these directors; somewhat more emphasized in child clinics (by 40%). |
| Bates (2016) | 182 APPIC internship directors' views on assessment/testing training practices | Compared to other tests, SCTs ranked 14th (endorsed by 23% of directors); SCTs are relied upon in assessment by 44% of interns. |
| Mihura et al. (2017) | Assessment/testing training reported by 83 APA clinical psychology programs | SCTs were "covered," overall, in 47% of programs; more emphasized in practitioner-focused training. |
| Stedman et al. (2018) | 355 APA-approved internship directors | Based on type of program (child, adult, mixed), interns' usage of SCTs ranged from 18% to 41%. |
With regard to research, a cursory examination of recent scholarship shows that research attention to traditional SCT and newly developed sentence completion measures remains active (e.g., Huang, 2016; McGrath & Carroll, 2012; Weis et al., 2008). To gain a perspective on recent research interest in PT, Table 2 displays the volume of scholarship on the top five projective assessment methods noted in the Benson et al. study.
Table 2. Number of References where these Projective Tests were a Main Focus of Study since 2012
| Test | Scholarly Articles | Dissertations | Books |
| --- | --- | --- | --- |
| Sentence Completion | 52 | 16 | 1 |
| Bender-Gestalt | 27 | 2 | 2 |
| House-Tree-Person | 13 | 2 | 4 |
| Draw-A-Person | 43 | 6 | 8 |
| Kinetic-Family-Drawing | 5 | 1 | 7 |
I have some departing thoughts on Benson et al.'s presentation. First, while projective tests were singled out as "rarely used today," there was no mention of "rarely used" in reference to symptom-specific measures like the Children's Depression Inventory or the Children's Manifest Anxiety Scale, or broad-band psychopathology measures like the Millon Adolescent Clinical Inventory (MACI) or the MMPI-A (all ranked below 60th place in the Benson et al. survey). Second, the statement by Benson et al. that their findings "exemplifies the evidence-based practice movement" (p. 46) seems somewhat ill-founded and premature. Some of the assessment instruments, including behavior rating scales, that are highly ranked in their Table 5 have not received extensive empirical support, whereas many of the measures not highly ranked have shown solid psychometric credibility (e.g., MMPI-A, MACI, Beck Anxiety Inventory). Perhaps Benson et al.'s statement reflects an aspirational tone. Third, Benson et al. found extensive use of technology resources, such as software and online services, for the administration and scoring of tests. In the name of cost and time efficiency this finding is understandable, but I would caution that reliance on computer-based test interpretations (CBTI) is a separate matter. In fact, published data indicate strong concern among assessment professionals regarding the validity of CBTI reports (Spielberger & Piotrowski, 1990), which may have implications for diagnostic decision-making in school psychologists' report writing (see Wilcox & Schroeder, 2015).
In conclusion, based on current evidence in the available literature, the future status of projective methods for assessment purposes may be somewhat limited, but clearly not moribund, whether in the specialty of school psychology or in practice by mental health professionals (Garb et al., 2002; Peterson et al., 2014). Even the recent APA national survey of professional psychologists finds the Rorschach among the top 10 tests (Wright et al., 2017). That being said, I acknowledge that the reported data in the Benson et al. (2019) study provide a keen perspective on the emerging status and breadth of assessment practices specific to the field of school psychology.
Bates, S. (2016). Internship directors' perspectives on psychological assessment training. Unpublished doctoral dissertation, Pepperdine University.
Benson, N.F., et al. (2019). Test use and assessment practices of school psychologists in the United States: Findings from the 2017 national survey. Journal of School Psychology, 72, 29-48.
Dana, R. (2007). Culturally competent school assessment: Performance measures of personality. Psychology in the Schools, 44(3), 229-238.
Flanagan, R., & Esquivel, G.B. (2006). Empirical and clinical methods in the assessment of personality and psychopathology: An integrative approach for training. Psychology in the Schools, 43(4), 513-526.
Frauenhoffer, D., et al. (1998). Psychological test usage among licensed mental health practitioners: A multidisciplinary survey. Journal of Psychological Practice, 4, 28-33.
Garb, H.N., et al. (2002). Effective use of projective techniques in clinical practice: Let the data help with selection and interpretation. Professional Psychology: Research and Practice, 33, 454-463.
Huang, C. (2016). The development of Multifunctional Sentence Completion Test. Bulletin of Educational Psychology, 47(4), 547-579.
Hughes, T.L., et al. (2010). The importance of personality assessment in school psychology training programs. In E. Garcia-Vasquez, T.D. Crespi, & C.A. Riccio (Eds.), Handbook of education, training, and supervision of school psychologists in school and community, Vol. 1: Foundations of professional practice (pp. 185-211). New York: Routledge.
Kennedy, M.L., et al. (1994). Social-emotional assessment practices in school psychology. Journal of Psychoeducational Assessment, 12, 228-240.
Koonce, D.A. (2007). Attention deficit hyperactivity disorder assessment practices by practicing school psychologists. Journal of Psychoeducational Assessment, 25(4), 319-333.
Lilienfeld, S.O., et al. (2000). The scientific status of projective techniques. Psychological Science in the Public Interest, 1(2), 27-66.
McGrath, R.E., & Carroll, E.J. (2012). The current status of “projective tests”. In H. Cooper et al. (Eds.), APA handbook of research methods in psychology, Vol. 1: Foundations, planning, measures, and psychometrics (pp. 329-348). Washington, DC: American Psychological Association.
Mihura, J.L., et al. (2017). Psychological assessment training in clinical psychology doctoral programs. Journal of Personality Assessment, 99(2), 153-164.
Neukrug, E., et al. (2013). A national survey of assessment instruments taught by counselor educators. Counselor Education & Supervision, 52, 207-219.
Norcross, J.C., et al. (2006). Discredited psychological treatments and tests: A Delphi poll. Professional Psychology: Research and Practice, 37(5), 515-522.
Peterson, C.H., et al. (2014). Assessment use by counselors in the United States: Implications for policy and practice. Journal of Counseling & Development, 92, 90-99.
Piotrowski, C. (2018). Sentence completion methods: A summary review of 70 survey-based studies of training and professional settings. Journal of Projective Psychology & Mental Health, 25(1), 60-75.
Piotrowski, C. (2017). Thematic Apperception Techniques (TAT, CAT) in assessment: A summary review of 67 survey-based studies of training and professional settings. Journal of Projective Psychology & Mental Health, 24(1), 11-24.
Piotrowski, C. (2016). Drawing techniques in assessment: A summary review of 60 survey-based studies of training and professional settings. Journal of the Indian Academy of Applied Psychology, 42(2), 220-236.
Piotrowski, C. (2016). Bender-Gestalt Test usage worldwide: A review of 30 practice-based studies. Journal of Projective Psychology & Mental Health, 23(2), 73-81.
Piotrowski, C. (2015). Projective techniques usage worldwide: A review of applied settings 1995-2015. Journal of the Indian Academy of Applied Psychology, 41(3), 9-19.
Piotrowski, C. (2015). Clinical instruction on projective techniques in the USA: A review of academic training settings 1995-2014. Journal of Projective Psychology & Mental Health, 22(2), 83-92.
Ready, R.E., et al. (2016). Psychology internship directors' perceptions of pre-internship training preparation in assessment. North American Journal of Psychology, 18(2), 317-334.
Spielberger, C.D., & Piotrowski, C. (1990). Clinicians' attitudes toward computer-based testing. The Clinical Psychologist, 43(4), 60-64.
Stedman, J.M., et al. (2018). Current patterns in personality assessment during internship. Journal of Clinical Psychology, 74(3), 398-406.
Weis, R., et al. (2008). Construct validity of the Rotter Incomplete Sentences Blank with clinic-referred and non-referred adolescents. Journal of Personality Assessment, 90(6), 564-573.
Whitney, S.R. (2011). Assessment practices of school psychologists. Unpublished doctoral dissertation, City University of New York.
Wilcox, G., & Schroeder, M. (2015). What comes before report writing? Attending to clinical reasoning and thinking errors in school psychology. Journal of Psychoeducational Assessment, 33(7), 652-661.
Wright, C.V., et al. (2017). Assessment practices of professional psychologists: Results of a national survey. Professional Psychology: Research and Practice, 48(2), 73-78.
Chris Piotrowski, Ph.D.
Senior Editor and Research Consultant, University of West Florida