SIS Journal of Projective Psychology & Mental Health

Case 54: Two Contemporary Rorschach Systems: Views of Two Experienced Rorschachers on the CS and R-PAS

Published: March 17, 2026


Anthony D. Bram and Jed Yalof

          There are two prominent evidence-based systems of the Rorschach currently in practice: the Comprehensive System (CS; Exner, 2003), which dominated the field for nearly four decades, and the more recently published Rorschach Performance Assessment System (R-PAS; Meyer, Viglione, Mihura, Erard, & Erdberg, 2011). The existence of two systems has raised questions and dilemmas for assessment clinicians and educators. In this article, as experienced psychoanalytic assessors and supervisors, we offer our views on the differences between the CS and R-PAS, their strengths and limitations, the tradeoffs inherent in choosing one system over the other, and suggestions for advancing the Rorschach in a way that is meaningful for practitioners and students involved in clinical personality assessment.

          These are challenging times for depth-oriented personality assessment and its practitioners, exacerbating concerns for the future of the field. Graduate psychology programs continue to marginalize and eliminate training in traditional performance-based (projective) assessment (Piotrowski, 2015), notably the Rorschach. Despite the fact that multimethod assessment in clinical practice—involving the systematic integration of self-report, collateral-report, and performance-based data—is widely accepted and valued (Bram & Peebles, 2014; Hopwood & Bornstein, 2014; Weiner & Greene, 2008), the mainstream of clinical psychology, which shapes training and thus the future practice of psychology (see Piotrowski, 2017), has shown a trend toward marginalizing the research evidence base of performance-based personality assessment. Indeed, a consortium of psychology researchers (Kotov et al., 2017) recently proposed a new quantitative, evidence-based diagnostic scheme for mental health, and the authors included numerous examples of assessment methodologies; not surprisingly, there was no mention at all of any performance-based personality measures. Recent findings from Wright et al.'s (2017) survey raise similar concerns that despite lip service to multimethod assessment, graduate programs restrict training in self-report and performance-based personality measures (see also Waugh, 2016).

           At a time when depth assessment is embattled within the wider sphere of clinical psychology, many practitioners in our sub-discipline are also experiencing tension, uncertainty, and confusion associated with dilemmas around which Rorschach method to use. There are now two prominent Rorschach systems of administration, scoring, and interpretation from which to choose: the Comprehensive System (CS; Exner, 2003), which dominated the field for close to four decades, and the more recently published Rorschach Performance Assessment System (R-PAS; Meyer, Viglione, Mihura, Erard, & Erdberg, 2011), which derived much of its direction from the CS while introducing new ideas with respect to test procedure, coding, and interpretation. Anecdotally, we and other clinicians, training directors, and supervisors (personal communications) have struggled with the question of which method to teach, at times becoming locked into a binary, which we view as necessary in delineating differences but potentially divisive in foreclosing discussion, given that one of these two Rorschach systems may be more or less patient-focused, depending on the referral goals.

Anthony D. Bram, PhD, ABAP, Private Practice, Lexington, MA, Cambridge Health Alliance/Harvard Medical School, Boston Psychoanalytic Institute; and Jed Yalof, PsyD, ABPP, ABAP, ABSNP, Immaculata University, Austen Riggs Center, Psychoanalytic Center of Philadelphia, Private Practice, Haverford, PA. (Correspondence concerning this article to: Anthony D. Bram, PhD, 363 Massachusetts Avenue, LL#11, Lexington, MA 02420. Email: Anthony_Bram@hms.harvard.edu)

Note: Jed Yalof received support for this project from the Immaculata University College of Graduate Studies, Graduate Enhancement Model.

Keywords: CS and R-PAS

In this article, after a very brief summary of the historical context for the emergence of R-PAS, we describe our efforts as clinicians—more specifically, as experienced psychoanalysts and assessors (Bram & Yalof, 2015)—to consider the merits and weaknesses of each system, tradeoffs in selecting one over the other, clinical opportunities for integration, and conditions under which each system may be more or less beneficial. Based on our personal views, we will also describe implications for further developing R-PAS (we cannot do the same for the CS because that system has been frozen by Exner's heirs).

Brief Historical Context:

The second author (Yalof, 2017) published an essay in the online R-PAS Community Forum and discussed some of the differences between R-PAS and the CS that can affect clinician decision-making about which test to use. Yalof addressed (a) R-PAS's controlling for number of responses, (b) R-PAS's inclusion of new variables and elimination of other CS variables based on meta-analytic findings (Mihura, Meyer, Dumitrascu, & Bombel, 2013), (c) R-PAS's norms being derived from CS norms, despite different administration procedures, and (d) the challenge of psychologically transitioning to R-PAS when feeling loyal to the CS. The R-PAS team (Mihura, Meyer, & Viglione, 2017) responded to Yalof's commentary by noting their loyalty to the CS, but also sharing how the historical development of R-PAS emerged from the rigor of questions raised by Exner in a 1997 Alumni Newsletter describing the mission of the Rorschach Research Council, and the reason to move forward with R-PAS following Exner's death in 2006 and his heirs' wishes to leave the CS unchanged. The Council was involved with various developments of CS procedures and variables which, after Exner passed, were incorporated into R-PAS, thus bridging R-PAS to the CS in both continuity of ideas and mission. Moreover, the Rorschach meta-analytic study that led to decisions about variables and coding began when Mihura joined the Council, further highlighting the loyalty to the CS and its research, driven initially by immersion in the CS and championed further by R-PAS. These two essays lead us to a discussion of some key differences between the CS and R-PAS.

Summary of Key Differences between CS and R-PAS:

R-PAS deviates from the CS in a number of ways (see Diener, 2013). Here, we briefly highlight some of the most meaningful changes from the CS to R-PAS. Most notably, in the interest of enhancing the validity of Rorschach summary variables to aid interpretation, the format of Rorschach administration has been altered in R-PAS to increase the likelihood of eliciting an optimal number (18-27) of responses. With respect to coding, R-PAS offers a revision of the CS Form Quality (FQ) tables; eliminates coding form dominance of achromatic color, shading, and reflection responses; eliminates 11 content categories; and adds thematic codes for aggressive content, implicit dependency, and object representations. To aid interpretation of Rorschach variables, R-PAS converts raw scores or ratios to standard scores with a mean of 100 and standard deviation of 15, akin to the Wechsler (e.g., 2008) intelligence tests. Meyer et al. (2011) described their statistical modeling procedures for generating norms from an international reference sample that was collected using CS administration procedures.

Commentary:

Our commentary on differences between the CS and R-PAS will address considerations related to administration, coding, summary scores and indices, and interpretation.

Administration:

           The differences in administration between the CS and R-PAS involve tradeoffs of advantages and sacrifices. At the most basic levels of pragmatics and time and cost efficiency, clinician and patient friendliness, and interpretability of summary scores, there is much to like about the R-Optimized (R-Opt) administration of R-PAS. R-PAS's initial instructions to the patient are to "give two, maybe three responses to each card." In turn, clinicians are instructed to "prompt for two responses" and "pull after four" for each card. These changes offer welcome relief to those of us who, when using the CS, are filled with dread when it looks like the patient will not give the requisite 14 responses or, alternatively, will offer an inordinate number of responses, making scoring that much more difficult. Both sources of dread involve concerns about the potential added time, and thus cost, of testing (whether or not passed on to the patient or other payers). Many assessors experienced with the CS might relate to our experience along these lines: "I thought I left enough time for the Rorschach….Oh no, I have a therapy patient coming in 45 minutes…. Will we be done with Inquiry by then?" But perhaps even more dreaded is the awkward and often shaming confrontation of the patient who is on track to offer fewer than 14 responses and for whom it may be necessary to re-administer the entire set of 10 cards from scratch, informing the patient that their performance was not good enough (Yalof & Rosenstein, 2014). Occasionally, with a narcissistically fragile patient, this CS requirement can disrupt the diagnostic alliance, even to the point of threatening completion of the evaluation or openness to subsequent test feedback (e.g., Bram, 2010).

           What is lost in moving from the CS to R-PAS administration, though, is, in our opinion, some of the Rorschach's role in the test battery of sampling psychological functioning and behavior under conditions of minimal external structure. Level of external structure—in assessment, treatment, and life—involves clarity of instructions, expectations, and rules; familiarity and predictability; and the availability of others to provide monitoring, enforcement of rules, and feedback. Historically, the Rorschach has been the epitome of the unstructured assessment method. This was carried forward in the CS, where there are no explicit guidelines for how many responses to give to each card: Patients are on their own to figure this out, and how they respond is often meaningful data—how constricted vs. activated and flooded do they become and, if so, when (e.g., beginning or end? chromatic or achromatic cards? around particular content?)? And with such constriction or activation, what happens to other aspects of ego functioning (e.g., reality testing, reasoning, affect regulation, relatedness), as assessed by careful configurational and sequence analyses (Bram & Peebles, 2014; Schafer, 1954; Weiner, 1998)? From a psychometric standpoint, there is an optimal number of responses (18-27) to guide empirically-based interpretation of scores, and R-PAS instructions are designed to elicit this. But from a broader clinical perspective, particularly when referral questions are related to how much structure a patient needs in treatment or other aspects of life, the R-Opt procedure of R-PAS can undercut the Rorschach's ability to more fully assess psychological functioning when less external structure is available. In this sense, the CS Rorschach task differs from the R-PAS task. As R-PAS administration adds meaningful degrees of structure to the Rorschach task, we believe that CS administration (and thus scoring and interpretation) is more conducive to answering such questions.

          Additionally, even though above we spoke to some relief in R-PAS's shift away from the potential criticism and shame inherent in the CS procedure for R<14 protocols of telling the patient "There is a problem…You didn't give enough responses" and requiring full re-administration, this shift is not without a sacrifice. Yalof and Rosenstein (2014) described a creative approach in which this CS procedure provided an incidental opportunity to assess a patient's superego functioning and self-esteem regulation. How does a patient manage in the face of an authority figure telling them they did not do what was expected or was not good enough (even if what was expected was not made explicit)? Does the patient become self-critical and self-attacking? Self-protective and attacking of the examiner? Apologetic and wishing to please? How do the content and structure of responses and patient-examiner interactions vary between the initial and re-administration, and what clues does this offer about the experience of superego pressures and/or the regulation of shame? The CS administration potentially affords us the opportunity to answer such treatment-relevant questions, while R-PAS administration forecloses on it.¹

Coding and Recording of Codes:

          As psychoanalysts, we were pleased with R-PAS's addition of the clinically and theoretically meaningful and empirically-supported scores assessing implicit dependency, internalized object relations, aggressive preoccupation, and ego impairment, respectively Oral Dependent Language (ODL), Mutuality of Autonomy Health and Pathology (MAH and MAP), Aggressive Content (AGC), and the Ego Impairment Index (EII-3). Although, consistent with the Exner family's wishes, these measures have not been added to the CS, Exner himself was reportedly considering doing so (Mihura, Meyer, & Viglione, 2017). It is thus possible for assessors continuing to use the CS to score these adjunctively and access the R-PAS coding rules and norms for interpretation. We find the results of the Mihura et al. (2013) meta-analyses so compelling that we support wholeheartedly their recommendation that assessors using the CS focus interpretation on only those variables receiving at least modest empirical support.

          In our opinion, the greatest single clinical loss if one moves completely from the CS to R-PAS involves the latter's jettisoning of form dominance coding for shading (Y, T, V), achromatic color (C'), and reflection (r) determinants. In R-PAS, for example, rather than determine whether a response with texture (T) should be coded FT, TF, or pure T (as has been the case in the CS), any response having texture would be coded as plain T. Our understanding is that the authors of R-PAS eliminated this procedure because of (1) real concerns about inter-rater reliability involved in coding form dominance (e.g., differentiating among FT, TF, and pure T) and (2) the fact that the gradations of form dominance for each determinant (e.g., again FT vs. TF vs. T) are low-count variables that are difficult to study empirically, so it makes more sense to aggregate all T (or Y, V, C', or r) responses. Although we appreciate this rationale, our concern is that clinically essential aspects of such responses are not captured when these scores are collapsed. Notably, in a classic article, Kleiger (1997) described how, just as in the case of responses with color as determinant, the degree of form dominance informs us about the level of ego involvement or cognition that mediates the experience of the different affects represented by shading and achromatic determinants. Thus, not all shading or achromatic responses are equivalent, and the failure to code form dominance risks obliterating this fine-tuned, meaningful differentiation. Bram and Peebles (2014) provide a series of contrasting "moth" responses to Card I that vividly illustrates how coding the degree of form dominance along with shading and achromatic determinants captures essential aspects of the psychological process underpinning a response (see pp. 168-170). We also acknowledge our ego-psychological bent (Rapaport, Gill, & Schafer, 1968), in which form implies a certain degree of cognitive control over the emotional aspects of the response process, and hope that additional research will support the utility of re-adding distinctions according to form dominance.

          We worry that future Rorschachers, who might only be familiar with R-PAS, may not even be aware of such distinctions, which are crucial to systematic configurational and sequence analysis of response structure (constellation of form quality, degree of form dominance, cognitive scores) and content (Bram & Peebles, 2014; Schafer, 1954; Weiner, 1998). In our approach to Rorschach assessment, we value and strive to integrate both nomothetic interpretation of the scores (e.g., comparing to norms, what the sum of T responses might be linked to empirically) and this kind of disciplined idiographic analysis. For one of us (JY), referral questions requiring a more nuanced assessment of affect regulation might tip the scale toward CS administration and scoring (but also coding AGC and ODL from R-PAS). For the other (ADB), use of R-PAS routinely involves supplementally coding all shading and achromatic determinants for form dominance, employing Viglione's (2010) stepwise guidelines for making this coding distinction reliably.

          Compared to concerns about the exclusion of form dominance, our other concerns about changes from the CS to R-PAS are admittedly quibbles that may well not be of concern to many of our contemporaries, let alone future assessors. One is that the R-PAS sequence for recording codes within a response (i.e., horizontally across the Code Sequence form) makes sense from a scoring standpoint, especially for students learning Rorschach coding, but it alters the familiar syntax of expressing the structure and content of a response. The familiar shorthand left-to-right sequence (especially the following: Location, Determinants, Form Quality, Content, Special Scores) of scoring a response is deeply ingrained as part of the process of configurational/sequence analyses, which plays a central role in our interpretive approach. Our (older) brains are used to scanning code sequences in a way that sets off a complex, implicit, and explicit cognitive chain of inference-making, so we find it somewhat jarring, and decelerating of our interpretive process, to work with the R-PAS sequence. Most notably, the placing in R-PAS of Form Quality ahead of Determinants has been difficult to adjust to, but we also recognize that it comes much easier after practice and with the use of coding rubrics that require thinking through the process in a way that corresponds to R-PAS. One of us (ADB) routinely transposes R-PAS codes for each response back into CS-like syntax, creating a column on a customized Code Sequence page in a way that facilitates configurational/sequence analyses. Thus, W SR AnSy 2 – FM in the R-PAS sequence gets reconfigured back to W SR Sy FM- A 2. Note that it is in this added column that this clinician (ADB) also includes the aforementioned coding of form dominance for shading, achromatic color, and reflection determinants.

Computing Summary Scores and Indices:

          Both of the authors have great affinity for the CS Structural Summary, which we have studied and interpreted for many years. One of us (ADB) attributes his appreciation for and depth of knowledge about the Rorschach to an early training experience in which he was encouraged to routinely generate the Structural Summary by hand, a daunting proposition at the time. Prior to this, the process of coding, entering scores in a computer program, and generating and reading the structural and narrative printout felt tedious, mechanical, and confusing amid a sea of hieroglyphics and jargon. But tallying and computing each score by hand contributed to an immersion in the data and an appreciation of where the scores came from and what they meant. For example, knowing the distinction between how XA% and WDA% are computed made it possible to make meaningful, experience-near, "conditions-under-which" inferences (i.e., a low XA% alongside a normative WDA% would illuminate reality testing vulnerable under conditions in which the patient focuses on less relevant aspects of a situation [Dd's]). In the case of nearly every CS score, hand calculation led to greater understanding and mastery.

          With R-PAS, however, many of the scores and ratios involve algorithms too complicated to compute by hand and then convert to standard scores to plot (not to mention making adjustments for Complexity), so it just is not practical to do this. Diener (2013) also notes that the R-PAS Manual does not contain all of the data points necessary to convert to standard scores, even if clinicians wanted to do this by hand. Advantages of having little choice but to rely on R-PAS's computer scoring include eliminating sources of human error in computations and the program's automated translation into standard scores. To mitigate the lost training opportunity, we encourage supervisees new to R-PAS to hand-score select variables on the R-PAS Counts and Calculations page, notably those related to Location, Form Quality, Cognitive Scores, Human Movement, Chromatic Color ratios, R8910% (akin to the Afr in the CS), and Critical Contents. Although it may be overlooked within the voluminous R-PAS Manual, the authors encourage "new learners…to understand how each variable on the Profile Pages was obtained. The summary and composite scores cannot be interpreted meaningfully without knowing the elements that contribute to the final score" (Meyer et al., 2011, p. 323).

 

Interpretation:

          We believe that with R-PAS, the conversion to standard scores and their visual presentation on whisker plots makes interpretation of scores relative to norms accessible to more clinicians. In contrast, we believe that most clinicians using the CS—who do rely on computer calculation of the Structural Summary and on the associated narrative interpretive report—have less of a routine opportunity to consider and learn the meaning and interpretation of each specific score and ratio. Our opinion is that this may lead, if only for time-saving, to an over-reliance on the jargon-laden narrative printouts, canned passages of which are apt to be paraphrased, if not outright pasted, into test reports. Understanding the psychological rationale behind test variables allows for much more flexibility with interpretations and comprehensible inferences. We hope and expect that R-PAS's use of standard scores and whisker plots, as well as the test manual's (Meyer et al., 2011) clear, experience-near language describing the interpretive meaning of variables, will facilitate deeper interpretive understanding of Rorschach scores to a degree, and for more trainees and clinicians, that will outweigh the aforementioned loss of the relative ease of hand-scoring we experience with the CS.

          Another major strength of R-PAS for interpretation, compared to the CS, is its built-in methodology for considering and adjusting scores based on the overall Complexity of a protocol (i.e., taking into account such variables as number of responses, use of determinants other than pure Form, blends, and synthesized responses). Previous factor-analytic studies of Rorschach variables had revealed Complexity as the factor accounting for the most variance (Meyer et al., 2011). Being able to adjust for Complexity helps us put raw scores and ratios into greater interpretive perspective. For example, in a highly complex protocol, a lower raw score on SumH might take on even more meaning (in terms of lack of interest in people) versus in a low-complexity protocol, in which the same low raw score might be less surprising and more indicative of cognitive or motivational/self-protective factors affecting engagement in the Rorschach task. In the first instance, we know that the patient was sufficiently engaged in and capable of the task, so we are more confident that the conventional interpretive meaning of the low SumH can be accepted (this results in a Complexity Adjustment that further lowers the standard score); in the second, we are not as clear whether the low SumH is more an artifact of cognitive limitations or self-protective constriction on the task (and the Complexity Adjusted standard score would be moved up closer to the mean). Without taking into account such Complexity Adjustments, as is built into R-PAS, assessors are more vulnerable to being misled in their interpretation of a given score.

 

Concluding Comments:

         We have tried to convey our sense of the strengths and weaknesses of the CS and R-PAS, highlighting that choosing one system over another inherently involves the tradeoff of unique and clinically meaningful attributes of one for the other. We believe that, on the whole, R-PAS is better situated for moving Rorschach assessment into the future and training new assessors, if for no reason other than that changes to the system, which would be a normal part of research, are not permissible in the CS. Yet, arguably and paradoxically, we assert that at this time the CS has never been on firmer ground, particularly if assessors focus on the variables with the most empirical support (Mihura et al., 2013); make use of the most recent international norms (Meyer, Erdberg, & Shaffer, 2007); and access Viglione's (2010) guide to complement Exner's text and workbook (2003; Exner et al., 2001) to enhance coding reliability. Research and clinical application related to the CS remain vibrant (e.g., Ilonen, Salokangas, & Turku Study Group, 2016; Tibon-Czopp & Weiner, 2016).

Decisions about which Rorschach approach to use will ultimately fall to the clinician's judgment. Consider, for example, referrals involving questions about how a patient will respond to and make use of a treatment with less external structure (as defined above): under what conditions a patient might make use of psychoanalysis (Peebles-Kleiger, Horwitz, Kleiger, & Waugamann, 2006), how someone is likely to function independently in college with the loss of familiar parental and classroom routine and accountability, or whether a patient might require a residential versus outpatient treatment setting. In such instances, we are inclined to opt for the CS because its conditions of administration more closely sample a patient's psychological functioning when one is more on one's own to figure out how to respond adaptively to a complex and less certain situation.

Future modifications of the Rorschach, however, rest solely on the shoulders of R-PAS, and we conclude by offering some ideas, many involving integrating form dominance coding as it is practiced in the CS into R-PAS.

  1. We encourage research into the extent to which the inter-rater reliability of form dominance for shading, achromatic color, and reflections can be bolstered through the use of Viglione's (2010) guidelines.
  2. Assuming that form dominance can be shown to be coded reliably, we wonder about the possibility of reviving and empirically studying a variable along the lines of what Rapaport et al. (1968) referred to as "the new F% [italics original]," the percentage of responses "in which F is either the sole or principle determinant" (p. 339).² This would be an omnibus Form Dominance percentage (FDom%) that makes sense conceptually as a putative measure of ego involvement. But such a variable would be potentially meaningful only if form dominance were able to be scored on not only color but also achromatic color, shading, and reflection responses (see #1 above).
  3. Also contingent on improved inter-rater reliability of form dominance for achromatic color and shading responses (#1 above, but not necessarily having to wait for research to come in on #2), R-PAS might formalize the option and guidelines (similar to Viglione, 2010) for coding form dominance for Y, T, V, C', and r. Even if this distinction were not reflected on the Counts and Calculations or Summary Scores and Profiles pages of the R-PAS printout, including form dominance in the R-PAS Code Sequence would provide valuable interpretive information at the response, configuration, and sequence level (Bram & Peebles, 2014; Schafer, 1954; Weiner, 1998).

  4. To facilitate more fine-tuned "conditions under which" analyses and inferences, we propose the possibility that the R-PAS computerized scoring program be revised so assessors can request more specific breakdowns of the data, which we currently compute by hand (e.g., see Bram & Peebles, 2014, Table 5.1, p. 163). So, for instance, if we were interested in understanding to what extent a patient's reality testing and reasoning are impacted by more emotional stirring, we could request separate summary scores of Form Quality ratios and Cognitive Scores, broken down by chromatic versus achromatic cards. There are many such creative and meaningful ways to analyze conditional variability in a person's functioning, whether by location, human versus non-human content, or the presence or absence of aggressive or other critical contents.

  5. Although it is accepted that one of the greatest strengths of R-PAS and the CS is their empirical assessment of thought disorder using the Cognitive (Special) Scores (Mihura et al., 2013), we agree with Kleiger (2017) that there is room for further refinement. Notably, this would involve studying whether it is meaningful to code severity levels for all Cognitive Scores (not only DR, DV, INC, and FAB but also PEC and CONTAM), as in practice we note responses vary quite a bit in their level of disturbance, which is not reflected in either the scoring itself or in the WSumCog. In addition, our experience as teachers, supervisors, and colleagues tells us that the Deviant Response (DR) remains a particularly problematic code in terms of coding reliability in actual practice. This is despite our belief that the R-PAS manual (Meyer et al., 2011) has added useful coding clarifications beyond those offered by the CS. We also retain conceptual concerns (articulated by Kleiger & Peebles-Kleiger, 1993) that DR is a mixed category reflecting many types of thinking problems that, when aggregated, are difficult to make sense of interpretively. We believe that even if the current DR scoring remains, R-PAS researchers might study the over-embellished type of DRs that in the Rapaport system would be considered confabulations, which have more specific and meaningful clinical implications (Bram & Peebles, 2014; Kleiger, 2017).

Whether or not other assessment practitioners and researchers agree with our commentary, we assert that developing facility with one or both of these empirically-based systems is a necessary but not sufficient requirement for using the Rorschach clinically. Assessors must also be grounded in sound principles of inference-making and data integration, understand the patient-examiner relationship, be familiar with various theories of personality, and appreciate how to make meaningful links between test data and treatment.

Endnotes:

¹In the unlikely event that an R-PAS administration yields too few responses (defined as 15 or fewer) in the Response Phase, the examiner is to go through the cards a second time and encourage additional responses, which are then added to the original responses (i.e., so there is not a complete re-administration as is done in the CS).

²We consider most M responses, in which form is implicit, to be form dominant. In addition, for blend responses, form dominance would be tallied only if all determinants within the blend are form dominant (see Bram & Peebles, 2014, p. 157, Footnote 7). Note that there has been only one empirical study that examined Rapaport et al.'s "new F%" variable: In a very small (N=10) sample, Gardner (1951) did not find the predicted association of this variable with measures of impulsivity. The authors thank Dr. Joni Mihura for calling our attention to this research.


References:

Bram, A. D. (2010). The relevance of the Rorschach and patient-examiner relationship in treatment planning and outcome assessment. Journal of Personality Assessment, 92(2), 91-115.

Bram, A.D., & Peebles, M.J. (2014). Psychological testing that matters: Creating a road map for effective treatment. Washington, DC: American Psychological Association.

Bram, A. D., & Yalof, J. (2015). Quantifying complexity: Personality assessment and its relationship with psychoanalysis. Psychoanalytic Inquiry, 35(sup1), 74-97.

Diener, M. J. (2013, Winter). Focus on clinical practice: Review of "An Introduction to the Rorschach Performance Assessment System (R-PAS)". Independent Practitioner, 12-14.

Exner, J. E. (2003). The Rorschach: A Comprehensive System Vol. 1: Basic foundations and principles of interpretation (4th ed.). New York, NY: Wiley.

Exner, J. E., Colligan, S. C., Hillman, L. B., Metts, A. S., Ritzler, B. A., Rogers, K. T., Sciara, A. D., & Viglione, D. J. (2001). A Rorschach workbook for the Comprehensive System (5th ed.). Asheville, NC: Rorschach Workshops.

Gardner, R. W. (1951). Impulsivity as indicated by Rorschach test factors. Journal of Consulting Psychology, 15(6), 464-468.

Hopwood, C. J., & Bornstein, R. F. (Eds.). (2014). Multimethod clinical assessment. New York, NY: Guilford Press.

Ilonen, T., Salokangas, R. K. R., & Turku Study Group (2017). The Rorschach Coping Deficit Index as an indicator of neurocognitive dysfunction. Rorschachiana, 37(1), 28-40.

Kleiger, J. H. (1997). Rorschach shading responses: From a printer's error to an integrated psychoanalytic paradigm. Journal of Personality Assessment, 69, 342-364.

Kleiger, J. H. (2017). Rorschach assessment of psychotic phenomena. New York, NY: Routledge.

Kleiger, J., & Peebles-Kleiger, M. J. (1993). Toward a conceptual understanding of the Deviant Response in the Comprehensive Rorschach System. Journal of Personality Assessment, 60, 74-90.

Kotov, R., Krueger, R. F., Watson, D., Achenbach, T. M., Althoff, R. R., Bagby, R.M.,…Zimmerman, M. (2017). The Hierarchical Taxonomy of Psychopathology (HiTOP): A dimensional alternative to traditional nosologies. Journal of Abnormal Psychology, 126(4), 454-477.

Meyer, G. J., Erdberg, P., & Shaffer, T. W. (2007). Toward international normative reference data for the Comprehensive System. Journal of Personality Assessment, 89(S1), S201-S216.

Meyer, G. J., Viglione, D. J., Mihura, J. L., Erard, R. E., & Erdberg, P. (2011). Rorschach Performance Assessment System: Administration, coding, interpretation, and technical manual. Toledo, OH: Rorschach Performance Assessment System, L.L.C.

Mihura, J. L., Meyer, G. J., & Viglione, D. J. (2017, March 9). R-PAS response to Yalof's essay "CS or R-PAS: The Travails of a Rorschach Ambitendent". R-PAS Newsletter, 22.

Mihura, J. L., Meyer, G. J., Dumitrascu, N., & Bombel, G. (2013). The validity of individual Rorschach variables: Systematic reviews and meta-analyses of the Comprehensive System. Psychological Bulletin, 139(3), 548-605.

Peebles-Kleiger, M. J., Horwitz, L., Kleiger, J. H., & Waugaman, R. M. (2006). Psychological testing and analyzability: Breathing new life into an old issue. Psychoanalytic Psychology, 23, 504-526.

Piotrowski, C. (2015). Clinical instruction on projective techniques in the USA: A review of academic training settings 1995-2014. Journal of Projective Psychology & Mental Health, 22, 83-92.

Piotrowski, C. (2017). The linchpin on the future of projective techniques: The precarious status of personality assessment in the (overcrowded) professional psychology curriculum. Journal of Projective Psychology and Mental Health, 24, 71-73.

Rapaport, D., Gill, M., & Schafer, R. (1968). Diagnostic psychological testing (rev. ed.). New York, NY: International Universities Press.

Schafer, R. (1954). Psychoanalytic interpretation of Rorschach testing. New York, NY: Grune & Stratton.

Tibon-Czopp, S., & Weiner, I. B. (2016). Rorschach assessment of adolescents: Theory, research, and practice. New York, NY: Springer.

Viglione, D. J. (2010). Rorschach coding solutions: A reference guide for the Comprehensive System (2nd ed.). San Diego, CA: Author.

Waugh, M. H. (2016). Clinical pearls in psychological assessment: Part III: Paradigms in contemporary personality assessment. SPA Exchange, 28(1), 13-16.

Wechsler, D. (2008). Wechsler Adult Intelligence Scale- Fourth Edition (WAIS-IV). San Antonio, TX: The Psychological Corporation.

Weiner, I. B. (1998). Principles of Rorschach interpretation. Mahwah, NJ: Lawrence Erlbaum.

Weiner, I. B., & Greene, R. (Eds.). (2008). Handbook of personality assessment. Hoboken, NJ: Wiley & Sons.

Wright, C. V., Beattie, S. G., Galper, D. I., Church, A. S., Bufka, L. F., Brabender, V. M., & Smith, B. L. (2017). Assessment practices of professional psychologists: Results of a national survey. Professional Psychology: Research and Practice, 48(2), 73-78.

Yalof, J. (2017, March 9). CS or R-PAS: The travails of a Rorschach ambitendent. R-PAS Newsletter, 22.

Yalof, J., & Rosenstein, D. (2014). Psychoanalytic interpretation of superego functioning following CS re-administration procedures: Case illustration. Journal of Personality Assessment, 96(2), 192-203.
