Tuesday, October 9, 2012

Lifting the Veil of Morality: Choice Blindness and Attitude Reversals on a Self-Transforming Survey

I have removed the references and the images. Visit the PLoS ONE page to view the article references and images.

Abstract

Every day, thousands of polls, surveys, and rating scales are employed to elicit the attitudes of humankind. Given the ubiquitous use of these instruments, it seems we ought to have firm answers to what is measured by them, but unfortunately we do not. To help remedy this situation, we present a novel approach to investigating the nature of attitudes. We created a self-transforming paper survey of moral opinions, covering both foundational principles and current dilemmas hotly debated in the media. This survey used a magic trick to expose participants to a reversal of their previously stated attitudes, allowing us to record whether they were prepared to endorse and argue for the opposite of the view they had stated only moments ago. The results showed that the majority of the reversals remained undetected, and a full 69% of the participants failed to detect at least one of two changes. In addition, participants often constructed coherent and unequivocal arguments supporting the opposite of their original position. These results suggest a dramatic potential for flexibility in our moral attitudes, and indicate a clear role for self-attribution and post-hoc rationalization in attitude formation and change.

Introduction

Every day, thousands of opinion polls, corporate surveys, consumer panels, government feedback forms, and psychological rating scales are employed to elicit the attitudes of humankind. But what is it that is being measured with these instruments? Given the ubiquitous use of survey and polling instruments, it seems we ought to have firm answers to this fundamental question, but unfortunately we do not.

The typical approach to the issue is to focus on the predictive utility of the statements people make (irrespective of whether we call them attitudes, opinions, preferences, or evaluations). Hence, psychologists have long been troubled by the fact that what we say often does not predict what we do, and have tried different methodological twists to close the gap between attitudes and behavior. Even less optimistically, in the debate over stated vs. revealed preferences, economists have often made wholesale dismissals of stated preferences in favor of market decisions.

Ideally, what researchers would like to have is a method that measures the propensity for consistency or change at the very moment of the poll (something that allows us to pre-emptively jump the attitude-behavior gap, so to speak). The standard way to approximate this goal is to complement a survey with meta-attitudinal judgments, such as perceived certainty or importance. These tools add to our predictive edge, but meta-attitudinal judgments have a tendency to fractionate into a grab bag of different factors and processes when closely scrutinized. In short, asking people to introspect and estimate their own propensity for change often assumes more self-awareness than is warranted by the evidence. Another possibility is to add some form of implicit measure, typically based on response latency. Again, this is helpful, but there is only so much information you can glean from a brief 100 ms reaction-time window.

Yet, why do we have to conceive of attitude measurements primarily as reports, and not a form of interactive test or experiment? What would happen if we engaged more directly with the attitudes at hand, perhaps even challenged them? Using the phenomenon of Choice Blindness (CB) as a wedge, we have been able to separate the decisions of participants and the outcomes they are presented with. In aesthetic, gustatory and olfactory choices this has previously allowed us to demonstrate that participants often fail to notice mismatches between what they prefer and what they actually get (hence, being blind to the outcome of their choice), while nevertheless being prepared to offer introspective reasons for why they chose the way they did. But what about the backbone of attitude research, all those surveys, panels and polls? If CB held across this domain it would create significant strain for our intuitive models of attitudes (in what sense can attitudes be real if people moments later fail to notice they have been reversed?), and provide us with a novel source for understanding prediction, persuasion, and attitude change (how will participants act after they have endorsed the opposite of what they just said?).

To investigate these issues, we created a self-transforming paper questionnaire on moral attitudes using a methodology adapted from stage magic (see figure 1). The participants were given a survey on either foundational moral principles or moral issues hotly debated in the current media, and their task was to rate on a 9-point bidirectional scale to what extent they agreed or disagreed with each statement. After the participants had completed the questionnaire, we asked them to read aloud some of their answers from the first page, and to explain their ratings to us. However, unbeknownst to the participants, two of the statements they read aloud at this stage were actually the reverse of the statements they had originally rated – i.e. if the original formulation stated that “large scale governmental surveillance of e-mail and Internet traffic ought to be forbidden as a means to combat international crime and terrorism”, it was now changed to “large scale governmental surveillance of e-mail and Internet traffic ought to be permitted as a means to combat international crime and terrorism”. As the rating was held constant but the direction of the statement was reversed, the participants’ original opinion was reversed as a consequence. Thus, this technique allowed us to expose participants to altered feedback about their previously stated attitude, and to create a situation in which we could record whether they were prepared to endorse and argue for the opposite moral view of what they had stated only moments ago.

Methods

Participants

In total, 160 volunteers (100 female) participated in the study. Ages ranged from 17 to 69 years (M = 29.5, SD = 10.8). We recruited the participants as they were walking through a park and asked them if they wanted to fill in a short survey about moral questions. All participants gave written informed consent to participate in the study, and all but 18 participants also agreed to have the interaction audio-recorded.

Ethics Statement

The study was approved by the Lund University Ethics board, D.nr. 2008–2435.

Procedure and Materials

We presented the participants with a questionnaire containing 12 moral principles (condition one, N = 81) or statements describing 12 current moral issues (condition two, N = 79), and their task was to rate to what extent they agreed or disagreed with each statement on a 9-point bidirectional scale from 1 “completely disagree” to 9 “completely agree” (the midpoint of the scale allowed participants to be neutral or undecided about the issues). In the first condition, we used fundamental moral principles adapted from Forsyth’s (1980) Ethics Position Questionnaire, such as “It is more important for a society to promote the welfare of the citizens than to protect their personal integrity”. In the second condition, we used concrete moral statements instantiating the principles from the first condition, e.g. “Large scale governmental surveillance of e-mail and Internet traffic ought to be permitted as a means to combat international crime and terrorism.” (see table 1). The statements in condition two were picked to represent salient and important current dilemmas from Swedish media and societal debate at the time of the study, thus making it very likely that participants would have been exposed to prior information about the issues they were asked to express their attitudes on. In this way, we could create a contrast between foundational principles, what many suppose is the core of our moral beings, and the everyday manifestation of these principles in current issues, and investigate whether levels of abstraction would influence detection of the manipulations. Intuitively, we would expect abstract principles to allow for more exceptions and qualifications (a feature of abstractness as such), thus engendering lower levels of detection in this condition.
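To make the mechanics of the manipulation concrete, the following minimal sketch (ours, not from the paper; the function name is illustrative) shows why holding the rating constant while reversing the statement's direction mirrors the response around the neutral midpoint of the 9-point scale:

# Sketch only: the reversal's effect on the 1-9 bidirectional scale.
# Keeping the mark in place while flipping the statement's direction is
# equivalent to mirroring the rating around the neutral midpoint (5).

def implied_rating_on_original(rating_on_reversed: int) -> int:
    """Map a rating displayed next to the reversed statement back onto the
    original statement's scale: 1 <-> 9, 2 <-> 8, ..., 5 stays 5."""
    assert 1 <= rating_on_reversed <= 9
    return 10 - rating_on_reversed

# Example: a participant marks 8 ("agree") for "...ought to be forbidden".
# After the slip reveals "...ought to be permitted" with the mark still at 8,
# the displayed response implies a 2 on the statement they actually rated.
print(implied_rating_on_original(8))  # -> 2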

In addition, we asked the participants to indicate how strong their moral opinions in general were, and if they were politically active or not, as well as their age and gender.

The questionnaire was attached to a clipboard, with the questions distributed over two pages. After completing the survey, we asked the participants to read aloud and discuss three of their ratings from the first page (randomly taken from a limited subset of the principles or statements), and also asked if it would be possible to audio-record this discussion. If the participants did not want to be recorded, the experimenter took notes and made the necessary classifications immediately after the trial was completed. As previously explained, at this point two of the statements the participants read aloud had been reversed compared to the statements they had originally rated. When the participants had read a statement, we interjected and summarized their attitude as a question, saying “so you don’t agree that [statement]?” or “so you do agree that [statement]?”, to avoid any misunderstanding of what the rating implied. The reversal was achieved by attaching a lightly glued paper slip to the first page of the questionnaire, containing the original version of the statements. The layout and shape of the attached slip allowed it to blend in perfectly with the background sheet. When the participants folded the first page over the back of the clipboard, the paper slip stuck to an even stickier patch on the backside of the questionnaire, thus revealing a new set of statements on the first page (see figure 1).

Measures

All manipulated trials were categorized as either corrected or accepted. In the trials categorized as corrected, the participants either noticed the change immediately after reading the manipulated statement (spontaneous detection), or claimed in the debriefing session to have felt something to be wrong when reading the manipulated sentence (retrospective correction). In detail, we classified any trial as spontaneously detected if the participants showed any signs of having detected the change after reading the manipulated statement, e.g. if they corrected or reversed their rating to match their original position, or if they thought they must have misunderstood the question the first time they read it, etc. Most of the participants who immediately detected the manipulation also corrected the rating by reversing the position on the scale, i.e. had they rated their agreement as 1 (completely disagree), they changed it to 9 (completely agree) (although 10% of these trials were changed to a number other than the exact opposite). After the experiment, the participants were fully debriefed about the true purpose of the experiment. In this interview session, we presented a series of increasingly specific questions about the experiment. Firstly, we asked the participants in general what they thought about the questionnaire. Secondly, we asked if they had experienced anything as being strange or odd with the questionnaire. Finally, we showed them exactly how we had reversed some of the statements the second time, and asked whether they had suspected anything like this while responding. If at any point during this process they indicated they had felt something to be wrong when reading and responding to the manipulated statements, we asked them to point out which statements had been altered, and categorized these trials as retrospectively corrected. Consequently, in the trials categorized as accepted, there were no signs that the participants had noticed that the opinions they argued for after the manipulation were the reversals of what they originally intended.
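The classification described above can be summarized in a short sketch; the type and field names below are our own illustrative labels, not a coding scheme taken from the paper:

from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    SPONTANEOUS_DETECTION = auto()     # corrected while reading the reversed statement
    RETROSPECTIVE_CORRECTION = auto()  # suspicion reported during debriefing
    ACCEPTED = auto()                  # no sign of noticing the reversal

@dataclass
class ManipulatedTrial:
    noticed_while_reading: bool     # e.g. reversed the rating on the spot
    reported_suspicion_later: bool  # pointed out the altered statement when debriefed

def classify(trial: ManipulatedTrial) -> Outcome:
    if trial.noticed_while_reading:
        return Outcome.SPONTANEOUS_DETECTION
    if trial.reported_suspicion_later:
        return Outcome.RETROSPECTIVE_CORRECTION
    return Outcome.ACCEPTED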

To obtain the most accurate estimate of the number of detected trials, we tried to create an experimental context in which participants would feel as little reluctance or awkwardness as possible about correcting a manipulation. To this end, we stressed from the outset of the study that there were no time constraints for answering, that we had no moral or political agenda, and that we would not judge or argue with their opinions in any way. Furthermore, the magic trick made the manipulation as such radically nontransparent, and thus it was nearly impossible for the participants to deduce the underlying intent of the study and adapt their answers to please the experimenters. At the same time, the design made it very easy and natural to correct any errors, as everyone is familiar with occasionally misreading or marking the wrong box on a form or survey. Similarly, in the debriefing session, our aim was to provide a sensitive and inclusive estimate of corrections, by giving the participants multiple opportunities, with increasingly stronger cues, to report any suspicions. If anything, we contend, the incentives of the final debriefing question encourage over-reporting of detections by those participants who do not want to admit to having accepted and argued for the reverse of their original rating. Our experience from prior studies of the CB phenomenon is that the categories of detections and non-detections are sharply divided by the level of surprise and curiosity experienced by the participants in the debriefing sessions. It seems highly unlikely to us that participants systematically withhold their feelings of detection while at the same time acting as if they are genuinely surprised and curious about our explanation of the manipulations.

As the scale the participants used when rating their agreement with the moral principles or statements was bidirectional, from 1 “completely disagree” to 9 “completely agree”, the midpoint (5) of the scale allowed participants to be neutral or undecided about the issues. As a consequence, in trials where the participants rated themselves as neutral, the manipulated reversal of the principle or statement did not affect the participants’ stated opinion (i.e. they were still neutral). All such trials were removed from the analysis (36 of 320 manipulated trials). An additional 13 manipulated trials were removed due to technical failures in the manipulation process.
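As a sketch of this exclusion rule (with record fields of our own naming, since the paper describes no data format), trials are dropped when the original rating sat at the neutral midpoint or when the slip mechanism failed; on the reported counts this leaves 320 - 36 - 13 = 271 manipulated trials for analysis:

NEUTRAL = 5  # midpoint of the 1-9 bidirectional scale

def usable_manipulated_trials(trials):
    """Keep only trials where the reversal could actually change the stated
    opinion (non-neutral original rating) and where the slip worked as
    intended. The keys 'original_rating' and 'manipulation_failed' are our
    own names, not the paper's."""
    return [t for t in trials
            if t["original_rating"] != NEUTRAL
            and not t["manipulation_failed"]]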

All the recordings of the participants’ argumentation were transcribed using the CHAT format, developed for the CLAN software. The direction and strength of the argumentation were estimated by three independent raters, and to avoid any bias in the classification of the verbal reports, all statements made by the experimenters, as well as any explicit mention of the direction of the rating, were removed from the transcripts.

Results

Corrected Trials

There were no differences in correction or acceptance rate when comparing the individual principles or statements in each condition. The result of each condition is therefore presented as a combined measure for the principles or statements being manipulated.

The majority of the manipulated trials remained undetected. In condition one, about one third of the trials were concurrently detected, and 8% of the trials were claimed to have been detected afterwards. In condition two, the concurrent detection rate was close to 50%, but very few participants claimed afterwards to have felt that something was wrong during the experiment (see table 2). Framing correction in terms of individuals instead of trials reveals that a remarkable 69% of all the participants accepted at least one of the two altered statement/rating relations. As hypothesized, the magic trick behind the self-transforming survey ensured that virtually no participants noticed the manipulation as such. Instead, detections only took the form of self-corrections (“I must have misread”, “I must have marked the wrong end”, etc.).
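Since the trial-level and individual-level figures answer different questions, here is a small sketch (with assumed record fields, not the paper's code) of how an individual-level figure like the 69% is computed from up to two manipulated trials per participant:

from collections import defaultdict

def fraction_with_accepted_reversal(trials):
    """trials: iterable of (participant_id, accepted) pairs, with up to two
    manipulated trials per participant. Returns the share of participants
    who accepted at least one reversal (69% in the study)."""
    accepted_any = defaultdict(bool)
    for pid, accepted in trials:
        accepted_any[pid] |= accepted
    return sum(accepted_any.values()) / len(accepted_any)

# Hypothetical data for four participants:
trials = [(1, True), (1, False), (2, False), (2, False),
          (3, True), (3, True), (4, False), (4, True)]
print(fraction_with_accepted_reversal(trials))  # -> 0.75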
_________________________
References:

Hall, Lars, Petter Johansson, and Thomas Strandberg. 2012. “Lifting the Veil of Morality: Choice Blindness and Attitude Reversals on a Self-Transforming Survey.” PLoS ONE. Posted: September 19, 2012. Available online: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0045457
