Apparently new blog posts are now dependent on me appearing on Dale’s “Real Seeker” podcast. I joined Dale this morning to interview Dr. Bruce Greyson about his Near-Death Experience (NDE) research and his upcoming book. I thought it went well and appreciated the tone of the show and content that Dr. Greyson shared. There is no doubt that people are having these experiences and are genuinely transformed by them, but this hasn’t yet convinced me that there’s an afterlife – though I’m certainly interested in learning more and will probably read the book when it comes out. I did have one unasked question that I left as a comment on Dale’s post, so hopefully Dr. Greyson will be kind enough to share his thoughts on that.
I’ll also take advantage of this rare blog post to pitch a relatively new genre of podcasts that I have discovered. When one dives into the God debate online, it doesn’t take long to discover all the ridicule, distrust, and derogatory memes coming from both sides of the fence. So it has been refreshing to see what appears to be a rise of content that espouses a more reconciliatory perspective on the loss of faith and which embraces the positive aspects of the religious life. If that sounds appealing to you, I recommend you check out the following podcasts:
The Graceful Atheist – David usually interviews somebody about their deconversion experience, or about the loss of faith in general. He is always very respectful toward believers and maintains a high degree of epistemic humility.
Humanize Me – Bart Campolo – exvangelical and son of Tony Campolo – hosts a podcast about “building great relationships, cultivating wonder, and making things better for other people”.
When Belief Dies – Explores doubt and deconversion from a real-time, personal perspective.
Reenchantment – “Daniel Lev Shkolnik is a Humanist looking for deeper, more meaningful ways to live as an atheist. Each week, he dives into ancient wisdom traditions and modern psychology to find fresh ways of making sense of our place in the universe.”
As much as I would like to publish content here more frequently I don’t see that ramping up any time soon. There is a lot of competition for my time in the foreseeable future, but I’ll try to put up something every once in a while to keep things from getting too stale. Let me know what you thought of the NDE show, and if you have any other suggestions that would fit into the podcast genre noted above.
A while back I wrote a post titled “What is a moral claim” that did not do a good job of getting at the heart of the topic I was actually aiming to address. So I wanted to recalibrate and go beyond asking “what is a moral claim” by offering an answer. That has turned into a rather thoroughgoing presentation of what I now consider to be the moral ontology which is most likely true. Sorry for the length, but I hope it’s worth the effort.
First, some moral epistemology
I am of the opinion that epistemology should inform ontology (and vice versa). In other words, understanding how it is that we know about something should play a role in defining what we think that something is. Likewise, our understanding of what something is should play a role in defining how it is that we know about it (I covered this more generally in My Ontology – Part 1). I have found that the discussion of morality, particularly in the God debate, often focuses on moral ontology – we like to talk about what morality is without giving too much thought to the epistemology. By asking “What is a moral claim?” in that post last year I was aiming to explore how moral epistemology might inform our moral ontology – contra William Lane Craig, who suggests we should just posit our desired moral ontology and then define our epistemology as a follow-on.
My assertion in that original post was that we can recognize moral claims, and distinguish them from other claims, and that this tells us something about the nature of morality. As was noted by several commenters, this supports nothing more than the notion that morality is at minimum a distinct mental concept. However, I was aiming for something more…
The moral referent
In one of the comments on that original post Dave compared morality to beauty, to which I replied by noting that:
“This is the question of the referent. For beauty, we can generally link the shared concept to ‘the way we feel about certain sensory perceptions’, like sunsets, music, etc…. There is a class of experiences which trigger a similar response in us and so we call those things beautiful.”
This gets to the heart of the matter. As with beauty, there must be some referent which shapes the concept of morality and, as with beauty, it appears that the best we can do is to introspectively trace this to a particular feeling. Just as the concept of “tree” is informed purely by the phenomenal experience of trees (and not through some special metaphysical access to the abstract ideal tree) the concept of beauty is informed by the phenomenal experience of conditions which trigger a particular feeling. Isn’t it most reasonable – perhaps even obvious – that morality is no different?
But there are trees out there in the real world which are separate from our phenomenal experience of them. What is the corresponding reality which feeds into the concept of morality?
When I presented my ontology, I identified universals as mental concepts which are constructed as generalizations of our experience of particulars. The particulars which inform a universal need not be mind-independent, objective entities. Despite the connotations of our language (e.g., “that’s a beautiful sunset”), most of us are not inclined to actually assign beauty as an intrinsic property of the object of our perception, but rather accept it as a subjective component of our experience; beauty is in the eye of the beholder. Likewise, we’re all familiar with the concept of sadness, not because it exists ‘out there’ in some sense, but because we are all human and have been able to relate a similar internal state to a common idea which we can communicate. My proposal is that morality is like beauty and sadness. Morality is informed by my phenomenal experience of the feelings and intuitions which arise under certain circumstances.
I take it that the view I have presented for sadness and beauty is fairly uncontroversial, but for some reason morality is a different beast. We struggle against the prospect that the feelings and intuitions which have informed our conception of morality might be wholly subjective – it’s uncomfortable to suppose that there isn’t an objective standard against which we can hold others accountable, one we can point to and say “No! You’re wrong!” How do we account for this relatively unique property of the moral experience?
The social theory of moral origins
I was long hesitant to adopt the standard naturalist explanation for the origin of morality as an evolutionary product of our social heritage, but I have since come to accept that the evolutionary development of a moral faculty driven by social selection pressures is quite plausible. In the following sections I attempt to summarize the key evidence and reasoning behind this conclusion.
Prosociality in non-human primates
If morality is an evolutionary product then there should be traces of it in other species and, in fact, morally relevant sociality is a characteristic of our closest evolutionary relatives (and beyond). This is perhaps best described in just about anything that Frans de Waal has published; his TED talk offers a quick and accessible overview.
Social factors strongly influence our morality
If a social heritage was a key element in the development of our moral intuitions then we would expect to see that social forces have a continued impact on the expression of that morality. This appears to be the case:
Social Awareness: A multitude of studies have demonstrated that even subtle awareness of “watchers” impacts our moral behavior. This may reflect a biological predisposition, but when we allow that our moral sense is in part a development that arises through our life experience, the social dimension of that development also corresponds nicely with this data point.
Social Compliance: Setting aside survival instincts, ‘peer pressure’ is perhaps the most capable mechanism for getting us to act in opposition to our moral sense. The Milgram Experiment, the Stanford Prison Experiment, Nazi Germany and, more recently, Derren Brown’s “Push” program serve as some of the more extreme negative examples. However, this applies equally in reverse, where our tendency to realize an arduous moral good is substantially bolstered by encouragement from peers and anticipation of “other-praising” emotions.
Social Comprehension: Our moral intuitions tend to calibrate moral culpability in accordance with the moral agent’s capacities and intentions. This feature depends on an interpersonal judgment built on a theory of mind, such as would be inherent in a socially developed morality where other agents inform that development.
In the end, it is clear that the social environment is a primary factor in our moral behavior even when the social consequences of our behavior lie well beyond our perception. This is consistent with the theory that social pressures have guided the development of our moral sense.
The rider and the elephant
The long-standing traditions of moral philosophy and ethics treat moral judgment as primarily a rational endeavor, but this appears to be a flawed assumption. Jonathan Haidt has famously compared our moral sense to a rider on an elephant – the rider being our reasoning process and the elephant being our emotionally driven intuitions. There is an extensive and constantly growing body of literature on this topic, so for a deeper dive into the role of emotion in morality I will simply refer to the writings of Joshua Greene and Jesse Prinz in addition to those of Haidt.
Regardless, the proposition that our moral sense is predominantly emotional only lends support to the social theory of moral origins when we consider empathy and the explanations on offer for the causal link between morality and emotions. Claus Lamm is one of the more prolific researchers of empathy and is a cautious voice at a time when many are hailing mirror neurons and empathy as the underpinnings of our moral intuitions. Despite this caution he affirms that “there is compelling evidence that similar neural structures are activated when empathizing with someone and when directly experiencing the emotion one is empathizing with” (here) and that “There is some support for the above-mentioned role of empathy in morality, although the direct link between empathy and morality remains rather unclear and requires further investigation” (here).
I hope to heed Lamm’s concerns but I also cannot help but step back to view the big picture and see a tidy set of links wherein our moral intuitions are largely dictated by an emotional elephant whose course can be directed by the neurological capacity to take on the perspective of others – a definitively social faculty. The cohesive picture this paints is compelling and when one considers the implications for moral origins, the social theory seems a natural fit.
The last piece of evidence I wish to present for the social theory of moral origins is the very concern which instigated this discussion – the apparently innate drive toward moral agreement. The desire to hold others and ourselves accountable to a particular moral standard has led many to conclude that morality itself is objective (in fact, this is the only non-pragmatic reason I am aware of for the claim of objectivity) but this phenomenon is also explained if our moral sense was developed through social pressures. To say that selection occurred through social pressures is to imply that there is a social dynamic to the evolutionary pathway. This, in turn, requires that there be some sort of reproductive advantage to the selected pro-social tendencies. However, a lone altruist among a band of free-riders is unlikely to realize any advantage. The advantages which arise from prosocial behavior are then also dependent on reciprocity and cooperation. This means that the development of prosocial behavior is most readily accomplished in coordination with the development of proclivities which favor agreement and reject disagreement with respect to those behaviors. The end result is not only a tendency toward prosocial behavior, but a tendency toward favoring agreement on those behaviors.
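The free-rider logic above can be made concrete with a toy payoff model (my own illustrative sketch, with assumed numbers – not anything from the literature): helping pays off for the helper only when interactions are assortative, i.e., when reciprocity makes cooperators more likely to meet other cooperators than chance alone would allow.

```python
# Toy model: helping confers benefit b on the partner at cost c to the
# helper (b > c). With assortment r, an individual meets its own type
# with probability r, otherwise a random member of the population.
# All numbers are illustrative assumptions.

b, c = 3.0, 1.0   # benefit to recipient, cost to helper
p = 0.1           # fraction of cooperators in the population

def payoff(cooperator, r):
    """Expected payoff for one interaction under assortment r."""
    if cooperator:
        meets_coop = r + (1 - r) * p   # own type, else random draw
    else:
        meets_coop = (1 - r) * p       # defectors only meet cooperators by chance
    received = b * meets_coop          # benefit when the partner cooperates
    paid = c if cooperator else 0.0    # cooperators always pay the cost
    return received - paid

for r in (0.0, 0.5):
    print(f"assortment={r}: cooperator={payoff(True, r):.2f}, "
          f"defector={payoff(False, r):.2f}")
```

With no assortment (r = 0) the lone cooperator among free-riders comes out behind; with enough assortment the cooperators pull ahead, which is the sense in which prosocial tendencies and mechanisms favoring agreement would need to develop together.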
Some will object here and suggest that our intuitions regarding the objectivity of morality are more like the intuitions we have regarding the veracity of a proposition (e.g., I am sitting on a chair) than they are like a drive toward agreement with others. I’m not sure this is a proper assessment, but I do agree that on the spectrum of intuitions about an entity’s objectivity, our moral intuitions are generally weighted closer toward the ‘objective’ end compared to more broadly subjective claims like beauty, ice cream flavors, etc… This is perhaps most evident in the language we tend to employ in moral discourse, where objectivity is often implied (though not always – and such language is certainly also common in other domains that are generally regarded to be subjective). That said, I’ll offer two thoughts in response:
As noted above, morality is deeply entangled with emotion. The majority of other subjectively informed claims do not carry the same emotional weight, and this is a significant component of the perceived difference and the drive toward absolutes. That is, the strength of the underlying emotions compels us toward an unwavering perspective. There may even be some degree of a subconscious post-hoc rationalization informing an intuition of moral objectivity. The emotional elephant leads the way and the rider can only make sense of the world by rationalizing the course it’s taking as if that is simply reflecting the objective facts about the world. Neuropsychology is replete with examples of how our cognition engages in this kind of post-hoc rationalization and confabulation.
Though speculative, it is not unreasonable to suggest that the evolution of our moral sense may have incorporated the same faculties which bear on our sense of objective veracity if this improves the effectiveness of morality as a motivating factor. Despite the protests of anti-realists, the data does seem to indicate that moral realism is more conducive to moral compliance than is anti-realism (see one, two, three). This makes intuitive sense – if we think that our moral judgments do not have any subjective wiggle room and we can thus be held objectively accountable to those judgments, then we are more motivated to align our behavior with those judgments. So if our moral sense evolved to incorporate some of the same cognitive machinery that helps us judge the veracity of non-moral propositions then the moral sense would be more effective in eliciting the advantage of moral behavior. The net result would be the subjective perception, to some degree, that our moral judgments are in fact objective. Subjective preferences like beauty wouldn’t carry the same selective advantage and so wouldn’t bear the same character in this regard.
Social origins objection #1: Widespread non-social moral intuitions
So what about those pervasive moral claims which are devoid of social impact? For example, why have so many cultures moralized purity and why has disgust been shown to influence our moral judgments? How does the social theory of moral origins explain this?
The first point to make on this topic is to note that whereas some moral claims are devoid of a direct social impact, they are typically not insulated from social feedback. In particular, the anticipation of shame is a significant factor in motivating against non-social behaviors which have been moralized.
Second, there may very well be an indirect social impact. Where purity or disgust is linked to a non-social moralized behavior, we can note that an inadequate avoidance of pathogens is detrimental not only to the individual but also to that person’s social circle. The germ theory of disease converts a seemingly non-social disgust instinct into a socially relevant behavior, such that the social judgment that accompanies moralization may in fact be efficacious.
Lastly, if our moral sense is largely an adaptive product of evolution then the evolutionary path is predicated on the behavior which corresponds with our moral sense (because the feelings themselves offer no selective advantage apart from behavior). Evolution favors efficiency, so it is likely that the neurological systems which serve to guide our behavior in general (through the feelings which motivate and inhibit) are also involved in our moral sense, such that there is some level of commonality in our interoception of the morally relevant motivations and the motivations which influence other aspects of our well-being. This would imply that there isn’t a ‘moral’ category that cleanly distinguishes moral interoception from other interoception. So even if the majority of the intuitions that we have categorized as ‘moral’ carry a social relation, it is reasonable that other, non-social intuitions may seem to fit that category as well.
Social origins objection #2: Culturally constructed morality
Many anthropologists have argued that morality is memetic, not genetic. That is, they suggest that the moral sense is learned and acquired from one’s environment – specifically, one’s cultural influences. I think there’s some truth to this perspective, but I don’t see that it is mutually exclusive with an evolutionary explanation. It seems quite evident that cultural influences serve to inform our moral intuitions but this alone does not explain the aforementioned ‘moral referent’, that distinct component of our interoception. I do not doubt that one’s moral compass is informed by their environment but it’s the compass itself that is primarily of interest here, and culture does not explain its existence in the first place.
This is an important concept when it comes to the discussion of moral progress. If morality is defined to be nothing more than a cultural construction then the realist is correct to suggest that there is no such thing as progress. However, if there is a biological basis for the moral sense then progress can be assessed relative to that faculty. Even if there is variability across persons, there is still a common origin that fosters some level of agreement at a fundamental level. Here anthropology re-enters the picture to support the notion of an innate moral nature, as elucidated in the work of Donald Brown and Richard Shweder. This is not to suppose that we can necessarily determine right and wrong answers to individual moral claims by reference to that nature alone, but rather to say that there is a general bent which our species shares.
What is a moral claim?
This was the question I asked long ago and hoped to also answer here. In case the preceding discussion has not made it clear, I am arguing that morality is the concept which refers to a particular set of feelings and intuitions that arise as a result of predispositions which developed in our species through social pressures and are shaped and influenced by our development, experiences and reasoning. As such, a moral claim is simply a claim which implicitly or explicitly refers to those feelings and intuitions (or their absence) as if they were properties of an action, person, object or event. This perspective entails a particular moral ontology, namely …
So it seems that in adopting this view I have officially joined the moral relativist camp. I am quite comfortable with the epistemology and ontology this entails (as outlined above) but these are not informing my conclusion in isolation. Other considerations include:
Dependence on biology: Though I have already touched on this to some degree, there is much more that could be said. Neuroscience has increasingly demonstrated how variations in our neurology bear on our morally relevant judgments and behavior, as most famously illustrated by the classic cases of Phineas Gage and Charles Whitman (also see Patricia Churchland’s ‘Braintrust’ and, more briefly, David Eagleman’s article in the Atlantic for overviews). While this state of affairs is not logically inconsistent with moral realism, it fits more parsimoniously with a relativistic ontology.
Moral diversity: In accordance with the biological dependence noted above, we observe that these variations manifest themselves in widespread moral disagreement. Though it is true that there are many claims where moral agreement abounds, and even some fundamentals that are nearly universal, it is also the case that moral disagreement is more rampant than is found in objectively arbitrated claims. That is, we are more likely to disagree about a moral claim than to disagree about a claim that is based on empirical observations. As before, though this condition is not incompatible with moral realism, it highlights a divergence from the ontologies we posit for most of the entities that we identify as objective and so it is in that sense unexpected. Conversely, such diversity is entirely expected under a relativistic framework.
Epistemology and ontology aside, relativistic normative ethics is admittedly troubling. Not because I am forced to subscribe to Dostoyevsky’s “all things are permitted” – the shallow characterization of relativism which completely abandons both normative ethics and moral discourse and is often parroted by theistic apologists. No, the trouble is that normative ethics are inherently social and even when we employ frameworks which seek to satisfy our moral intuitions about fairness and reciprocity, such as social contract theory, we are unable to realize the ideal. The application of a normative ethic at the social level will require some level of subjugation wherever there is genuine moral disagreement. Perhaps this is simply an inescapable tension which is intrinsic to our moral sense; a consequence of the unavoidable competition between the benefits of both freedom and cooperation. Just as the realists must concede the inability to objectively arbitrate the moral truths to which they subscribe, perhaps the relativist must concede that the implementation of normative ethics cannot escape the morally distasteful act of imposition. Thrasymachus made a similar observation 2500 years ago and as far as I can tell we’re no closer to a solution. It’s worth continued discussion, but I have grown increasingly skeptical that it will ever be resolved.
Moral relativism also does not mean that we surrender our ambitions of moral progress. There is a human nature and even pervasive moral intuitions are sometimes inconsistent, or in conflict with our nature, or uninformed or misinformed by errant beliefs. Moral discourse and experience can elicit change so that our moral judgments are more accurately aligned with reality and with our inherent nature. Relativism does not mean that we accept all moral claims as equally true. It does not entail pacifism, complacency or anarchy. It does not ask us to ignore our sense of indignation and stand idly by. No, none of these strawmen are true if you’re willing to scrutinize your moral judgments. Can a moral relativist tell somebody else that their behavior is wrong? Yes, but be ready to expose the inconsistencies and faults in their reasoning. Can a moral relativist promote or discourage social policy? Yes, but be ready to use evidence to justify your position, preferably with reference to fulfillment of human nature. Can a moral relativist fight back or intervene when they perceive wrong? Yes, of course. I’m not sure I understand why I even feel the need to answer that question but the rhetoric around this issue suggests that I do.
The big objections
Which leads to the big question. It was going to happen eventually, so I might as well put Godwin’s law into effect now: “Relativism, huh? So the Nazis weren’t wrong?” Under relativism I am able to say that the Nazis were wrong according to my intuitions and those of everybody I know, but I’m not making an absolute claim. Notice that the framing of the objection begs the question for moral realism, so it’s a bit of a trap that tries to force a response within the bounds of that assumption, pushing one to grapple with the intuition toward objective morality that was the focus of the prior discussion. That said, it seems to me that it’s also very reasonable to argue against the legitimacy of the Nazi program on the grounds of errant beliefs and an inconsistency with the moral nature of those who carried out the program. Furthermore, as noted above, there is nothing about relativism which entails inaction or ambivalence toward those with whom we disagree.
“and there’s nothing wrong with torturing babies for fun?” Again, I am perfectly able to say that this is wrong according to my intuitions and those of everybody I know, but I’m not making an absolute claim. However, this is a bit more difficult because there isn’t any reason in this case to also object on the grounds of errant beliefs or conflicts in human nature. If an individual were to be biologically disposed so that they did not find this behavior morally abhorrent then I have nothing but disagreement to offer (though I would argue that in a practical sense, the realist is in the same position). As before, this does not entail inaction or ambivalence.
The last word
In the end, moral relativism is neither pacifism nor a blank check. It requires introspection, reasoning, evidence and discourse. We sometimes act in ways which are in opposition to our true values and intentions; we experience regret. Relativism suggests that you take a hard look and try to understand those values and intentions – to consider whether they actually align with your nature and to examine how they are best achieved – and then to direct your life accordingly. You will still mess up, but at least you are trying and that diligence can eventually shift the underlying feelings and intuitions into closer alignment with reason and, hopefully, reality.
“Ha! Caught you. That’s self-defeating! You can’t say that moral relativism requires scrutiny of our moral judgments! That’s an absolute moral claim!”
I have indeed made a normative assumption, but that assumption was not moral. It was an assumption about the reliability of cause and effect. So allow me to rephrase: moral relativism is most rational and most able to accurately satisfy our morally relevant desires when coupled with introspection, reasoning, evidence and discourse.
I embarked on this truth-seeking pilgrimage four years ago and in doing so devoted myself to following the evidence wherever it leads. Accordingly, I have refrained from aligning with any particular moral theory for most of that time. It is an incredibly complex, confounding, divisive and emotionally draining topic. Evidence is difficult to gather and interpretations abound. So while I have finally taken the step of adopting a moral ontology, it is perhaps more tentative and provisional than any other position that I have staked out, even as I recognize that this hesitancy is almost entirely emotionally motivated. Regardless, if you disagree with the conclusion then you are welcome to try and change my mind. That’s why I’m here.
Sorry about all the $2 words in the title. Even if that didn’t make sense, I hope the rest of the post still does.
A couple years ago I wrote a post titled “Reconciling the Crucified Messiah”, where I summarized a naturalist perspective on the origin and ascent of a religious sect that was centered around a crucified leader – admittedly a bizarre turn of events. That post briefly discussed the development of Christian atonement theology as a consequence of the crucifixion and how that reconciliation was critical to transforming a seemingly insurmountable setback into a hallmark of the faith. But this new atonement theology did not entail that the salvation afforded by the atonement is only available to those who believe, and so here I would like to consider another curious yet synergistic development of the Christian movement: the introduction of doxastic soteriology (doxastic = “related to belief” and soteriology = “doctrine of salvation”, so a doxastic soteriology is a doctrine in which salvation is in some sense dependent on belief). I propose that this was largely driven by eschatological concerns (i.e., related to the end of the world / final judgment).
Despite my Christian bubble having been popped almost four years ago, it only recently occurred to me that belief in Jesus (as messiah, lord, savior, etc…) might not have been viewed as a requirement for salvation in the earliest days of the movement. A doxastic soteriology certainly doesn’t appear to have been part of the mainstream Judaism to which Christianity owes its roots and, from a naturalistic perspective, it seems highly unlikely that Jesus himself taught that people had to believe in him to be saved, despite what the Gospel of John portrays.
So what happened?
There are several points of contact which show that the Nazarenes (early Christians) shared some influences with the Qumran community (whether directly or indirectly). Among these is an eschatological perspective in which the demarcation between the elect and the damned fell not along ethnic boundaries, as was implied by traditional Judaic eschatology, but rather around ideological boundaries. To the Qumran community, the elect were those who aligned themselves with the community lifestyle and ideology. It appears that this perspective was in part driven by a perception of religio-political corruption (e.g., the “wicked priest”) and the wish to exclude undesirable religious figures from Yahweh’s kingdom – a theme that is mirrored by the gospel narratives and was quite possibly an element of Jesus’ teaching. A similar shift was also occurring throughout greater Judaism in the second temple period. Ever since the Babylonian exile, the Jews had been trying to figure out how to deal with the diaspora and cultural intermingling. The rise of decentralized worship in synagogues and the need to accommodate cross-cultural relationships spurred a decline in the traditional ethnocentric eschatology that the earlier prophets espoused as they lamented the conquests of Israel. As a whole, the Judaic quest for future justice was gradually transitioning from an ethnic foundation toward an ideological one.
Combining this with the widely accepted understanding of Jesus as an eschatological prophet, we can imagine that Jesus and his followers considered themselves to be bearers of the gospel, where the good news was not that Jesus was going to die for your sins, but rather that the end of days was imminent – perhaps even facilitated by Jesus’ prophetic ministry – and that you too could be part of the eternal kingdom if you repent and adopt the lifestyle and ideology of their sect. This message may have even neglected ethnic boundaries. From this we can see that the seeds of a doxastic soteriology were present in Jesus’ message, but were only germinating. After the crucifixion, more changes came into play.
First, we have the Nazarenes continuing to proclaim their eschatological message despite their messiah having been killed and, furthermore, cursed by Yahweh as a consequence of having been hung and left exposed on a tree (Deuteronomy 21:22-23). Though the Nazarenes appear to have wanted to remain Torah observant, their message became increasingly disagreeable and divisive as they continued to exercise midrashic liberty in defense of Jesus as messiah. As a result, the gulf between their sect and mainstream Judaism grew and they were, as a whole, steadily pushed and pulled away from participation in Jewish communities.
Then, as we consider the growing chasm between the Nazarene sect and mainstream Judaism we can turn back to the Qumran example to see what happens – namely, an eschatological evolution in which the opposing party is excluded from salvation (that is, participation in the eternal kingdom). As a close relative of Judaism, the early Christians had very few distinctions that could be used to draw that eschatological line in the sand. However, above all else, there was one thing that separated them from mainstream Judaism: belief that Jesus was the messiah. And so Christianity’s doxastic soteriology was born. As that chasm continued to grow so also did the prominence of belief as a central dogma of the Christian soteriology, reinforced by the synergistic coupling of a new atonement theology that was dependent on the object of that belief and independent of the temple sacrifices. Going one step further, the adoption of this eschatologically motivated doxastic soteriology also served to emphasize the significance of Jesus and so was perhaps instrumental in his eventual elevation as coequal with God.
Does belief in God improve cooperation? In case you haven’t seen it yet, a new study published today in Nature says that the answer is yes (a better summary of the study is available at ScienceNews). The authors go on to suggest that “beliefs in moralistic, punitive and knowing gods increase impartial behaviour towards distant co-religionists, and therefore can contribute to the expansion of prosociality.” In other words, the apologists have been right all along – we can’t be good without God.
Another bolus of research indicates that we are innately predisposed to God belief. Where the theist claims this as evidence of God’s fingerprint on our subconscious, the naturalist has responded with theories of agency detection. The research above has nothing to do with detection of agents but may still be relevant to the question of an innate God belief. While the benefit of agency detection certainly makes sense in the “you’re better off running away even if it’s only a tiger 2% of the time” sort of way, there’s still a leap to the theistic conception of an omni-God. Could it be that in the millennia which have ushered in civilization, a sort of natural selection acting on cooperation has bolstered and tweaked that innate predisposition into one which favors an all-seeing, omnipresent God who encourages our cooperation under every circumstance?
Chaos in the absence of belief?
The nones are growing at a rapid clip. Do these findings mean that the rise of an unbelieving society will degenerate into moral chaos? It’s obvious that Pat Robertson and much of conservative Christianity thinks that is the case, but perhaps we can flip the question on its head and ask whether the rise of the nones has in part been facilitated by the replacement of God with something else. Consider the far-reaching scope of surveillance, the ubiquity of mobile audio and video capture and the advances in forensics over the years. In the absence of an immediate deterrent, the odds that we will still be held accountable for our actions have increased dramatically over the last few decades. I wager that this has not escaped our attention. But if we are to take Steven Pinker at his word, we have also become more prosocial over time. So unbelief is on the rise concurrent with a rise in prosociality. The research cited above would predict the opposite result – unless some alternative is taking the place of God. Despite all the concern about our loss of privacy, perhaps Big Brother is just what we need.
Last month a commenter suggested that “I would be interested to see you research and post on ‘How science addresses the subjective, in relation to consciousness and freewill'”, to which I responded that I might write up a summary of the ways this is addressed in the book I was reading, Stanislas Dehaene’s “Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts”. Furthermore, the comment offered a particular direction to consider, namely that science can “collate and analyse subjective reports and hope something comes out of this – e.g. by correlating objective measurement with the subjective reports. … The problem with this is that these results are generally not accorded the same scientific status.” Dehaene actually spills a considerable amount of ink in the opening chapters addressing this concern. For example:
“This research strategy was simple enough, yet it relied on a controversial step, one that I personally view as the third key ingredient to the new science of consciousness: taking subjective reports seriously. … The participant’s introspection was crucial: it defined the very phenomenon that we aimed to study.” (pg 11)
“The correct perspective is to think of subjective reports as raw data. A person who claims to have had an out-of-body experience genuinely feels dragged to the ceiling, and we will have no science of consciousness unless we seriously address why such feelings occur. In fact, the new science of consciousness makes an enormous use of purely subjective phenomena, such as visual illusions, misperceived pictures, psychiatric delusions, and other figments of the imagination. Only these events allow us to distinguish objective physical stimulation from subjective perception, and therefore to search for brain correlates of the latter rather than the former.” (pg 12)
“All this evidence points to an important conclusion, the third key ingredient in our budding science of consciousness: subjective reports can and should be trusted. … introspection is a respectable source of information. Not only does it provide valuable data, which can often be confirmed objectively, by behavioral or brain-imaging measures, it also defines the very essence of what a science of consciousness is about.” (pg 42)
Those quotes refer to three key ingredients which go beyond the objective data about brain activity that we can gather through fMRI, EEG and the like. Dehaene identifies these ingredients as conscious access, manipulation of conscious perception and, as noted, careful recording of introspective reports. He then goes on to further define each of these.
Conscious access is defined as the awareness of specific information – it’s the foundational definition of consciousness that underpins more elaborate attributions, like self-awareness. As is elucidated in the book, our brains actually consume massive amounts of perceptual data. Much of what is received by our senses and processed in our brain eludes our conscious awareness. Conscious access is that sliver of data which enters our stream of thought from amongst the mountain of perceptions which bombard us from without and arise from within.
Our conscious access is reportable. As I type this, you are receiving a report of my conscious access. We cannot report on that which we are unaware of, so it is by definition that reports are only informative with regard to the content of our conscious access. Experiments can build upon this by asking participants to focus on a particular element of their perceptual space that has been carefully crafted by the experimenters. This manipulation of conscious perception is the experimental variable that allows the researchers to segregate the data into that which correlates with consciousness and that which does not. Dehaene outlines several primary manipulations – binocular rivalry, attentional blink, subliminal stimuli – and references several others throughout the course of the book. Each of these presents an opportunity to separate conscious processing from unconscious processing and so look for the signatures of consciousness.
Dehaene then goes on to highlight the massive amount of work that our brains are doing subconsciously and how this surreptitiously influences our conscious access. Research in this domain paints a picture of the inverse side of consciousness and offers a baseline against which consciousness can be compared. After taking a side trip into discussions about the viability of the evolutionary origins of consciousness as a tool for organizing and prioritizing the competing interests in our subconscious processes, we are introduced to the findings that this recipe has thus far wrought.
The toolkit described above has been extensively deployed in the lab and the cumulative results led Dehaene to identify four reliable signatures of consciousness. They are:
 “Although a subliminal stimulus can propagate deeply into the cortex, this brain activity is strongly amplified when the threshold for awareness is crossed. It then invades many additional regions, leading to a sudden ignition of parietal and prefrontal circuits” (Fig 16, pg 119)
 “In the EEG, conscious access appears as a late slow wave called the P3 wave. … For conscious words only, the wave of activity is amplified and flows into the prefrontal cortex and many other associative regions, then back to visual areas. This global ignition causes a large positive voltage on the top of the head – the P3 wave.” (Fig 18, pg 123)
 “A long burst of high-frequency activity accompanies the conscious perception of a flashed picture … When viewers failed to see the picture, only a brief burst of high-frequency activity traversed the ventral visual cortex. … Conscious perception was characterized by a lasting burst of high-frequency electrical activity, which indicates a strong activation of local neuronal circuits.” (Fig 20, pg 136)
 “The synchronization of many distant brain regions [form] a global web … during conscious word perception, causal relations show a massive bidirectional increase between distant cortical regions, particularly with the frontal lobe. Only a modest and local synchronization occurs when the participants fail to perceive the face or word.” (Fig 21, pg 138)
The common attribute which ties these signatures together is that they all represent prolific activity across large areas of the brain. In contrast to Descartes’ pineal soul-suite, the evidence points to consciousness as a phenomenon that is spread throughout the brain when a massive avalanche of distributed activity is launched. This excitation is what Dehaene calls “global ignition”. After having presented all of the correlative data Dehaene anticipates a common objection – correlation does not equal causation – and so he offers evidence to support the proposal that brain activity is more than just a side-effect of the ghost in the machine and that there are reasons to believe we are glimpsing consciousness itself.
“Let us play devil’s advocate again … Might [global ignition] bear no specific relation to the details of our conscious thoughts? Might it just be a surge of global excitation, unrelated to the actual contents of subjective experience? … Calling such a brain event the medium of consciousness would be like confusing the thump of the Sunday newspaper on our doorstep with the actual text that conveys the news.” (pg 142-143)
The first stop for the counter against this objection comes at the Centre for Systems Neuroscience at the University of Leicester in the UK, where Rodrigo Quian Quiroga enjoys probing individual neurons and finding ways to incorporate pop culture icons into his experiments. He has spent the last decade examining the relationship between conscious access and discrete patterns of neural firing at the level of individual neurons. The short story is that through a novel technique pioneered by Itzhak Fried, we have been able to take advantage of the surgeries performed on epilepsy patients to implant fine electrodes that record from individual neurons. When these are monitored during experiments there are very specific relationships found between perceptual and recollected concepts and individual neurons. Those experiments have not only identified a link between concepts and individual neurons, but the same tools used to investigate consciousness have been utilized to show that some neurons are only linked to conscious perception of a stimulus – in effect, the neuron can be said to be a part of a conscious thought. These findings have been documented across many publications, but a few of the key overview papers are “Concept cells: The building blocks of declarative memory functions” and “Brain Cells for Grandmother”. Furthermore, similar findings led to the awarding of the 2014 Nobel Prize in Physiology or Medicine for the discovery of place cells: individual neurons which correlate with our location in space. These were first discovered in rats and then subsequently also identified in humans. The extrapolations we can draw from the discovery of an association between individual cells and conscious perception are potentially monumental. In particular, it does not seem inconceivable that perhaps some day we may be able to translate the philosopher’s qualia into a pattern in the brain.
Transcranial magnetic stimulation in 1911 (C.E. Magnusson and H.C. Stevens)
While fascinating, the added specificity of the single neuron experiments has not yet established causation. It could be that those individual neurons are simply assigned dedicated roles as the bridge between body and particular concepts of the mind. Perhaps in those experimental observations we are simply bystanders watching as the train of thought passes by. That is not impossible, but there’s more to examine. The next stop starts with a bit of time travel back to the early 20th century, when several parties began toying with transcranial magnetic stimulation (TMS) and reporting various sensory anomalies in conjunction with the activation of the coils. Vast improvements in the equipment have allowed these experiments to continue today with sharp precision that enables experimenters to focus the stimulus on specific regions of the brain. In doing so, they have been able to trigger domain-specific sensory illusions – light when there is none to be found, motion while sitting still and color in a monochrome scene.
Perhaps more significant, however, is not the creation of sensory perception through TMS, but rather the disruption of consciousness itself through the same mechanism. Magnetic pulses targeted toward the long-distance networks that facilitate global ignition have been shown to eradicate a conscious perception that would have otherwise obtained. Even more relevant to the question of the interplay between the subjective and the objective is a study in which the prefrontal lobes were overwhelmed with pulses, leaving an effect which lasted up to 20 minutes. During this time, the subjects were asked to perform simple tasks of judging shapes that were presented to them. Objectively, their accuracy was effectively equivalent to their performance prior to the stimulation. Subjectively, however, they reported significant doubt in their answers. Objectively they were just as capable but their conscious awareness of their judgement had been impaired.
Before closing this section I must acknowledge that for the resolute dualist, we still haven’t fully addressed the objection. Maybe the TMS is acting in the place of our sensory input, stimulating or disrupting those neural mind-bridges in such a way that the mind thinks it is receiving or missing sensory data. OK, then let’s go beyond the content of the book and take a look at some additional research. If we say that the mind is distinct from matter then theoretically our memories are also made of mind stuff. However, starting about 70 years ago with Wilder Penfield, experiments have shown that direct electrode stimulation of specific brain regions can trigger memory recall. Whereas the dualist could argue that this stimulation is no different than the recall we experience when a familiar sight or sound is encountered through sensory input, the distinction becomes apparent when stimulation is used to disrupt conscious memory recall. For example, by acting directly on brain regions associated with verbal memory, electrical stimulation can directly impair recall of names for familiar objects and this phenomenon is often used to locate brain function through the process of cortical stimulation mapping. It is not that the person’s sensory perception of the object is disrupted but rather that their recall of the memory content which associates words with the object has been impaired. I find it difficult to understand how this result fits into a dualist framework.
In total, there is a large body of evidence that the content of our thought-life is causally connected to our neurology. We have opened an objective window onto the world of the subjective and on to consciousness itself. Massive projects are underway and, though we are still far from grasping the means of translation between the subjective and the objective, the future appears to be one in which mind and matter are proven to be one and the same.
The Diving Bell and the Butterfly
Dehaene outlines his theory of consciousness in the fifth chapter but it’s really just a review of the ideas that he has already outlined in the previous chapters. His theory, in short, is that consciousness is roughly equivalent to the concept of “global ignition” introduced above, with the added dimension of feedback loops containing the information which persists to define our subjective experience. This is what he calls the “global neuronal workspace” theory. Information is shared throughout the brain as an evolutionary adaptation which allows us to utilize it in various ways and prioritize our attention. Within this discussion several neural computer simulations are presented that demonstrate the same kind of threshold ignition and feedback central to the theory, even though that behavior was not deliberately designed into the model. Then, having built his theory of consciousness upon the key signatures identified above, Dehaene sets out to find a way to test it. It is one thing to find correlates of consciousness; it is quite another to use that information to build a reliable “consciousness-o-meter”.
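Dehaene’s simulations are detailed neural network models, but the bare idea of threshold ignition can be caricatured in a few lines. The sketch below is my own toy illustration, not the book’s model: activity decays back to baseline unless it crosses a threshold, at which point a recurrent feedback loop engages and the activity self-sustains.

```python
def simulate(stimulus, steps=30, threshold=0.5, decay=0.8, gain=0.4):
    """Trace activity after a brief stimulus pulse in a one-variable toy model."""
    x = stimulus
    trace = [x]
    for _ in range(steps):
        # The recurrent "workspace" loop only engages above threshold
        feedback = gain * x if x > threshold else 0.0
        x = min(1.0, decay * x + feedback)  # leaky decay plus feedback, capped
        trace.append(x)
    return trace

weak = simulate(0.4)    # subliminal pulse: activity decays back toward zero
strong = simulate(0.6)  # supraliminal pulse: activity ignites and self-sustains
```

The nonlinearity is the whole point of the sketch: a small difference in the initial pulse (0.4 versus 0.6) yields qualitatively different outcomes, echoing the all-or-nothing character of the ignition signature.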
Jean-Dominique Bauby and his secretary
The proving ground for this theory is found in one of the most difficult medical scenarios: that of the vegetative patient. We are introduced to the spectrum of states which manifest in response to a severe insult to the brain: from brain death, to a vegetative state, to minimal consciousness and locked-in syndrome. The last of these occurs when a fully conscious brain is “locked in” to an unresponsive body, as was the case for Jean-Dominique Bauby when he authored The Diving Bell and the Butterfly with just one blinking eye. The difficulty in these cases is that with only the subject’s external, objective behavior available to the clinician, the ability to determine whether there is still any internal conscious life, and any hope for recovery, is radically impaired. What’s worse, the manipulative tools which were used to detect the signatures of consciousness in the lab are also taken out of contention due to the inability to rely upon the subject’s ability to focus their sensory perception and report on their conscious access. An alternative technique relies on the observation that we are wired to detect novelty, such that changes in our surroundings trigger a response in the brain. This trigger, however, fires even if the novelty never enters our conscious awareness. That, in turn, means that the novelty itself is not sufficient for establishing the baseline that discriminates between the unconscious response and conscious detection of the change. To get around this the research team devised a clever tool called “global auditory novelty”. Relying upon the fact that the sense of hearing is rarely lost in these brain injuries, the subjects were presented with a pattern of four “beeps” followed by a “boop”. The “boop” represents the local novelty which triggers the subconscious alert that something has changed, which may or may not enter our consciousness. Our long-term, or “global”, conscious perception, however, is a bit more sophisticated.
Once this pattern is repeated enough times the “boop” becomes part of the expected sequence even though it triggers the alert in the brain. This causes the “boop” to eventually slip out of our conscious awareness. So, by repeating the pattern several times and then replacing the local deviant “boop” with a global deviant “beep”, the team was able to induce a situation in which the subconscious alert was silent while the conscious detection of a global novelty was ignited.
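To make the structure of the paradigm concrete, here is a hedged sketch of how such trial sequences might be generated. The tone names, trial counts and proportions are my own illustration, not the study’s actual parameters.

```python
import random

# The habituated pattern ends in a locally deviant tone ("boop"). Once that
# pattern is learned, a trial of five identical tones contains no local
# deviant yet violates the learned global pattern.
LOCAL_STANDARD = ["beep"] * 4 + ["boop"]  # locally deviant, globally expected
GLOBAL_DEVIANT = ["beep"] * 5             # locally uniform, globally surprising

def make_block(n_habituation=20, n_probes=10, p_deviant=0.2, seed=0):
    """Habituate with the standard pattern, then mix in rare global deviants."""
    rng = random.Random(seed)
    trials = [LOCAL_STANDARD] * n_habituation
    trials += [GLOBAL_DEVIANT if rng.random() < p_deviant else LOCAL_STANDARD
               for _ in range(n_probes)]
    return trials
```

The asymmetry is what makes the design clever: a brain that merely detects local change stays silent on the all-“beep” trials, while a brain that has consciously learned the five-tone pattern registers them as surprising.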
What was the result? In the initial trial with eight patients, all three of the minimally conscious patients whose EEGs lit up with the P3 wave in response to the global novelty later regained consciousness. In a subsequent study with 22 vegetative subjects only two yielded a P3 wave and they both became minimally conscious in the following days. While these initial tests were perfect in that they never yielded a false positive, there were still several false negatives. To address this the group compiled their data and ran a statistical analysis to refine the prediction from the EEG waveforms. This refined calculation, which incorporated the full suite of EEG data and the other signatures beyond just the P3 wave, led to an exciting result. Using a data set of over 200 patients they found that in 33% of the cases where the clinical diagnosis was “vegetative state”, the refined analysis yielded an alternative diagnosis of “minimally conscious”. Of these, a full 50% recovered to a clinically obvious conscious state in the next few months, whereas only 20% of the patients who remained classified as vegetative did the same. Adding these up, we see that the clinical diagnosis was overly pessimistic for 30% of the patients while the EEG signature diagnosis was overly pessimistic for only 13% of the patients. For families struggling with questions about how to manage the care of their loved one as they cling to life, this objective detection of consciousness through physical measurement of brain activity may be the key to maximizing the realization of their hopes.
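The percentages in that last step compound in a way that is easy to misread, so here is the arithmetic spelled out. The rates are as I read them from the book; the breakdown itself is my own reconstruction.

```python
# Of patients clinically labeled "vegetative":
reclassified = 0.33              # refined EEG analysis says "minimally conscious"
recovered_if_reclassified = 0.50 # recovery rate among the reclassified
recovered_otherwise = 0.20       # recovery rate among the rest

# Fraction of clinically "vegetative" patients who in fact recovered.
# The clinical label was overly pessimistic for all of them:
clinical_pessimism = (reclassified * recovered_if_reclassified
                      + (1 - reclassified) * recovered_otherwise)

# The EEG label was overly pessimistic only for the recoveries it also missed:
eeg_pessimism = (1 - reclassified) * recovered_otherwise

print(round(clinical_pessimism, 2), round(eeg_pessimism, 2))  # → 0.3 0.13
```

So the 30% and 13% figures in the text fall out of the 33%/50%/20% rates rather than being independently reported numbers.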
Dehaene spends the last chapter of the book examining the ways in which the science of consciousness will continue its assault on the mystery of subjective experience. Here we are presented with data suggesting, through the lens of the global workspace theory, that infants are conscious at birth and that several other animals exhibit the signatures of consciousness. He then turns his attention to the philosophical problems of qualia:
“My opinion is that Chalmers swapped the labels: it is the ‘easy’ problem that is hard, while the hard problem just seems hard because it engages ill-defined intuitions. Once our intuition is educated by cognitive neuroscience and computer simulations, Chalmers’s hard problem will evaporate. The hypothetical concept of qualia, pure mental experience detached from any information-processing role, will be viewed as a peculiar idea of the prescientific era, much like vitalism” (pg 262)
and free will:
“Our brain states are clearly not uncaused and do not escape the laws of physics – nothing does. But our decisions are genuinely free whenever they are based on a conscious deliberation that proceeds autonomously … When this occurs we are correctly speaking of a voluntary decision – even if it is, of course, ultimately caused by our genes, our life history, and the value functions they have inscribed in our neuronal circuits.” (pg 264-265)
While I am not yet willing to express a level of confidence on par with Dehaene regarding his conclusions, I am obliged to say that I agree (and I posted similar thoughts on free will in the post which inspired those introductory comments last month). Even so, neuroscience may never be able to deal an incontrovertible death blow to the dualist paradigm. Like Sagan’s infamous garage-dwelling dragon, the mind can always be excused from questioning and made into an extra immaterial layer that mirrors the brain even at the level of individual neurons and synapses. At some point, however, it becomes clear that we are just playing games. When that time comes, if it hasn’t already, we need to acknowledge the data for what it is and the implicit conclusion that we are nothing more than our physical body; that our identity – our conscious self – is found in our brain.
Two years into this journey and I find myself at a place where I can scarcely imagine reaffirming Christianity as the best explanation of reality. Even the most “liberal” flavor of the faith looks difficult to swallow. But there is more to life than knowledge and sometimes the most rational thing we can do is eschew truth. Don’t tell me it’s a sugar pill if it is truly my best shot at feeling better. Just lie to me and give me the damn pill.
Almost two years ago I sat in a pastor’s office with my wife to discuss the revelation that I could no longer honestly call myself a Christian. At some point in the discussion I said that I knew that I could blind myself to all sources of doubt and immerse myself in the Christian world – and then wait. After enough time I would probably return to a genuine faith. I shared a similar sentiment with my wife in an email that I sent her before that meeting, just a couple of days after revealing my loss of faith to her:
It’s like asking somebody to forget what they’ve seen. We can’t choose to forget. It may happen naturally over time, but we can’t will ourselves to forget. … I could ignore those issues, do everything I can to avoid discovering new ones and pretend that they’re meaningless. Over time, that would probably work and the issues would fade into the background. This is where the choice comes in. I could choose to do that but then I would be living a lie for 5, 10, 20 years, or however long it takes for the issues to fade away. Instead, I’m choosing to face the issues. If Christianity is true, then I think that my journey should lead me to that conclusion.
Amongst the countless hours of reflection over these last two years there have been many occasions where I could identify a practical benefit to the Christian worldview. In an earlier post I acknowledged that there is a strong psychological allure in Christianity, namely in the belief that we are not simply at the mercy of chaos and that, in the end, victory will be ours. It is easy to understand why we would want this to be true. These beliefs, however, can and do extend beyond the conceptual and impact us directly in the here and now. Some would argue that holding unsubstantiated beliefs is in some sense wrong (Clifford’s Principle) – but I disagree. I contend that if holding a belief is clearly the best way to attain a desired outcome then it is completely rational to hold it.
So this is what I want to examine. What benefits does Christianity enable us to realize in this life, and is adherence to the Christian worldview the best way to attain those benefits? In other words, does the cost-benefit analysis favor Christian belief over all other possible mechanisms for leading a fulfilling life? To start, I’ve identified a few benefits and costs to explore. This post is in large part a request for your input on these and for other practical factors that I should consider in subsequent posts.
Benefits (even if the Christian worldview is false)
Stress management (achieved in several different ways)
Better outcomes via the placebo effect
Social fellowship with emphasis on encouragement and support
Reduced death anxiety
Regular reminders to self-evaluate
Sense of having purpose and value which transcends our circumstances
Frequent encouragement to cultivate material contentment and to invest in the lives of others
Diminished sense of loss when loved ones die
Costs (assuming that the Christian worldview is false)
Potentially long or indefinite period of intellectual discomfort until dissonance fades, with strong potential for reemergence later
Misallocation of resources
Improperly or ineffectively acting toward a goal because of a false understanding of influences
Undue pressure to accept potentially disagreeable principles on the basis of authority
Insufficient value placed on earthly life and “temporal things”
Potential for anguish over the fate of “unsaved” loved ones
I crossed out #2 on the costs list because it would be begging the question. If it turns out that the pragmatic benefits of Christianity outweigh the costs and they are not otherwise attainable then the allocation of resources to the Christian cause should actually be viewed as appropriate. Additionally, I need to point out that I am well aware that many of the benefits listed here are not exclusively found in Christianity. The exploration of alternative mechanisms for realizing those benefits is a crucial element to this series.
If you’re wondering why I haven’t included the afterlife in these lists, see my post on Pascal’s Wager. On the surface, this topic might seem contradictory to the perspective I offered there – namely that we shouldn’t believe something just for the benefit. However, there is a vast difference. Pascal’s Wager is based on a purely speculative outcome obtained via a purely speculative mechanism. Conversely, in this case we can draw upon our experiences, psychology and other research to understand probable outcomes in this life.
This isn’t a tidy, well-planned series. My coverage of these topics will span a long time and will be interspersed between plenty of other posts that I’ve already dreamed up. This isn’t the type of thing where the answers are just sitting out there waiting to be found. There are a lot of factors at play, a lot of psychology to sift through and the end result is enormously subjective. Hopefully your interactions will keep me grounded.
Finally, please do not misinterpret this exercise. I can imagine how this might be psychoanalyzed. I’m not in some dark place looking to reclaim the joy I had when I was a Christian. I don’t know how to compare distinctly unique stages of life, but it’s possible that I’ve never been happier. Ironically, the motive behind this exercise is very non-Christian: if this life is the only one I have then I should pursue the course which makes the most of it. This journey is about more than collecting facts and discerning the structure of reality. It’s also about navigating life, and I went public with this blog because I knew that my best shot at success was to incorporate a wide variety of insights from others. So please let me know your thoughts on this topic in general, and on the individual benefits and costs of a Christian worldview. Thanks in advance.
“I use freewill to mean we can choose to change the physical sequence of events in our brains. … If we don’t have genuine freewill, then we can’t choose”,
to which I responded with
“Regardless of where one stands on free will, we agree that we engage in something called ‘choosing’. This phenomenon is universal whether we think it is performed by a ghost in the machine or it is just another cog in the chain of prior causes.“
This thread of the discussion carried on a little longer without a mutual understanding and eventually ended with me saying that I would try to explain myself in a new post.
So here we are. I currently suspect that we do not have libertarian free will; that is, I doubt that there is an uncaused part of us which controls the act of choosing. This is not a certainty, but I am compelled by the evidence (and the lack of alternative evidence) that this is probably a correct description of reality. So, now that you have received this revelation, you may climb back in bed and curl up in a ball and wait for your death because you are just a cog in a chain of causes. You are no different than the computing device you are currently using. You are a powerless bag of molecules, a meat puppet dangling by the strings of chance. Upon believing that your choices are byproducts of everything else, you could, paradoxically, immediately succumb to a self-defeating fatalism or you could keep reading and take another path. What will you do? Is that even a meaningful question?
This post does not seek to argue whether or not we actually have libertarian free will. The point of this post is to consider the implications for our sense of freedom if we do not possess uncaused agency.
Wait. How do you explain our experience of choice?
Good question. Even though I have no intention here of making the case for an absence of libertarian free will, it is worth considering whether that situation is even possible. I would like to start by reflecting on some observations which are representative of things that we’ve all experienced at one time or another.
The other day the book I was reading included a comment that “…animals don’t seem to want to party, despite what we see in children’s cartoons like Madagascar.” About 30 minutes after reading that – I’m slightly embarrassed to admit – I found myself with the Katy Perry song “Firework” in my head. Upon recognizing this I was surprised, so I stewed on it a bit. This is not a song that I encounter frequently in my listening habits. When I stopped to think about this, a faint scene began to play in my mind. It was an animation of zoo animals performing circus acts. You see, about a week earlier, I spent a couple hours watching Madagascar 3 with my sons. Near the end of the movie, the main characters engage in an elaborate circus performance set to the music of – you guessed it – “Firework”. Unbeknownst to me, the reference to the Madagascar movie in the book I was reading had set in motion a network of activity, drawing on recent experience, that led to the production of a particular song in my head.
When I was a kid my brother would play the “made you flinch” game. It may be a stretch to call it a game, but the rules are basically this: at any time, you can go up to your sibling and act like you’re going to hit them and then stop short. If they react in a defensive way then you have license to actually hit them. Twice. By definition, a flinch is involuntary. After enough bruises you learn to remain vigilant and can suspend your reaction, but eventually you will be caught off-guard again. Control of the flinch is subject to awareness.
As a final example, we’re all well aware that repetition can train us to do things effortlessly and thoughtlessly even though these things required considerable conscious attention during the initial training. This includes actions like reading, riding a bike, driving a car, using a mouse, etc… Even simple math eventually becomes automatic. These well-trained processes seem to lie on the borderlands between the intentional and the unintentional, sitting just below the level of consciousness and drifting in and out of our awareness. We sometimes catch ourselves unaware that we have done something, or are doing something.
As these examples show, it is possible for behavior and mental activity to arise outside of our immediate awareness and control. They do not run through the “free will” filter. If we acknowledge that this is possible then it seems reasonable to acknowledge the further possibility that choice itself, our apparent exercise of free will, restraint and deliberation, can also arise through causative factors outside of our awareness. Under this paradigm, we might say that choice is what happens when our brain deals with competing interests. Even choosing to get up and get a drink is in competition with a desire to conserve energy and stay where you are. We have a remarkable feedback system that can recall past experiences and forecast future experiences. These work themselves into the choice equation, and sometimes we can spend considerable time and energy in deliberation as the network keeps pulling up data on both sides of the tug-of-war and reconfiguring itself in response.
The insistence that we make choices independent of causative influence begs the question. It assumes that our identity is fully contained within a singular, unified, independent perspective; in short, a ghost in the machine. Yet, if we ask someone who has flinched whether they chose to flinch then they’re most likely going to say that it wasn’t a choice while at the same time agreeing that they acted. Likewise, we will not deny that it was us who performed automated tasks, even if we weren’t fully aware of what we were doing. So in some cases our action can come from some sort of involuntary aspect of our self. That is, we do not always disassociate our self identity from the actions which were not clearly “under our control”. If we accept that this is a part of who we are and that the line between voluntary and involuntary does not demarcate our identity, then I see no reason why the abolition of libertarian free will should be seen to annihilate the self and render us incapable of choice. Instead, our conception of the “self who chooses” must be revised so that it is consistent with the fact that we already include our involuntary self in our identity. We dispose of the idea that we are a singular, unified and independent soul and find that our identity is multifaceted, distributed and interdependent. Incidentally, a rare group of split-brain patients have offered us a fascinating window into how this works, as do patients who have experienced certain brain injuries (see blindsight, visual agnosia and hemispatial neglect). It appears that this distributed view of the self is the more accurate perspective.
You should believe that you can make choices
As demonstrated by the original quote at the top of this post, it is common to see claims that the rejection of libertarian free will is also the rejection of choice. I will address that claim further in the next section, but first I want to briefly review why you should believe that you – this new, complex, multifaceted you – can make choices. When we believe in free will:
We are less likely to harm each other and more likely to help each other (Baumeister 2009).
Given these results, the evidence seems to suggest that we prefer the versions of ourselves who believe in free will. The pragmatist follows by suggesting that the rational thing to do is to believe that we actually possess this freedom.
But I can’t just pretend for the benefits
I completely understand the objection and agree that in the short term we can’t choose our beliefs – but I’m also pretty sure that you don’t have to pretend. Even when you think you can give a reason for your choice we can always just ask why again, and keep asking why until you get to the point of saying “I don’t know”. Eventually you will get there, which means that as far as we can tell from pure introspection, there appears to be something unexplainable going on. This is where we find our “free will”.
It is possible that there actually is no prior cause at the bottom of this search but, as we have seen, it is also possible that the prior causes are simply elusive or inaccessible. If you disagree, please explain to me how this kind of experience would differ from the experience under libertarian free will. I don’t see a difference and, introspectively, we have nothing but our experience to go on. So, if our internal experience regularly lacks a fully formed understanding of causation and if we recognize that we can choose between options, why does it matter whether or not our choice is actually uncaused? Pragmatism takes over when explanations run dry and suggests that instead of looking at causes, we should look at effects. We feel a sense of control and operate with the experience of control and this results in outcomes which accord with our choice. Is this not sufficient?
From a purely experiential perspective, I make choices. If there is no libertarian free will then I may end up in bed, shut off from the outside world because all prior causes led to that condition. However, it is equally true that all prior causes may lead me to fight off the melancholy and seize the day. We don’t know which is the future path of the causal chain, yet we detect an ability to direct it. The internal experience is the same; our sense of freedom is present no matter what. This is all that matters when it comes to the choices we make. You needn’t sacrifice your freedom on the altar of fatalism. You have a choice.
If you have read this, and you find yourself agreeing with my conclusions, then it is possible that your experiences have now changed you so that you are more inclined to invoke your sense of free will. Ironically, you have just been externally caused to have a greater sense of freedom. Run with it.
I’ve done a lot of introspection this Easter season on what Christianity is, if not truth. It doesn’t seem rational to abandon a widely held worldview without at least trying to explain why it has been accepted in the first place. So how would a naturalist explain the origin and adoption of a worldview which centers around a crucified leader, and does that explanation make sense?
The birth of Christianity in 200 words or less
A charismatic sectarian from Galilee speaks out against the religious establishment and preaches repentance in preparation for the end of days – an end which implies Israel’s divinely mandated world domination. His followers eagerly anticipated this grand reversal of fortune, but then he was killed because his message and growing profile were seen as a threat to the Roman state. Something happened that led to a belief that he may have been resurrected, and this coupled with a hope that maybe his mission wasn’t done. The resurrection hypothesis and the apocalypse hypothesis fed off of each other, along with a few select passages in Psalms and Isaiah, to reinforce the story. Paul comes along and is inspired by the story but is compelled to more fully explain why the messiah was killed. He develops an extensive reformulation of the Judaic sacrificial system into a robust atonement theology which grows to become the foundation of Christendom. Collectively we are left with an intriguing story of sacrificial love, redemption, acceptance and hope that offers a remedy for our desire to belong and a salve for our deepest fears.
The birth of a new perspective
The kind of explanation given above may not be new to those who have already examined these things from a critical perspective, but it is to me. You see, when all your information has come from inside the Christian bubble the logic flows in reverse. You start with the assumption that Jesus came to offer salvation and so had to die – not that he died and that this needed to be explained. This is a complete reversal of the order of operations that I’ve known my whole life, and if I’m honest I have to admit that it makes a lot of sense.
The psychology we encounter from this new perspective goes well beyond the New Testament. The Old Testament as a whole is dripping with angst. Israel is sick of being a doormat. They sit at the junction between Egypt and all other world powers and are constantly caught in the crossfire. Some have suggested that the bulk of the Tanakh is effectively the rallying cry of a trampled people, saying “we have conquered once, we will conquer again”. That may be a bit of a short sell but the overall theme seems correct.
The birth of a new revelation
There have been many revelations for me on this journey. It is amazing how many of the mysteries of the Bible begin to unravel once you allow yourself to see it as a human creation. The dynamic between history and theology becomes one of cause and effect. Theology is no longer a message handed down on high from God but rather a very real psychological and emotional response to the events of our world. Ironically, this has given me a profound respect for the beauty of the humanity that can be found in the Bible; more so than ever before.
On this journey I have finally allowed myself to ask “Why did the author write this?”, instead of “What is God saying to me?”. As a Christian, I treated the Bible like something of a textbook; an instruction manual to be studied. I wanted to understand what God was saying. I was oblivious to the experiences, desires and perspectives that its authors brought to the text. In retrospect it’s a bit embarrassing to admit how blatantly I ignored this, though I still find myself befuddled when trying to parse a Christian explanation of how the Bible is the product of both God and man. I guess it’s easier to just act like it came straight from God and gloss over the human role.
Where I once sought divine guidance, I now see an epic anthology that chronicles a psychological struggle to cope with the chaos of a world outside of our control and the tensions that strain our will. It’s not hard to see how this has spoken to us throughout the centuries. We all fight to see our way through the obstacles that life hurls our way and to resolve the conflicts that torment our soul (metaphorically speaking, of course). How comforting a prospect it is to suggest that this isn’t just chaos; that behind it all there is a magnificent plan that ends with a glorious victory! The full embrace of the Christian message can give us peace and rest. Who doesn’t want that? I for one wish it to be true, but that is a verdict which seems more distant with each step that I take. My rest will not be found where I am engaged in an unending struggle for truth.
As I reflected on Steven Pinker’s book “The Blank Slate: The Modern Denial of Human Nature” I was struck with a notion that had never before crossed my mind: could it be that my view on human nature during my formative years contributed to a cognitive style that would eventually lead me to question my faith? Or, simply put, did religion make me a skeptic?
The primary argument of Pinker’s book is that the political left too often ignores our innate tendencies and erroneously acts as if people’s behavior can be molded entirely through their social context (hence the blank slate). He suggests that this kind of thinking is in part responsible for the brutal social engineering programs of Stalin, Mao Zedong, Pol Pot and the like. On this point, I think he is on the mark. It is foolishness to reject the existence of human nature or expect that entire societies will abandon their very nature. On the whole, human nature will win out.
That said, Pinker is concerned with the broad social implications but never addresses what it means if an individual comes to recognize their human nature and strives to proceed accordingly. If human nature is predominantly revealed in our “fast thinking” (System 1), as it would seem to be, and this can sometimes be overridden by our “slow thinking” (System 2), then the implication is that those who learn to recognize these tendencies, and who train themselves to rely on System 2 as much as possible, are more likely to make decisions which are driven by empirical information and are thus less influenced by human nature. These people are said to have an analytic cognitive style.
It seems to me that I am among this group and that it is largely responsible for the path I currently walk; and I am not alone. The war cry of the skeptic is a promotion of critical thinking, reason and logic. Studies have shown a negative correlation between analytic cognitive style and religious belief and the vast majority of deconversion stories I encounter focus on the person’s critical assessment of the evidence. Even so, most Christian apologists would advocate a liberal reliance on reason and careful analysis. Though these very same apologists claim that unbelief is rooted in some deeper moral objection, it is evident to me that the primary force behind loss of faith is a thoughtful reflection on the data.
I was raised to believe in the Pauline struggle; to believe that I had a sin nature (flesh) which was at war with my spirit and that this war could be won by aligning my will with God’s. My instincts were corrupt and needed to be held in check. Living by the flesh comes easily and naturally, so be on guard. In psychological terms, I was taught to recognize the tendencies of System 1 and employ System 2 to overcome them. When this background is applied to the theory presented above, it would suggest that my Judeo-Christian perspective on human nature may have been partially responsible for my cognitive style. In other words, it may be that I question my faith because my faith taught me to question myself.
On the other hand, it could just be my nature. I would even venture to say that it is likely that I am naturally inclined toward a critical approach. History tells us that the religion we’re born into is likely to stick with us and a myriad of research tells us that our personalities are most strongly dictated by our genetics. But what if there’s more to it? If there’s any truth to the idea that the development of our cognitive style could be influenced by our childhood perspective on human nature, and that those with an analytic cognitive style are less likely to embrace religion, then the implicit result is not just swimming in irony; it’s drowning in it.
So, did religion make me a skeptic? Honestly, I doubt it…. but what’d you expect?
I’m going to break from the normal recipe here and discuss something I’ve encountered recently which has left me feeling a bit disappointed. It is not uncommon to find Christian commentary where a lack of belief in God is said to be rooted in some underlying emotional response, usually either disdain for the moral implications of Christianity or a stubborn insistence on wanting to be in control of one’s life (aka pride). The same is often said of unbelief’s more palatable cousin, doubt. I know this is nothing new and I have seen it many times before but these recent encounters compelled me to comment.
The most recent exposure came in listening to the Unbelievable podcast where Christian philosopher Jeff Cook argued that unbelief (and belief, for that matter) is a product of desire. The direction of the podcast often wandered and I never felt like the point was adequately explained so when I went to look for a better explanation I found that his thesis looked to be at least in part inspired by a quote from Blaise Pascal: “Men despise religion. They hate it and are afraid it may be true. The cure for this is first to show that religion is not contrary to reason, but worthy of reverence and respect. Next make it attractive, make good men wish it were true, and then show that it is” (Pensees 12).
A couple additional recent encounters came from reading Lee Strobel’s book The Case for Faith. In chapter 8 he quotes Lynn Anderson as saying “I personally think all unbelief ultimately has some other underlying reason. Sometimes a person may honestly believe their problem is intellectual, but actually they haven’t sufficiently gotten in touch with themselves to explore other possibilities”. Strobel then introduces the next chapter, the conclusion, with a quote from Ravi Zacharias, “A man rejects God neither because of intellectual demands nor because of the scarcity of evidence. A man rejects God because of a moral resistance that refuses to admit his need for God”.
I also had a vague recollection of related statements in some of William Lane Craig’s podcasts or debates, so I went looking and found a Reasonable Faith Q&A article littered with similar sentiments.
In The Case for Christ, Strobel himself also repeatedly implies that this was the primary roadblock for him. On multiple occasions he openly admitted that he did not want to believe Christianity primarily because he did not want to give up his immoral lifestyle. These statements stood out to me because they felt hollow. So let me explain.
What about me?
I’m not opposed to the idea of God. I generally agree with most of the moral principles encouraged by the Christian church. I would prefer that there be an afterlife. I see great value in living a “Christian life” – giving, serving, loving, forgiving, communing, hoping. I’m not a control freak, maybe even a bit of a pacifist. I’m currently inclined to believe in a form of determinism, which one could argue is perhaps more humbling than a Christian view of libertarian free will and surrender to God.
I think I could probably go on for a while, but I hope you get the point. In my introductory post I said that I started this journey because “I cannot, in good conscience, continue to accept ignorance as my position on so many matters”. The only emotional component there is the discomfort I feel when I deliberately look past evidence that challenges my beliefs. I am not motivated by a desire to be free from the shackles of a god who imposes himself on my life. I have never viewed Christianity that way and could probably give you a good theological argument to back it up. Psychoanalyze all you want, but I feel like I’m being as honest as I possibly can. The only desire that I am motivated by is the desire for truth.
Mr. Cook is correct to say that much of the “new atheist” propaganda contains emotional appeals to the undesirable aspects of religion and the God of the Old Testament but, in my experience, that is not a fair representation of people’s primary reasons for unbelief. Even if we take the undesirables into account, I would argue that the weight of those claims lies not in the emotional response but in the fact that they are contradictory to the more broadly accepted character of God; and contradiction is evidence that something is amiss.
It’s probably true that those who claim that unbelief is grounded in an emotional desire would concede that it does not apply to everybody. I can appreciate that, but here’s the thing: I don’t think that I am the exception. For those who imply that most unbelievers have emotional reasons for their unbelief, where is the evidence to back that up? When I peruse the seemingly infinite forums and discussions where the God debate rages on, it appears to me that unbelievers typically explain their position as arising from an intellectual argument. Why not take that at face value? Certainly there are unbelievers for whom their worldview is primarily driven by emotion, but I’m deeply skeptical that they are even close to the majority.
Turning the Tables
I think that this claim is often made in Christian circles because it offers an explanation for why somebody does not accept what is so readily apparent to the believer. In essence, the claimant is saying “The evidence for God is overwhelming – you must have some ulterior motive for not believing.” The implication here is that the unbeliever is deliberately turning a blind eye to the evidence because they don’t like where it leads. I would like to suggest, however, that perhaps this is exactly what the claimant is doing.
Could it be that the Christians who make this claim are, at the core, primarily interested in reassuring themselves that they’re right? Could it be that they are seeking to reaffirm their position by asserting that the evidence is so strong that nobody could rationally reject it? Could it be that the possibility of a poorly evidenced faith is so uncomfortable that it stirs them to claim that it is the opposing view, not they themselves, who are emotionally driven? Could it be that the claimant is simply unwilling to admit that they are doing the very thing that they accuse the unbeliever of – believing more on the strength of emotion than on the strength of the evidence?
Of course, all the same questions could be asked of the unbeliever but that just serves to point out the futility of the claim. I am on this journey because I do not see that the evidence for Christianity is overwhelming. I did not arrive at this point by following some gut reaction – I have given substantial consideration to countless arguments and data and plan to continue doing so for the foreseeable future. So, yes, it bothers me when somebody implies that there’s something subversive behind it all. If that’s what you think is going on then I would like to suggest that you take a moment to go look in the mirror.