I took advantage of the new .blog top-level domain to do something I’ve wanted to do for some time – I’ve transitioned the blog to my own hosting account, now resident at measureoffaith.blog. This is mostly driven by the fact that I’m a geek and want to try some things that couldn’t be done on a standard wordpress.com account, but for the time being everything should look pretty much identical to the way it did before. The transition is supposed to be seamless for all followers and readers and all permalinks should still work, but please let me know if you discover any issues or have any suggestions for the new site.
A while back I wrote a post titled “What is a moral claim” that did not do a good job of getting at the heart of the topic I was actually aiming to address. So I wanted to recalibrate and go beyond asking “what is a moral claim” by offering an answer. That has turned into a rather thoroughgoing presentation of what I now consider to be the moral ontology which is most likely true. Sorry for the length, but I hope it’s worth the effort.
First, some moral epistemology
I am of the opinion that epistemology should inform ontology (and vice versa). In other words, understanding how it is that we know about something should play a role in defining what we think that something is. Likewise, our understanding of what something is should play a role in defining how it is that we know about it (I covered this more generally in My Ontology – Part 1). I have found that the discussion of morality, particularly in the God debate, often focuses on moral ontology – we like to talk about what morality is without giving too much thought to the epistemology. By asking “What is a moral claim?” in that post last year I was aiming to explore how moral epistemology might inform our moral ontology – contra William Lane Craig, who suggests we should just posit our desired moral ontology and then define our epistemology as a follow-on.
My assertion in that original post was that we can recognize moral claims and distinguish them from other claims, and that this tells us something about the nature of morality. As was noted by several commenters, this supports nothing more than the notion that morality is at minimum a distinct mental concept. However, I was aiming for something more…
The moral referent
In one of the comments on that original post Dave compared morality to beauty, to which I replied by noting that:
“This is the question of the referent. For beauty, we can generally link the shared concept to ‘the way we feel about certain sensory perceptions’, like sunsets, music, etc…. There is a class of experiences which trigger a similar response in us and so we call those things beautiful.”
This gets to the heart of the matter. As with beauty, there must be some referent which shapes the concept of morality and, as with beauty, it appears that the best we can do is to introspectively trace this to a particular feeling. Just as the concept of “tree” is informed purely by the phenomenal experience of trees (and not through some special metaphysical access to the abstract ideal tree) the concept of beauty is informed by the phenomenal experience of conditions which trigger a particular feeling. Isn’t it most reasonable – perhaps even obvious – that morality is no different?
But there are trees out there in the real world which are separate from our phenomenal experience of them. What is the corresponding reality which feeds into the concept of morality?
When I presented my ontology, I identified universals as mental concepts which are constructed as generalizations of our experience of particulars. The particulars which inform a universal need not be mind-independent, objective entities. Despite the connotations of our language (e.g., “that’s a beautiful sunset”), most of us are not inclined to actually assign beauty as an intrinsic property of the object of our perception, but we rather accept it as a subjective component of our experience; beauty is in the eye of the beholder. Likewise, we’re all familiar with the concept of sadness, but not because it exists ‘out there’ in some sense, but because we are all human and have been able to relate a similar internal state to a common idea which we can communicate. My proposal is that morality is like beauty and sadness. Morality is informed by my phenomenal experience of the feelings and intuitions which arise under certain circumstances.
I take it that the view I have presented for sadness and beauty is fairly uncontroversial, but for some reason morality is a different beast. We struggle against the prospect that the feelings and intuitions which have informed our conception of morality might be wholly subjective – it’s uncomfortable to suppose that there isn’t an objective standard to which we can hold others accountable, pointing to it and saying “No! You’re wrong!” How do we account for this relatively unique property of the moral experience?
The social theory of moral origins
I was long hesitant to adopt the standard naturalist explanation for the origin of morality as an evolutionary product of our social heritage. Nevertheless, I have since come to accept that the evolutionary development of a moral faculty driven by social selection pressures is quite plausible. In the following sections I attempt to summarize the key evidence and reasoning behind this conclusion.
Prosociality in non-human primates
If morality is an evolutionary product then there should be traces of it in other species and, in fact, morally relevant sociality is a characteristic of our closest evolutionary relatives (and beyond). This is perhaps best described by just about anything that Frans de Waal has published or, more immediately, his TED talk (below) offers a quick and accessible overview:
Social factors strongly influence our morality
If a social heritage was a key element in the development of our moral intuitions then we would expect to see that social forces have a continued impact on the expression of that morality. This appears to be the case:
- Social Awareness: A multitude of studies have demonstrated that even subtle awareness of “watchers” impacts our moral behavior. This may reflect a biological predisposition, but when we allow that our moral sense is in part a development that arises through our life experience, the social dimension of that development also corresponds nicely with this data point.
- Social Compliance: Setting aside survival instincts, ‘peer pressure’ is perhaps the most capable mechanism for getting us to act in opposition to our moral sense. The Milgram Experiment, the Stanford Prison Experiment, Nazi Germany and, more recently, Derren Brown’s “Push” program serve as some of the more extreme negative examples. However, this applies equally in reverse, where our tendency to realize an arduous moral good is substantially bolstered by encouragement from peers and anticipation of “other-praising” emotions.
- Social Feedback in Moral Development: From a developmental perspective, feedback about character and a disapproving response (a social consequence) is more influential on the formation of our moral sense than is feedback about the moral status of the action itself and a punishing response (a physical consequence).
- Social Comprehension: Our moral intuitions tend to calibrate moral culpability in accordance with the moral agent’s capacities and intentions. This feature depends on an interpersonal judgment built on a theory of mind, such as would be inherent in a socially developed morality where other agents inform that development.
In the end, it is clear that the social environment is a primary factor in our moral behavior even when the social consequences of our behavior lie well beyond our perception. This is consistent with the theory that social pressures have guided the development of our moral sense.
The rider and the elephant
The long-standing traditions of moral philosophy and ethics assume that moral judgment is primarily a rational endeavor, but this appears to be a flawed conclusion. Jonathan Haidt has famously compared our moral sense to a rider on an elephant – the rider being our reasoning process and the elephant being our emotionally driven intuitions. There is an extensive and constantly growing body of literature on this topic, so for a deeper dive on the role of emotion in morality I will simply refer to the writings of Joshua Greene and Jesse Prinz in addition to those of Haidt.
Regardless, the proposition that our moral sense is predominantly emotional only lends support to the social theory of moral origins when we consider empathy and the explanations on offer for the causal link between morality and emotions. Claus Lamm is one of the more prolific researchers of empathy and is a cautious voice at a time when many are hailing mirror neurons and empathy as the underpinnings of our moral intuitions. Despite this caution he affirms that “there is compelling evidence that similar neural structures are activated when empathizing with someone and when directly experiencing the emotion one is empathizing with” (here) and that “There is some support for the above-mentioned role of empathy in morality, although the direct link between empathy and morality remains rather unclear and requires further investigation” (here).
I hope to heed Lamm’s concerns but I also cannot help but step back to view the big picture and see a tidy set of links wherein our moral intuitions are largely dictated by an emotional elephant whose course can be directed by the neurological capacity to take on the perspective of others – a definitively social faculty. The cohesive picture this paints is compelling and when one considers the implications for moral origins, the social theory seems a natural fit.
The last piece of evidence I wish to present for the social theory of moral origins is the very concern which instigated this discussion – the apparently innate drive toward moral agreement. The desire to hold others and ourselves accountable to a particular moral standard has led many to conclude that morality itself is objective (in fact, this is the only non-pragmatic reason I am aware of for the claim of objectivity) but this phenomenon is also explained if our moral sense was developed through social pressures. To say that selection occurred through social pressures is to imply that there is a social dynamic to the evolutionary pathway. This, in turn, requires that there be some sort of reproductive advantage to the selected pro-social tendencies. However, a lone altruist among a band of free-riders is unlikely to realize any advantage. The advantages which arise from prosocial behavior are then also dependent on reciprocity and cooperation. This means that the development of prosocial behavior is most readily accomplished in coordination with the development of proclivities which favor agreement and reject disagreement with respect to those behaviors. The end result is not only a tendency toward prosocial behavior, but a tendency toward favoring agreement on those behaviors.
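The free-rider dynamic described above can be made concrete with a toy payoff model. This is purely my own illustrative sketch, not anything from the evolutionary literature I've cited: the strategy names, the benefit/cost values `b` and `c`, and the round count are all assumptions chosen for clarity. It shows the asymmetry in question: a lone unconditional altruist among free-riders loses out, while a group of reciprocators (who only help those who helped them) each fares far better than the free-riders do.

```python
# Toy sketch (assumed parameters, not from the post): helping costs the
# helper c and gives the partner b, with b > c.
b, c, rounds = 3.0, 1.0, 10

def play(strategies):
    """Round-robin repeated game; returns each agent's total payoff."""
    n = len(strategies)
    payoff = [0.0] * n
    # last[i][j]: did agent i help agent j in their previous encounter?
    # Reciprocators are assumed to start out cooperative.
    last = [[True] * n for _ in range(n)]
    for _ in range(rounds):
        for i in range(n):
            for j in range(i + 1, n):
                i_helps = strategies[i] == "altruist" or (
                    strategies[i] == "reciprocator" and last[j][i])
                j_helps = strategies[j] == "altruist" or (
                    strategies[j] == "reciprocator" and last[i][j])
                if i_helps:
                    payoff[i] -= c
                    payoff[j] += b
                if j_helps:
                    payoff[j] -= c
                    payoff[i] += b
                last[i][j], last[j][i] = i_helps, j_helps
    return payoff

lone = play(["altruist"] + ["free-rider"] * 4)
recip = play(["reciprocator"] * 5)
print("lone altruist among free-riders:", lone)   # altruist ends deep in the red
print("all reciprocators:", recip)                # everyone prospers
```

In this setup the lone altruist finishes at -40 while each free-rider pockets +30, whereas in the all-reciprocator group every agent ends at +80. Prosocial behavior pays only when it is met with agreement and reciprocity, which is exactly the point about selection favoring a drive toward moral agreement.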
Some will object here and suggest that our intuitions regarding the objectivity of morality are more like the intuitions we have regarding the veracity of a proposition (e.g., I am sitting on a chair) than they are like a drive toward agreement with others. I’m not sure this is a proper assessment, but I do agree that on the spectrum of intuitions about an entity’s objectivity, our moral intuitions are generally weighted closer toward the ‘objective’ end compared to more broadly subjective claims like beauty, ice cream flavors, etc… This is perhaps most evident in the language we tend to employ in moral discourse, where objectivity is often implied (though not always – and this implication is certainly also frequently present in other domains that are generally regarded to be subjective). That said, I’ll offer two thoughts in response:
- As noted above, morality is deeply entangled with emotion. The majority of other subjectively informed claims do not carry the same emotional weight, and this is a significant component of the perceived difference and the drive toward absolutes. That is, the strength of the underlying emotions compels us toward an unwavering perspective. There may even be some degree of a subconscious post-hoc rationalization informing an intuition of moral objectivity. The emotional elephant leads the way and the rider can only make sense of the world by rationalizing the course it’s taking as if that is simply reflecting the objective facts about the world. Neuropsychology is replete with examples of how our cognition engages in this kind of post-hoc rationalization and confabulation.
- Though speculative, it is not unreasonable to suggest that the evolution of our moral sense may have incorporated the same faculties which bear on our sense of objective veracity if this improves the effectiveness of morality as a motivating factor. Despite the protests of anti-realists, the data does seem to indicate that moral realism is more conducive to moral compliance than is anti-realism (see one, two, three). This makes intuitive sense – if we think that our moral judgments do not have any subjective wiggle room and we can thus be held objectively accountable to those judgments, then we are more motivated to align our behavior with those judgments. So if our moral sense evolved to incorporate some of the same cognitive machinery that helps us judge the veracity of non-moral propositions then the moral sense would be more effective in eliciting the advantage of moral behavior. The net result would be the subjective perception, to some degree, that our moral judgments are in fact objective. Subjective preferences like beauty wouldn’t carry the same selective advantage and so wouldn’t bear the same character in this regard.
Social origins objection #1: Widespread non-social moral intuitions
So what about those pervasive moral claims which are devoid of social impact? For example, why have so many cultures moralized purity and why has disgust been shown to influence our moral judgments? How does the social theory of moral origins explain this?
The first point to make on this topic is to note that whereas some moral claims are devoid of a direct social impact, they are typically not insulated from social feedback. In particular, the anticipation of shame is a significant factor in motivating against non-social behaviors which have been moralized.
Second, there may very well be an indirect social impact. In the case where purity or disgust is linked to the non-social moralized behavior we can note that an inadequate avoidance of pathogens is not only detrimental to the individual but also to that person’s social circle. The germ theory of disease converts a seemingly non-social disgust instinct into a socially relevant behavior, such that social judgment that accompanies moralization may in fact be efficacious.
Lastly, if our moral sense is largely an adaptive product of evolution then the evolutionary path is predicated on the behavior which corresponds with our moral sense (because the feelings themselves offer no selective advantage apart from behavior). Evolution favors efficiency, so it is likely that the neurological systems which serve to guide our behavior in general (through the feelings which motivate and inhibit) are also involved in our moral sense, such that there is some level of commonality in our interoception of the morally relevant motivations and the motivations which influence other aspects of our well-being. This would imply that there isn’t a ‘moral’ category that cleanly distinguishes moral interoception from other interoception. So even if the majority of the intuitions that we have categorized as ‘moral’ carry a social relation, it is reasonable that other, non-social intuitions may seem to fit that category as well.
Social origins objection #2: Culturally constructed morality
Many anthropologists have argued that morality is memetic, not genetic. That is, they suggest that the moral sense is learned and acquired from one’s environment – specifically, one’s cultural influences. I think there’s some truth to this perspective, but I don’t see that it is mutually exclusive with an evolutionary explanation. It seems quite evident that cultural influences serve to inform our moral intuitions, but this alone does not explain the aforementioned ‘moral referent’, that distinct component of our interoception. I do not doubt that one’s moral compass is informed by one’s environment, but it’s the compass itself that is primarily of interest here, and culture does not explain its existence in the first place.
This is an important concept when it comes to the discussion of moral progress. If morality is defined to be nothing more than a cultural construction then the realist is correct to suggest that there is no such thing as progress. However, if there is a biological basis for the moral sense then progress can be assessed relative to that faculty. Even if there is variability across persons, there is still a common origin that fosters some level of agreement at a fundamental level. Here anthropology re-enters the picture to support the notion of an innate moral nature, as elucidated in the work of Donald Brown and Richard Shweder. This is not to suppose that we can necessarily determine right and wrong answers to individual moral claims by reference to that nature alone, but rather to say that there is a general bent which our species shares.
What is a moral claim?
This was the question I asked long ago and hoped to also answer here. In case the preceding discussion has not made it clear, I am arguing that morality is the concept which refers to a particular set of feelings and intuitions that arise as a result of predispositions which developed in our species through social pressures and are shaped and influenced by our development, experiences and reasoning. As such, a moral claim is simply a claim which implicitly or explicitly refers to those feelings and intuitions (or their absence) as if they were properties of an action, person, object or event. This perspective entails a particular moral ontology, namely …
So it seems that in adopting this view I have officially joined the moral relativist camp. I am quite comfortable with the epistemology and ontology this entails (as outlined above) but these are not informing my conclusion in isolation. Other considerations include:
- Dependence on biology: Though I have already touched on this to some degree, there is much more that could be said. Neuroscience has increasingly demonstrated how variations in our neurology bear on our morally relevant judgments and behavior, as most famously illustrated by the classic cases of Phineas Gage and Charles Whitman (also see Patricia Churchland’s ‘Braintrust’ and, more briefly, David Eagleman’s article in the Atlantic for overviews). While this state of affairs is not logically inconsistent with moral realism, it sits more parsimoniously with a relativistic ontology.
- Moral diversity: In accordance with the biological dependence noted above, we observe that these variations manifest themselves in widespread moral disagreement. Though it is true that there are many claims where moral agreement abounds, and even some fundamentals that are nearly universal, it is also the case that moral disagreement is more rampant than is found in objectively arbitrated claims. That is, we are more likely to disagree about a moral claim than to disagree about a claim that is based on empirical observations. As before, though this condition is not incompatible with moral realism, it highlights a divergence from the ontologies we posit for most of the entities that we identify as objective and so it is in that sense unexpected. Conversely, such diversity is entirely expected under a relativistic framework.
Epistemology and ontology aside, relativistic normative ethics is admittedly troubling. Not because I am forced to subscribe to Dostoyevsky’s “all things are permitted” – the shallow characterization of relativism which completely abandons both normative ethics and moral discourse and is often parroted by theistic apologists. No, the trouble is that normative ethics are inherently social and even when we employ frameworks which seek to satisfy our moral intuitions about fairness and reciprocity, such as social contract theory, we are unable to realize the ideal. The application of a normative ethic at the social level will require some level of subjugation wherever there is genuine moral disagreement. Perhaps this is simply an inescapable tension which is intrinsic to our moral sense; a consequence of the unavoidable competition between the benefits of both freedom and cooperation. Just as the realists must concede the inability to objectively arbitrate the moral truths to which they subscribe, perhaps the relativist must concede that the implementation of normative ethics cannot escape the morally distasteful act of imposition. Thrasymachus made a similar observation 2500 years ago and as far as I can tell we’re no closer to a solution. It’s worth continued discussion, but I have grown increasingly skeptical that it will ever be resolved.
Moral relativism also does not mean that we surrender our ambitions of moral progress. There is a human nature and even pervasive moral intuitions are sometimes inconsistent, or in conflict with our nature, or uninformed or misinformed by errant beliefs. Moral discourse and experience can elicit change so that our moral judgments are more accurately aligned with reality and with our inherent nature. Relativism does not mean that we accept all moral claims as equally true. It does not entail pacifism, complacency or anarchy. It does not ask us to ignore our sense of indignation and stand idly by. No, none of these strawmen are true if you’re willing to scrutinize your moral judgments. Can a moral relativist tell somebody else that their behavior is wrong? Yes, but be ready to expose the inconsistencies and faults in their reasoning. Can a moral relativist promote or discourage social policy? Yes, but be ready to use evidence to justify your position, preferably with reference to fulfillment of human nature. Can a moral relativist fight back or intervene when they perceive wrong? Yes, of course. I’m not sure I understand why I even feel the need to answer that question but the rhetoric around this issue suggests that I do.
The big objections
Which leads to the big question. It was going to happen eventually, so I might as well put Godwin’s law into effect now: “Relativism, huh? So the Nazis weren’t wrong?” Under relativism I am able to say that the Nazis were wrong according to my intuitions and those of everybody I know, but I’m not making an absolute claim. Notice that the framing of the objection begs the question for moral realism, so it’s a bit of a trap that tries to force a response within the bounds of that assumption, pushing one to grapple with the intuition toward objective morality that was the focus of the prior discussion. That said, it seems to me that it’s also very reasonable to argue against the legitimacy of the Nazi program on the grounds of errant beliefs and an inconsistency with the moral nature of those who carried out the program. Furthermore, as noted above, there is nothing about relativism which entails inaction or ambivalence toward those with whom we disagree.
“and there’s nothing wrong with torturing babies for fun?” Again, I am perfectly able to say that this is wrong according to my intuitions and those of everybody I know, but I’m not making an absolute claim. However, this is a bit more difficult because there isn’t any reason in this case to also object on the grounds of errant beliefs or conflicts in human nature. If an individual were to be biologically disposed so that they did not find this behavior morally abhorrent then I have nothing but disagreement to offer (though I would argue that in a practical sense, the realist is in the same position). As before, this does not entail inaction or ambivalence.
The last word
In the end, moral relativism is neither pacifism nor a blank check. It requires introspection, reasoning, evidence and discourse. We sometimes act in ways which are in opposition to our true values and intentions; we experience regret. Relativism suggests that you take a hard look and try to understand those values and intentions – to consider whether they actually align with your nature and to examine how they are best achieved – and then to direct your life accordingly. You will still mess up, but at least you are trying and that diligence can eventually shift the underlying feelings and intuitions into closer alignment with reason and, hopefully, reality.
“Ha! Caught you. That’s self-defeating! You can’t say that moral relativism requires scrutiny of our moral judgments! That’s an absolute moral claim!”
I have indeed made a normative assumption, but that assumption was not moral. It was an assumption about the reliability of cause and effect. So allow me to rephrase: moral relativism is most rational and most able to accurately satisfy our morally relevant desires when coupled with introspection, reasoning, evidence and discourse.
I embarked on this truth-seeking pilgrimage four years ago and in doing so devoted myself to following the evidence wherever it leads. Accordingly, I have refrained from aligning with any particular moral theory for most of that time. It is an incredibly complex, confounding, divisive and emotionally draining topic. Evidence is difficult to gather and interpretations abound. So while I have finally taken the step of adopting a moral ontology, it is perhaps more tentative and provisional than any other position that I have staked, even as I recognize that this hesitancy is almost entirely emotionally motivated. Regardless, if you disagree with the conclusion then you are welcome to try and change my mind. That’s why I’m here.
For some time I have been slowly working through a gargantuan post that aims to review and comment on each and every one of the 355 Prophecies Fulfilled by Jesus (and there’s still a long way to go). In the course of that process I’ve had to put some thought into the concept of typology, which claims that some earlier entity or event (E0) is a type, or prefigure, of a later entity or event (E0+t). With regard to prophecy, the idea is that E0 is directed toward E0+t in a teleological sense – that is, E0 existed for the purpose of serving as a pointer to E0+t. As I see it, this is a type of retrocausality, in that we could say that we have E0 because of E0+t. My understanding is that this was commonly accepted as a valid perspective in the ancient world, which stands in contrast to a more modern, “scientific” conception of causality that operates only according to the arrow of time.
However, I have also been reading Sean Carroll’s ‘From Eternity to Here’ which, if I’m understanding correctly, suggests that the temporal causality we see (that earlier events ’cause’ later events) is merely a macroscopic artifact of the universe having started in a low entropy condition. At root, all physical laws are reversible, such that there isn’t really a direction of cause and effect – there’s just a universal trend from lower to higher entropy because high entropy states are simply more probable than low entropy states.
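Carroll’s point about entropy can be illustrated with a simple counting exercise. The coin-flip model below is my own illustrative sketch, not an example from the book: it treats a “macrostate” as the number of heads among N coins and shows that balanced (high-entropy) macrostates are realized by astronomically more microstates than ordered (low-entropy) ones, which is why the universe drifts from low to high entropy without any directed law of cause and effect.

```python
# Illustrative sketch (my own toy model): entropy as a count of microstates.
# For N coins, the probability of a macrostate (a given number of heads) is
# proportional to how many distinct arrangements (microstates) realize it.
from math import comb

N = 100  # number of coins (an assumed, arbitrary size)

def microstates(heads: int) -> int:
    """Number of distinct coin arrangements with exactly this many heads."""
    return comb(N, heads)

ordered = microstates(0)        # all tails: exactly 1 arrangement
balanced = microstates(N // 2)  # 50/50 split: about 1e29 arrangements

print(f"all-tails microstates: {ordered}")
print(f"50/50 microstates:     {balanced}")
print(f"ratio:                 {balanced / ordered:.3e}")
```

Nothing here privileges a temporal direction: the counting is the same whichever way you run the film. The “arrow” only appears because a system starting in a rare, ordered macrostate almost inevitably wanders into the vastly more numerous disordered ones.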
So now I find myself intuitively balking at the nonsense of the retrocausality suggested by typological claims while simultaneously pondering this entropic perspective on time and the reversibility of physical laws, and subsequently wondering whether E0+t really can be a valid part of the explanation for E0. I’m not sure I’ve really wrapped my head around this, so I’m hoping for some additional insight from any readers who feel like they might have something to offer. In short, does a properly scientific perspective on time and causality lend credence to notions of retrocausality, such as we find in claims of prophetic typology?
Note that I am not suggesting that prophetic typology claims would thus become the best explanation for an identified relationship between E0 and E0+t as a result of this perspective. We can still identify the best (i.e., more probable) explanations according to the probabilistic description of entropy, which we perceive as a causal direction from past to future in accordance with physical laws. The question is only whether those prophetic claims are more compatible with a proper scientific perspective on causality versus the classical view of an inviolable temporal order from cause to effect.
Sorry about all the $2 words in the title. Even if that didn’t make sense, I hope the rest of the post still does.
A couple years ago I wrote a post titled “Reconciling the Crucified Messiah“, where I summarized a naturalist perspective on the origin and ascent of a religious sect that was centered around a crucified leader; which is admittedly a bizarre turn of events. That post briefly discussed the development of Christian atonement theology as a consequence of the crucifixion and how that reconciliation was critical to transforming a seemingly insurmountable setback into a hallmark of the faith. But this new atonement theology did not entail that the salvation afforded by the atonement is only available to those who believe, and so here I would like to consider another curious yet synergistic development of the Christian movement: the introduction of doxastic soteriology (doxastic = “related to belief” and soteriology = “doctrine of salvation”, so a doxastic soteriology is a doctrine in which salvation is in some sense dependent on belief). I propose that this was largely driven by eschatological concerns (i.e., related to the end of the world / final judgment).
Despite my Christian bubble having been popped almost four years ago, it only recently occurred to me that belief in Jesus (as messiah, lord, savior, etc…) might not have been viewed as a requirement for salvation in the earliest days of the movement. A doxastic soteriology certainly doesn’t appear to have been part of the mainstream Judaism to which Christianity owes its roots and, from a naturalistic perspective, it seems highly unlikely that Jesus himself taught that people had to believe in him to be saved, despite what the Gospel of John portrays.
So what happened?
There are several points of contact which show that the Nazarenes (early Christians) shared some influences with the Qumran community (whether directly or indirectly). Among these is an eschatological perspective in which the demarcation between the elect and the damned fell not along ethnic boundaries, as was implied by traditional Judaic eschatology, but rather around ideological boundaries. To the Qumran community, the elect were those who aligned themselves with the community lifestyle and ideology. It appears that this perspective was in part driven by a perception of religio-political corruption (e.g., the “wicked priest”) and the wish to exclude undesirable religious figures from Yahweh’s kingdom – a theme that is mirrored by the gospel narratives and was quite possibly an element of Jesus’ teaching. A similar shift was also occurring throughout greater Judaism in the second temple period. Ever since the Babylonian exile, the Jews had been trying to figure out how to deal with the diaspora and cultural intermingling. The rise of decentralized worship in synagogues and the need to accommodate cross-cultural relationships spurred a decline in the traditional ethnocentric eschatology that the earlier prophets had envisioned as they lamented the conquests of Israel. As a whole, the Judaic quest for future justice was gradually transitioning from an ethnic foundation toward ideological foundations.
Combining this with the widely accepted understanding of Jesus as an eschatological prophet, we can imagine that Jesus and his followers considered themselves to be bearers of the gospel, where the good news was not that Jesus was going to die for your sins, but rather that the end of days was imminent – perhaps even facilitated by Jesus’ prophetic ministry – and that you too could be part of the eternal kingdom if you repented and adopted the lifestyle and ideology of their sect. This message may even have disregarded ethnic boundaries. From this we can see that the seeds of a doxastic soteriology were present in Jesus’ message, but were only germinating. After the crucifixion, more changes came into play.
First, we have the Nazarenes continuing to proclaim their eschatological message despite their messiah having been killed and, furthermore, cursed by Yahweh as a consequence of having been hung and left exposed on a tree (Deuteronomy 21:22-23). Though the Nazarenes appear to have wanted to remain Torah observant, their message became increasingly disagreeable and divisive as they continued to exercise midrashic liberty in defense of Jesus as messiah. As a result, the gulf between their sect and mainstream Judaism grew and they were, as a whole, steadily pushed and pulled away from participation in Jewish communities.
Then, as we consider the growing chasm between the Nazarene sect and mainstream Judaism we can turn back to the Qumran example to see what happens – namely, an eschatological evolution in which the opposing party is excluded from salvation (that is, participation in the eternal kingdom). As a close relative of Judaism, the early Christians had very few distinctions that could be used to draw that eschatological line in the sand. However, above all else, there was one thing that separated them from mainstream Judaism: belief that Jesus was the messiah. And so Christianity’s doxastic soteriology was born. As that chasm continued to grow so also did the prominence of belief as a central dogma of the Christian soteriology, reinforced by the synergistic coupling of a new atonement theology that was dependent on the object of that belief and independent of the temple sacrifices. Going one step further, the adoption of this eschatologically motivated doxastic soteriology also served to emphasize the significance of Jesus and so was perhaps instrumental in his eventual elevation as coequal with God.
A visitor to Nate’s blog caught my attention a couple months ago when he started defending the ‘traditional’ view of Daniel’s authorship and prophetic legitimacy. I couldn’t resist participating in the discussion, given the time I spent studying Daniel, as documented in my posts on the prophecies of the kingdoms, Daniel’s authorship, and whether Jesus fulfilled the 70 weeks. This is an interesting topic due to the potential it holds as perhaps the best candidate evidence for a divine fingerprint on the text of the Bible.
Before going any further, let me start by saying that this visitor, Tom, has compiled the most thorough and reasonable defense of the traditional view of Daniel that I have ever encountered. I commend him for the time and effort that he put into it, even if I disagree with the conclusion. Regardless, in this post I want to review some of the new data I encountered (or more seriously revisited) during that discussion and offer some insight into why I find these ‘new’ arguments for an early authorship unconvincing.
Ezekiel’s reference to Daniel
It is generally agreed that Ezekiel was initially composed within the period of the Babylonian exile, and more importantly, well before the 165 BCE date attributed to Daniel under the Maccabean thesis. This means that a reference to the person of Daniel in Ezekiel 14 would seem to confirm the existence of the person described by the book of Daniel, which is more consistent with the view that it is an early composition. I didn’t give this much attention in my prior study because it wasn’t obvious, for several reasons, that this was a reference to the Daniel of interest (note that the name is spelled slightly differently – דנאל in Ezekiel versus דניאל in Daniel, which has an extra yud – so the Ezekiel figure is sometimes rendered as Dan’el). It also didn’t seem very important, given that I agree with the scholars who think some of the narratives in Daniel probably have their roots in traditional stories that pre-date the Maccabean composition. Regardless, Tom presented this as a key piece of evidence for the early authorship of Daniel and much discussion ensued.
The main thrust of the argument centered around the coupling of Daniel in Ezekiel 14:12-23 with Noah and Job as an exemplar of righteousness, and as exceedingly wise in Ezekiel 28:3. The primary alternate candidate for Ezekiel’s reference is to a Dan’el character known from Ugaritic sources and Tom argued that he is a poor fit due to his non-Yahwist allegiances. Eventually, Nate pointed out that Ezekiel refers to the “sons and daughters” of Noah, Daniel and Job, which is inconsistent with the life of Daniel from the book of that name. This piqued my interest and so I went back and re-read the passages from Ezekiel, at which point I was struck by a new insight.
Ezekiel 14:12-23 presents itself as a message from Yahweh to Ezekiel, offering insight into the nature of his relationship with Israel in light of the Babylonian conquest and exile. Here’s my summary of the message:
If I [Yahweh] pour out just one type of wrath (famine, animals, sword or plague) on a nation, the righteousness of people like Noah, Daniel and Job will only save themselves and no descendants will be spared. However, if I pour out all four forms of wrath on Jerusalem, you will see a remnant survive and their unrighteousness will show you why I brought punishment.
So it seems there are two points being made: (1) the calamity which has befallen Jerusalem was not undeserved, and (2) the Jewish nation is special in that God will show them mercy and not wipe them out entirely. Now, if this is a proper understanding of the passage – and I think it is – then it makes absolutely no sense that Daniel, a member of the Jewish remnant, would be named as a member of the nations that would be devastated by the wrath of Yahweh in contrast to the Jewish nation (though Job and Noah, as pre-Abrahamic characters, do fit). For the first time, it became clear to me that Ezekiel was not referring to a contemporary young Jew named Daniel, regardless of whether he was referring to the Ugaritic Dan’el or not.
The Letter to Aristeas
At one point, Tom suggested that my post on the authorship of Daniel had incorrectly identified the commissioning of the Septuagint in the 3rd century BCE as only including the Torah – he believed that the Letter of Aristeas shows that all the books now included in the Tanakh were part of that effort. If this were true, and Daniel was included in that translation, then this would be a defeater for the Maccabean thesis. So I went back and reviewed the Letter of Aristeas again.
First, a little background. The Letter of Aristeas is generally believed to be a later forgery that draws upon a series of possibly historical events which resulted in the commissioning of the official Greek translation of the Jewish law, known to us as the Septuagint. It claims that this was instigated by a suggestion posed to Demetrius of Phalerum, who was in charge of Ptolemy’s effort to collect “all the books of the world” for the famous Library of Alexandria. The majority of the letter references the “law of the Jews”, but in one spot – a purported memo from Demetrius to Ptolemy – it suggests adding translations of “The books of the law of the Jews (with some few others)”. This is the closest thing we get to a statement that the translation included more than the Torah. However, the purported letter from Ptolemy to the Jewish High Priest Eleazar only requests the law, and the subsequent response from Eleazar to Ptolemy says that “I selected six elders from each tribe, good men and true, and I have sent them to you with a copy of our law“. At the very best, the initial memo which proposed “some few others” correctly represents the actual effort and those few extra books just weren’t mentioned in the later correspondence. In that case, there may be a slim chance that Daniel would have been included. It seems much more likely, however, that the earliest translation effort only covered the Torah.
References to Daniel in …?
One of the other key arguments raised by Tom was that Daniel had been referenced by several different pre-Maccabean texts. I had not previously encountered this claim for some of the alleged references, so I decided to dig in and take a look. As before, this would be a devastating blow to the Maccabean thesis if true.
- Tom proposed that “Tobit contains clear verbal allusions to Daniel.” Here he cites a paper by Roger Beckwith that argues for several different links to Daniel from pre-Maccabean sources. In that paper Beckwith states that:
[Tobit] envisages a second more general return from exile … as the prophets of Israel spoke concerning them, which is to take place at ‘the time when the time of the seasons is fulfilled’. This glorious future rebuilding of Jerusalem and its temple is probably seen by the author as foretold by Isaiah and Ezekiel respectively. But who fixed ‘the time when the time of the seasons would be fulfilled’ for this to happen? Could it be anyone but Daniel?
I was interested to discover that the times and seasons language of Daniel was in fact present in Tobit. However, contra Beckwith, it does not put these words in the mouths of the prophets – rather, this is the language used by Tobit himself in his prophecy. So the direction of borrowing is not established, or even implied, and it seems equally likely that the author of Daniel picked up this language from Tobit (or from the apocalyptic communities influenced by Tobit).
It’s also interesting to note that Tobit claims to take place after the Assyrian captivity, which would predate Daniel. This means that if one is to argue for the early authorship of Daniel by suggesting that Tobit borrowed from Daniel, then it logically follows that the arguer accepts that there is precedent for Jews producing pseudepigraphical works that were written after the fact to appear as if a known event had been prophesied. Sound familiar?
- Tom also claimed that “the Hellenistic Jewish historian Demetrius . . . had already . . . drawn up . . . [a] chronology of the seventy-weeks prophecy in Daniel 9 in the late third century B. C.” The footnote pointed us to another publication by Roger Beckwith, but one which is not readily available. Regardless, it didn’t take long to figure out that the original source is Clement’s Stromata Book 1, where it says:
Demetrius, in his book, On the Kings in Judaea, says that the tribes of Juda, Benjamin, and Levi were not taken captive by Sennacherim; but that there were from this captivity to the last, which Nabuchodonosor made out of Jerusalem, a hundred and twenty-eight years and six months; and from the time that the ten tribes were carried captive from Samaria till Ptolemy the Fourth, were five hundred and seventy-three years, nine months; and from the time that the captivity from Jerusalem took place, three hundred and thirty-eight years and three months.
I could not extract any allusion to the 70 weeks of Daniel 9 (which proposes 490 years from the Babylonian exile to the final judgment) and Tom never responded with an explanation.
- Another claim was that “Ecclesiasticus [Sirach] clearly refers to Daniel and contains a prayer that the prophecies of Daniel would be fulfilled soon“. This points us to Sirach 36:6-7 and 14-15, which says:
 Rouse thy anger and pour out thy wrath; destroy the adversary and wipe out the enemy.  Hasten the day, and remember the appointed time, and let people recount thy mighty deeds. …  Bear witness to those whom thou didst create in the beginning, and fulfill the prophecies spoken in thy name.  Reward those who wait for thee, and let thy prophets be found trustworthy.
This whole case rests on the prospect that the phrase “the appointed time” is being borrowed from Daniel and that the ‘prophecies’ are referring to Daniel and not any of the other eschatological prophecies in existence at the time. However, there’s no clear reliance on Daniel and the phrasing of an “appointed time” is also present in other eschatological contexts (Psalm 75:2 & 102:13, Habakkuk 2:3).
- Lastly, Tom proposed that the visions of Zechariah only make sense if Daniel was already known. I never really understood this argument and explanations were lacking, but I did discover something new in the process of trying to understand it: many scholars suspect that Zechariah and Haggai were once part of a single text. When taken as a whole, it appears very likely that the prophecies and visions therein point toward an expectation that Yahweh’s eternal kingdom would arise through the reign of Zerubbabel and the high priest Joshua (see Haggai 2:6-9 and 21-23, Zechariah 3:8, 4:9 and 6:11-13). This timeline is clearly different from what one would expect if Daniel had been in view.
There’s a lot more that was covered throughout the discussion and a ton of material in the document Tom put together but at this point I don’t feel the need to systematically dissect every single argument. The time spent researching these additional claims substantiated my suspicions that the ‘clear’, ‘obvious’ and ‘conclusive’ evidence for an early authorship is just as suspect as the data I had already reviewed. In the end, I feel pretty comfortable with the conclusions I’ve reached thus far – namely, that the book of Daniel, as we know it, was largely constructed in the midst of the Maccabean revolt by building upon a pre-Maccabean tradition to introduce prophecies that appear to predict events contemporary with the author.

More specifically, my guess is that chapters 3 – 6 form the core of the pre-Maccabean tradition (though probably not as a unified text, and perhaps only as oral traditions). The dream and interpretation in chapter 4 and the hand-writing interpretation of chapter 5 then served as the inspiration for a Maccabean redaction to create the chiastic text of chapters 2 – 7, adding the chapter 2 and 7 prophecies, all in Aramaic. A second contemporary redactor then built upon this to add the introduction in chapter 1 and chapters 8 – 12 in Hebrew. As a young and volatile text, different versions, additions and arrangements of these redactions were available and are reflected in the Greek translations (LXX and Theodotion). This is my best shot at accounting for all the data.
Yeah, I missed April. Hopefully I’ll make it up with an extra post in the near future.
The argument from design is perhaps the most intuitive and immediately accessible argument for the existence of God and can be analyzed from a myriad of different perspectives. We are surrounded by astounding complexity and see purpose in nearly everything. William Paley was reasonable to suppose that the watch implies a designer and the design proponents are reasonable to say that life is brimming with the appearance of design. But fifty years after Paley’s death, Charles Darwin published “On the Origin of Species” and the design explanation suddenly had a legitimate competitor.
When I consider the arguments for these two options – design and chance – I find myself repeatedly drawn to a niggling question: if design is correct, why is life designed in a way that is plausibly explained without design? That is, if the designer wanted us to infer design then it would seem that he could have done better. Upon making this assertion, the apologist in my head immediately responds with an emphatic “Like how?”, implying that I am posing an alternative that may not be viable. In this post my aim is to explore that very question through a few counterfactual conditionals.
Counterfactual #1: Reproduction
The first counterfactual condition I would like to consider goes something like this:
“If God really wanted to reveal himself through the genetic design of living organisms then the mode of perpetuating life would defy a purely naturalistic evolutionary paradigm.”
Those familiar with the intelligent design movement will recognize that this is similar to what those proponents often claim. The arguments are rife with assertions of irreducible complexity and astronomical improbabilities for the spontaneous assemblage of molecules while simultaneously disparaging any plausible explanation for the origin of those structures as ad hoc speculation. Though it may be true that it is extremely difficult to verify and obtain evidence for those explanations, this does not negate the fact that those explanations are plausible and consistent with the regular mechanisms of nature. Perhaps with a little imagination we can identify a way in which the designer might have made it more clear that life was not a purely natural phenomenon…
Let the earth bring forth living creatures NOT after their kind…
We are really only familiar with one kind of life: the kind where amino acids combine in various ways and facilitate production of new life which is nearly equivalent to the parent(s). We see this in bacteria, flowers, frogs and people. We call it reproduction because the output is essentially a new instance of the producer(s). The variation from parent to child is relatively insignificant compared to the full volume of information embedded in the process. For our purposes here, we can essentially say that A => A => A => …, or, in other words, life form A only begets life form A and nearly all genetic information is carried forward.
Now consider an alternative to this. Collections of molecules regularly interact with other molecules in the environment to produce new molecular structures. In fact, this is exactly what is happening when our DNA guides the production of proteins. Those proteins are wholly different from DNA and go on to perform many functions and interact with other molecules in ways which lead to other changes in chemical structures. These reactions may carry on for some time, maybe indefinitely, without ever going through the same cycle of inputs and outputs. This is like reproduction, but with the key difference that the product has a markedly different chemical structure than the producer. I propose that this scenario hints at a possible second mode of life (unified material which is capable of producing new life) which looks something like:
- A => B => A => …, or
- A => B => C => D => A => …, or
- A => B => … => Z => A => ….
The set of possible Rube-Goldberg like chains of production is enormous, so long as there is a recursive structure that allows us to avoid an infinite regress and constrain life as the set of outputs within the cycle. Otherwise – without recursion – every possible reorganization of matter would be “life” in some weak sense.
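To make the distinction concrete, here is a toy sketch (entirely my own illustration, with hypothetical labels, not a model of any real chemistry): each life form is a label, production is a mapping from one label to the next, and "life" under the second mode is whatever set of outputs forms a closed cycle. The recursion requirement amounts to checking that following the chain of production eventually returns to the starting form rather than regressing forever.

```python
def production_cycle(produces, start, max_steps=1000):
    """Follow the chain of production from `start`.

    Returns the list of forms in the cycle if the chain recurs
    (i.e., eventually produces `start` again), or None if no
    recursion appears within the step budget (an open-ended
    regress, which would not qualify as a mode of life here).
    """
    chain = [start]
    current = start
    for _ in range(max_steps):
        current = produces[current]
        if current == start:
            return chain  # closed cycle: the recursive structure we required
        chain.append(current)
    return None  # no recursion found

# Mode 1: ordinary reproduction, A begets A.
mode1 = {"A": "A"}
print(production_cycle(mode1, "A"))  # ['A']

# Mode 2: a Rube-Goldberg chain, A => B => C => A.
mode2 = {"A": "B", "B": "C", "C": "A"}
print(production_cycle(mode2, "A"))  # ['A', 'B', 'C']

# An open-ended regress that never returns to A is not a cycle.
regress = {"A": "B", "B": "B"}
print(production_cycle(regress, "A"))  # None
```

The point of the sketch is only that the constraint in the paragraph above is well-defined: without the requirement that the chain close back on itself, any sequence of transformations would count, and the proposed second mode of life would lose its meaning.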
What are the odds that life, under the guidance of purely natural processes, would arise to operate under this second mode instead of the first? This question is probably answerable even if I’m not going to try and expend the resources to calculate it here. Regardless, it’s clear that the probability of this occurring by chance is significantly less than it is for the type of genetic duplication we see in the world now. So, at the very least, we have identified a possible mode of life which would have been a stronger indicator of design than is inferred by the current paradigm. Perhaps the current mode of life was intelligently designed, but if so, then it seems that intelligence might not have wanted us to know.
Counterfactual #2: Intelligence
Inspired by a recent post by Nate at ‘Finding Truth’, the next counterfactual condition I would like to propose is:
“If God really wanted to reveal himself by blessing us with advanced cognitive abilities then our cognitive limitations would not be compatible with the naturalistic evolutionary paradigm.”
Nate’s post was spurred by a theist’s claim that our advanced cognitive abilities, such as “philosophical insight, scientific acumen, or mathematical skills” defy natural explanation. I responded by suggesting that the converse seems more accurate.
We have become increasingly aware of our cognitive limitations as we have applied scientific methods to observation of human behavior, revealing a pervasive susceptibility to error through inherent biases and external influences (see Kahneman’s ‘Thinking Fast and Slow’ for a nice introduction). In fact, the scientific endeavor itself is a process for minimizing those errors. I outlined my own criteria for discernment (Part 1, Part 2) a few years ago when I realized that it was an integral and necessary part of any truth-seeking journey.
But this goes beyond errors in judgment. A substantial body of research is showing just how fragile and malleable our long-term memories actually are. The memories of our past are largely reconstructed. Even our short-term memory is limited to about 7 items. Then there’s also the consideration of those alleged “mathematical skills”. Hasn’t the advent of computers shown us just how slow and error prone our math skills actually are compared to what is possible?
There’s really no telling where we lie on the continuum of intelligence. Yes, relative to other lifeforms on earth we seem to be at the top, but as technological advances continue to give us glimpses into the kind of reliability which may actually be possible you can’t help but feel like we aren’t so close to the pinnacle after all. So, if a designer is trying to reveal himself through the gift of advanced intelligence, then why do these findings make it so easy to imagine a better human who isn’t dependent on tools and processes to mitigate against cognitive error and limitations? The holy books which purport to capture knowledge of supernatural origin also seem to be consistent with a natural origin and betray the humanity of their authors. Where is the evidence of a supernaturally gifted intelligence? It seems more likely that we’re just doing the best we can with the empirically grounded capacities which have aided our survival over the millennia and that we owe nearly all of our advanced knowledge to the cumulative efforts of past generations who have worked hard to pass on their knowledge of “what works” so that we don’t have to rediscover everything.
Counterfactual #3: Natural Moral Consequences
When I saw the most recent post at 500 Questions about God & Christianity I couldn’t resist including it here. The post asks “Why doesn’t sin carry natural consequences?“, which he translates into a counterfactual near the end of the post when he says “If God is truly the creator, and the commands in the Bible are his (and not man’s), then we might expect to see the creator enforcing his rules through his creation, but we don’t (suggesting the laws laid out in the Bible were reasoned by men, and not God).” Or, to put it in the context of questioning biological design as revelation, “If God valued the revelation of moral truth (and thus his moral nature) more than our physical comfort then he would have designed us to discover moral truths in ways that are more efficacious than the way that pain teaches us to avoid physical harm”. Moral disagreement is rampant, yet we all pretty much agree that it’s painful to touch things that are hot or sharp.
If you haven’t already, I highly recommend checking out the 65 other questions. The whole blog is pretty much one giant counterfactual argument.
O man, who art thou that repliest against God?
Shall the thing formed say to him that formed it, Why hast thou made me thus?
– Romans 9:20 (KJV)
At this point you may wish to accuse me of naive arrogance in supposing that I can deduce how God should behave. You are right, but I ask that you hear me out. Certainly, if God exists, I am in no position to tell him how he should act, but this says nothing of how we are to interpret the evidence for his existence. If I wake on Christmas morning to find a set of binoculars under the tree made out of two toilet paper tubes, scotch tape and string, it is entirely reasonable to conclude that it was produced by my children and not by Nikon or Bushnell. Likewise, if God wanted us to infer his presence from the life found in his creation, then it seems he could have done better. If God directed acts of special creation, or the course of evolution, then it would appear that he chose to leave a signature which is indistinguishable from what we might get from a lawful yet unguided process. Does this sound like the behavior of somebody who wants us to know him?
This observation offers no definitive conclusions regarding the question of whether a designer lies behind the structure of life and counterfactual arguments are inherently weak due to their speculative nature. What it does do, however, is offer an argument which generally favors either (a) the absence of a designer, (b) a designer who doesn’t really want us to find him through inference to design, or (c) a designer who is incapable of generating the most compelling inference to design. None of these fit with the classical theistic definition of God:
For the invisible things of him from the creation of the world are clearly seen, being understood by the things that are made, even his eternal power and Godhead; so that they are without excuse
– Romans 1:20 (KJV)
Feel free to share any other counterfactual arguments against biological design as revelation, or conversely, to show me the folly of my ways.
Whew, that was close. March……..☑
Does belief in God improve cooperation? In case you haven’t seen it yet, a new study published today in Nature says that the answer is yes (a better summary of the study is available at ScienceNews). The authors go on to suggest that “beliefs in moralistic, punitive and knowing gods increase impartial behaviour towards distant co-religionists, and therefore can contribute to the expansion of prosociality.” In other words, the apologists have been right all along – we can’t be good without God.
That is of course a sensationalized caricature of the study, but this isn’t really a surprising result given the data that we have already collected. For example, it has been well established that even just a subliminal hint that we’re being watched will yield more prosocial behavior. We’ve also seen that priming thoughts about God improves prosociality and that exclusion from a group decreases prosocial behavior. And on a related note, there’s good evidence that a belief in free will also increases prosociality. So the data has been pointing toward this conclusion for some time, but what does it really mean?
The evolution of God?
Another bolus of research indicates that we are innately predisposed to God belief. Where the theist claims this as evidence of God’s fingerprint on our subconscious, the naturalist has responded with theories of agency detection. The research above has nothing to do with detection of agents but may still be relevant to the question of an innate God belief. While the benefit of agency detection certainly makes sense in the “you’re better off running away even if it’s only a tiger 2% of the time” sort of way, there’s still a leap to the theistic conception of an omni-God. Could it be that in the millennia which have ushered in civilization, a sort of natural selection acting on cooperation has bolstered and tweaked that innate predisposition into one which favors an all-seeing, omnipresent God who encourages our cooperation under every circumstance?
Chaos in the absence of belief?
The nones are growing at a rapid clip. Do these findings mean that the rise of an unbelieving society will degenerate into moral chaos? It’s obvious that Pat Robertson and much of conservative Christianity thinks that is the case, but perhaps we can flip the question on its head and ask whether the rise of the nones has in part been facilitated by the replacement of God with something else. Consider the far-reaching scope of surveillance, the ubiquity of mobile audio and video capture and the advances in forensics over the years. In the absence of an immediate deterrent, the odds that we will still be held accountable for our actions have increased dramatically over the last few decades. I wager that this has not escaped our attention. But if we are to take Steven Pinker at his word, we have also become more prosocial over time. So unbelief is on the rise concurrent with a rise in prosociality. The research cited above would predict the opposite result – unless some alternative is taking the place of God. Despite all the concern about our loss of privacy, perhaps Big Brother is just what we need.
What do you think?
2015 was an embarrassingly quiet year on this blog. I spent a lot of time early in the year reading and thinking on moral ontology and after outlining a big multi-part series of posts on the topic I found myself all wrapped up in semantics and disenchanted with the project, so I abandoned it and took a break. More recently the dearth of content has been primarily driven by a need to spend my time on other matters – a condition that will almost certainly persist for the next few months.
Regardless, I want to do better. There are still so many ideas and so much content that beckons. With a profusion of drafts patiently waiting in my queue, I know I can muster up a few hours a month to publish something. So that’s my goal. At least one post per month this year. Even if it means cheating and publishing some pithy article with minimal meaningful content – you know, like the three week late “new year’s resolution” article that you’re currently reading (January……..☑).
In preparation for this, I reviewed my drafts and consulted my Magic 8-ball and put together a sampling of article titles that I would like to try and complete this year. Let me know what you think looks most interesting and maybe I’ll prioritize accordingly. I look forward to many interesting discussions in the coming year. Hopefully.
Oh crap. Did I really just quasi-commit to one post a month?
Wow, it’s been nearly 6 months since my last post. How did that happen? Oh well. I still have a lot of posts in the works, and someday I hope to actually finish and publish them, but for now I just wanted to offer a podcast recommendation.
Phil Harland is a religious studies professor at York University in Toronto and runs a podcast called Religions of the Ancient Mediterranean. I just finished listening to the first seven series and, while much was familiar, it was very educational and a great way to experience a scholarly yet secular view on Christian origins and related topics. I highly recommend adding it to your collection of listening material. Enjoy!
I thought I had something like an epiphany several weeks ago and had finally identified a theory of ethics that I could say was, from my perspective, “most probably true”. I started writing and had drafted outlines for a 9-part series. I wrote, and read, and thought … and then I stopped. I hit a wall. The theory, like every other moral theory ever, was incomplete. There were unexplained assumptions and unanswered questions.
The pseudo-epiphany began with a realization that I had misunderstood the core definition of moral realism, which is
Moral Realism: Moral claims can be true or false and some are true.
(extracted from the Stanford Encyclopedia of Philosophy article)
Despite my interest and reading on the nature of ethics these last couple years, my prior conception of moral realism did not align with the definition above. Through numerous sources and interactions I had been led to define moral realism as requiring ontological independence – that morality, in a sense, exists on its own in some way (though I should note that the SEP article does add the disclaimer that “some accounts of moral realism see it as involving additional commitments, say to the independence of the moral facts from human thought and practice, or to those facts being objective in some specified way”). I guess that’s what happens when most of your education on ethics comes from sources in the God debate. Regardless, the definition given above is much less restrictive in its application than I had previously conceived and as I pondered this I found that it opened the door to new explanations for our intuitions regarding the truthfulness of moral claims, though I eventually began to doubt that I was really heading toward any kind of solution. Even so, I’m not yet willing to admit defeat, so I’m calling in reinforcements (yeah, that’s you). I have several “open questions” and I would like to solicit your input to help me clarify some things. My first request is for answers to the question “What is a moral claim?”, but before you answer, let me give you something to think about.
First, note that the definition of moral realism assumes that we know what a “moral claim” is and, the more I think about it, the more I question whether we can define “moral claim” without presupposing moral realism. To help illustrate this, I’d like to run through a couple examples. Consider the following two sentences:
1. It is wrong to skin a cat.
2. It is wrong to turn a screw left to tighten it.
We generally agree that #1 is a moral claim and that #2 is not. Now consider the following:
3. It is wrong to turn a screw left to tighten it on a Wednesday.
Now the turning of a screw has become a moral claim. What changed? What makes #1 and #3 moral claims, but not #2? As best I can tell, the difference is in the referent of ‘wrong’. Claim #2 refers to a goal – the outcome of tightening the screw – so ‘wrong’ in this context means that the goal will not be met. What is the referent in #1 and #3? The referent seems to be morality itself – some standard of good and bad that isn’t definable in any other terms without presupposing the existence of morality itself. That does not, however, mean that morality is necessarily independent of everything else. It simply means that our faculties are not equipped to define it by reference to something else. As far as I can tell, this leaves us with some form of moral realism – and it’s worth noting that under the definition given above, relativism is a form of realism; it is just a limitation on the scope of the moral truth.
This, in turn, seems to throw various forms of anti-realism out the window. There may be gray areas where it’s hard to tell whether something is or is not a moral claim, but at the extremes even an anti-realist can distinguish a moral claim from other types of claims. There must be something they’re drawing upon to do that. That “something” may reduce to emotions, or some neurochemical state, but it’s still something. It’s real.
What do you think? Am I right about this? Does our ability to distinguish moral claims from other claims require moral realism?
PS: If you’re interested, the theory that I’ve put on ice is somewhere in the vicinity of contractualism, with a contract based on negotiation between the core value judgements of all parties rather than on rational agreement – where by “core value judgements” I mean something like what we see in Jonathan Haidt’s moral foundations.