Truth has always been especially important to me, but I’ve come to realize that I had only valued it superficially. Obviously this is demonstrated by the fact that I never critically examined the underpinnings of my faith, but it is perhaps more apparent in that I never stopped to consider how one might best discover truth in the first place. From what I can tell, I simply defaulted to common sense. If something seemed to consistently meet expectations, then it was probably true, or at least close enough. If people were sharply divided with rational but contradictory views, then the truth was probably found somewhere in the middle. Now that I have begun to unearth my philosophical foundations it has been interesting to see that my approach to truth hasn’t changed much. The difference is that for the first time in my life I can explain what I’m doing. I have carved a path to truth and no topic is off limits.
This post is a continuation from Part 1, where I explored my epistemic roots; that is, how I acquire the evidence that informs my beliefs. Here, I hope to build upon that understanding to see how I might go about discerning truth.
In those difficult conversations where I revealed to others that I felt that I could no longer defend the Christian faith, I discovered what appeared to be a common misconception about the typical way in which we form our beliefs. The statements of others would often imply that I had made a decision; that somehow I had willfully chosen to discount my beliefs. I tried to explain that this transition wasn’t a choice, that it was simply the unintended outcome of seeking truth, but I’m not sure how often I was successful. I think that unless you have experienced it for yourself it is difficult to understand how somebody’s entire worldview can make a dramatic shift without intention.
So what is belief? Philosophers call it a “propositional attitude”. Perhaps this is more clearly understood as “a feeling about the truthfulness of a proposition”. Either way, you can see that there is nothing here about making a choice. Instead, beliefs are described as feelings and attitudes. It is usually the case that we do not think that we can choose how to feel, only how we respond to those feelings. So it is with belief. I can choose to keep my foot on the brake when the light turns green and insist to others that the light is still red, but that does not change the fact that I perceived a green light and understood this to be correct. My belief is that the light is green but my actions defy my belief.
Most would agree that our beliefs are primarily by-products of the evidence we encounter. We acquire various forms of data and then with each new piece of information, poof, out pops a belief. Even when presented with a proposition that is entirely new and unique to us, our past experience of similar data leads us into a feeling about the truthfulness of the new proposition. We form these beliefs without even trying. Regardless of how this belief making machine works (a topic for another time), there is still an open question: when is a belief justified by the evidence? That is, when is a belief worth believing?
Justification is the process, or the output, of examining the reasons for a belief. The definition of belief which I gave above included a “feeling of truthfulness”. As I noted when I described intuition, this feeling of truthfulness can exist prior to, and even without, awareness of a robust justification. It is still possible, however, to stop, reflect and attempt to explain the foundations of the belief in question. Going through the process itself might even change your belief. That is the process I examine here.
Working from the foundational reality developed in part 1, which asserts that I am one of many thinkers in an external world, I am inclined to think that there may be no better way to establish the reliability of evidence than through the corroboration of other thinkers. It is the only solution with the potential to overcome subjectivity and expand our data set beyond the narrow slice of the world that we experience. The mechanism of corroboration is clear and simple: compare your description with other descriptions. If they match, then the reliability of the evidence is bolstered. If they don’t match, then the reliability of the evidence is diminished.
Beyond this simple comparison, the value of external corroboration is heavily influenced by the independence of the data. There are many ways by which the division between two or more subjective experiences can be blurred by a set of common preconceptions, motives, conditions, suggestions, and the like. Corroboration carries the most weight when it is clear that the observers are not biased into obtaining the same or similar data. For the same reason, corroborative justification is also strengthened by the addition of more corroborators. The strength of corroboration is in its power to overcome subjectivity.
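The intuition that each additional independent corroborator strengthens a belief can be sketched with a little probability. This is not a formula the post proposes; it is a minimal Bayesian illustration under two labeled assumptions: each observer reports correctly with the same probability (`reliability`), and the observers are fully independent of one another, which is exactly the condition emphasized above. All numbers are hypothetical.

```python
def posterior(prior, reliability, n_corroborators):
    """Probability the claim is true after n independent matching reports.

    Assumes each observer reports correctly with probability `reliability`,
    independently of the others. If the claim is true, all n affirm it with
    probability reliability**n; if false, with (1 - reliability)**n.
    """
    p_reports_if_true = reliability ** n_corroborators
    p_reports_if_false = (1 - reliability) ** n_corroborators
    return (prior * p_reports_if_true) / (
        prior * p_reports_if_true + (1 - prior) * p_reports_if_false
    )

# With modestly reliable observers (70%) and a 50/50 prior, agreement
# among more independent corroborators steadily bolsters the belief:
for n in (1, 2, 5):
    print(n, round(posterior(0.5, 0.7, n), 3))
```

Note that the independence assumption does real work here: if the observers share a common bias, their reports are not separate pieces of evidence and the multiplication above overstates the support, which is the point made in the paragraph above.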
Corroboration offers a solution to the problem of not knowing whether our own experience is reliable, but what are we to do when corroboration is unavailable or insufficient? This is a problem we solve on a daily basis. More often than not, our interactions with the outside world are not corroborated by somebody else. Instead, we regularly assume that the past can be viewed as generally representative of the present. This is how we operate by default. David Hume could find no rational justification for this assumption but I contend that this is not sufficient to discard our dependence on induction. Not only in the now, but also in the past, we have found that prior experience is an effective guide to current and future experience. So induction, by induction, seems reliable. Sound circular? It is, but I have no reason to believe that its circularity renders it useless. It has a proven track record and so I shall pragmatically accept it as a generally useful method of justifying my beliefs.
Even so, my acceptance of induction needs further qualification. A single observation, or a small number of observations, does far less to justify belief than do multiple observations. The weight of induction toward supporting a belief is correlated with the depth and breadth of inductive experiences. This is simply Stats 101; we should try to minimize our sampling error.
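The "Stats 101" point can be made concrete with the standard error of the mean, the textbook measure of how sampling error shrinks as observations accumulate. This is an illustrative sketch, not part of the original argument; the spread value of 5.0 is an arbitrary hypothetical.

```python
import math

def standard_error(sample_stdev, n):
    """Standard error of the mean: uncertainty about the true value
    shrinks in proportion to 1/sqrt(n) as observations accumulate."""
    return sample_stdev / math.sqrt(n)

# The same spread of individual observations pins down the truth far
# more tightly once many observations have been gathered:
for n in (1, 10, 100, 1000):
    print(n, round(standard_error(5.0, n), 3))
```

The 1/sqrt(n) shape also explains why the gain from each extra observation diminishes: going from 1 observation to 100 helps enormously, while going from 900 to 1000 barely moves the needle.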
It is likely that when we acquire new information we will observe similarities with other information and then use this relationship to build or reinforce beliefs. For example, you may have inferred that the subject of the picture on the right is David Hume; a belief that you almost certainly would not have formed were it not for the resemblance to the picture above. This is perfectly valid, though certainly not always reliable. It is not difficult to imagine cases where the inference from analogy would lead us astray.
As with other forms of justification, the strength of the support that an analogy lends to a belief is dependent on more than just the mere presence of similarities. Key factors also include:
- Frequency: Inferences made between many data sets that share common traits are usually more reliable than inferences made between fewer data sets.
- Congruence: Inferences made between data sets that share many common traits are usually more reliable than inferences made between data sets that share fewer common traits.
- Proximity: Inferences made between experiences that occur close in time and space are usually more reliable than inferences made between experiences that are distant in time and space.
Can intuition itself contribute to the justification for our belief? In my discussion of intuition in part 1, I argued that intuition is predominantly a by-product of our experience. It has been well documented that the reliability of specialized intuition is improved with the accumulation of specialized experience. However, most of us also observe that our expectations are more likely to be met when those expectations arise from a belief that is supported by the evidence at hand. This is not only an introspective conclusion but it has been demonstrated experimentally on numerous occasions. The fallibility of our intuitive behavior relative to behavior which results from slow, methodical reasoning is well established. That said, we also recognize that in the course of reasoning we are sometimes unable to recall the information which has shaped our intuition. It appears that intuition may be able to clue us into something that lies just beyond the grasp of our memory.
So where does this leave us? In my view, yes, intuition can help inform our justification for a belief but we must be extremely cautious in doing so. We must recognize the supremacy of reasoning through readily available evidence and only allow intuition to inform the justification when (a) alternative evidence is lacking, and (b) we recognize that we have a wealth of experience which has shaped our intuition (specialization). Even then, intuition should not supersede or overrule evidence which offers clear and immediate feedback, nor should it be allowed greater influence. Furthermore, when we recognize the shortcomings of our evidence we should also seek to fill the gaps before defaulting to intuition. In the end, intuition is a tool of last resort for the purposes of justification. We successfully rely on intuition throughout the course of our daily lives but justification is not the domain of snap judgements, and that is where intuition is best employed.
Defeaters and Falsification
So far I have only discussed how evidence can be used to support a belief, but that is only half the story. Evidence can also be used to defeat a belief; that is, evidence can be used to show that a particular belief is not reliable. Philosophers seem to particularly enjoy lobbing defeaters back and forth. Defeaters are the missiles in the arms race of ideas. The scientific world has a corresponding notion for defeaters. Karl Popper felt that the problem of induction was insurmountable and needed to be formally addressed. To that end, he introduced falsification, which has become a key tenet of modern scientific inquiry. The premise is actually quite simple: use the evidence to build your theory and then do your damnedest to tear it down. If it survives, then your theory is solid. If it fails, then you need to revise.
Both of these concepts – defeaters and falsification – are very powerful. It takes only one example to tear down or remodel erroneous claims of truth. A belief cannot be justified when a valid defeater stands in the way. We simply cannot overlook and push aside those evidences which clash with our beliefs. Unfortunately, I fear that this happens far too often. Certainly I was not exempt, nor am I still. We are deeply invested in our beliefs and making a change is difficult and painful. But if it is truth that we seek, we must be willing to accept defeat.
Information almost never comes to us in isolation. The data we acquire from books, websites, videos, and the like all carry far more than a single soundbite. Even in our everyday sensory experience we are bombarded with information from multiple senses covering multiple points in space and time. It turns out that all of this extra information can be useful in assessing the reliability of any one part of the data. When we read an article, or a chapter, or a book, we can form many beliefs, each based on evidence from a small part of the content. The justification of that belief is largely dependent on the reliability of the evidence. The reliability of the evidence can be informed by the reliability of the entirety of the content from which the evidence was extracted. In short, the rule is that data which is coupled to other reliable data is itself more likely to be reliable, and data which is coupled to other unreliable data is itself more likely to be unreliable.
On that note, I’m compelled to consider the role this plays with non-empirical evidence. It seems extremely common for non-empirical data to have contact with empirical data. It may be that it includes claims which can be investigated empirically, or that it directs us toward specific interpretations of information that can be investigated empirically. We simply cannot ignore these points of contact. They are more often than not our best windows into evaluating the reliability of data which cannot be examined by any other means. For an excellent critique of how this applies to Christianity, I encourage you to review “Christian Agnosticism & Touching Earth” at jerichobrisance.com.
Many of our beliefs are formed in large part on information that has been communicated to us by another thinker without us ever having experienced it for ourselves. The fact that this information is devoid of personal experience does not, however, restrict us from evaluating it with the same tools that I have already laid out. The information contained in testimony is subject to the same criteria to which we hold other evidence. Unfortunately, it is not that simple. There is an added wrinkle to contend with.
Whereas external corroboration can help us strip away the layers of subjectivity, testimony adds them on. Furthermore, we often have no way of personally investigating the claims. A cloud of doubt looms large over testimonial evidence. As I see it, there are some key defenses against these shortcomings. First and foremost, we can call on external corroboration to help us peel back the layers of subjectivity. This is perhaps the most important validation we can apply to testimony and is a primary reason why the scientific endeavor is considered so trustworthy. Scientific publications which have not been subjected to peer review are essentially disregarded. Another defense against the subjectivity of testimony is to evaluate both the historical and contextual integrity of the source. The evaluation of the contextual integrity was discussed in the previous section. The evaluation of the historical integrity is just a particular type of induction. It involves simply looking at the track record of the testimonial source and using that to inform the veracity of the new data. If prior testimony from this source has proven reliable then new data is also more likely to be reliable. If prior testimony from this source has proven unreliable then new data is also more likely to be unreliable. These tools, together with all the other methods of justification, can go a long way toward saving testimonial data from the subjective uncertainty that it inevitably bears.
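The track-record induction described above has a classical formalization that may be worth sketching: Laplace's rule of succession, which estimates the chance that a source's next report is reliable given how many of its past reports checked out. The post does not invoke this rule; it is offered here as one hedged way to turn "look at the track record" into a number, and the report counts are hypothetical.

```python
def rule_of_succession(reliable_reports, total_reports):
    """Laplace's rule of succession: estimated probability that the
    source's next report is reliable, given its track record.
    Adding 1 success and 1 failure to the tally keeps the estimate
    away from overconfident extremes when the record is short."""
    return (reliable_reports + 1) / (total_reports + 2)

# A source whose past testimony has mostly checked out earns more
# credence on new claims than one whose testimony has mostly failed:
print(rule_of_succession(9, 10))   # strong track record
print(rule_of_succession(2, 10))   # weak track record
```

A nice property of this estimate is that a source with no track record at all comes out at 0.5, neither trusted nor distrusted, which matches the cautious stance the paragraph above recommends toward uncorroborated testimony.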
The Absence of Evidence
It is often said that the “absence of evidence is not evidence of absence”. In other words, we cannot prove that something did not exist or occur simply by pointing to the lack of evidence. This can be true, particularly for testimony of historical events, but there are also many situations for which the absence of evidence does count. We can appeal to induction and analogy to define the evidence that we should expect and then see if it exists. For example, if somebody tells me that New York has been demolished by a giant lizard and then I go to New York and see that everything is just the same as it was before, then the absence of destruction serves as a solid defeater for the belief that New York was demolished by a giant lizard.
The problem is that we eventually encounter a point where the claim implies less evidence than we can reasonably acquire with sufficient certainty. When the expected evidence becomes impractical to discover, the absence of evidence loses all power. However, when that threshold is reached we notice that the claim itself has also most likely lost its power because it can no longer offer a justification. To assert that the absence of evidence is not evidence of absence is, more often than not, also an admission that a claim is lacking in evidence.
Though it sounds like a Fox News tagline, my discussion would be incomplete if I did not address the balance of the evidence used in a justification. When our beliefs first form, it is often the case that we have no control over the scope of the evidence that formed those beliefs. If we truly want our beliefs to be justified, however, we need to make an effort to ensure that the evidence is balanced. Even so, we need to distinguish between absolute balance and proportional balance. If our research has uncovered a body of evidence for which 90% favors a particular view, while 10% favors another view, it may be inappropriate to pursue a 50/50 balance by intentionally blinding ourselves to further evidence of the 90% while seeking out only further evidence for the 10%. In this attempt to be fair, we may in fact be injecting an artificial bias. The problem, however, is that we don’t know what we don’t know. We cannot foresee the actual balance of evidence that exists and so we are left with only one solution: gather as much evidence as is practicable from as many diverse sources as is practicable and proceed from there. The measure of practicality will of course vary for each person and situation. The key is only that we make an intentional effort to acknowledge and acquire data from multiple viewpoints. It is the best we can do.
Making the Case
The evidence does not stand on its own. It is rarely sufficient to simply present the evidence as the sole support of a belief. Instead, once we have gathered all the evidence and established its reliability, we need to assemble everything into a coherent explanation. Reasoning is at the core of this process (see part 1 for a more extensive review of logical reasoning and its different forms). We must take the time to reason; to identify and consider the relationships between the evidence, evaluate the relative strength of separate evidences and look for the causal connections between the data. We must be able to tell the story that traces through all of the evidence to arrive at a belief. This is the step that completes the process of justification.
I could say a lot more on this. I could outline the basic structures of argumentation, how to use premises to arrive at a conclusion and I could discuss all the different logical fallacies that we need to avoid; but I’m not going to do that (though I strongly encourage you to review them for yourself if you are unfamiliar). Those are all certainly relevant and important to the process of justification, but it seems to me that they boil down to one idea: an argument turns sour the moment it claims a level of certainty that is not actually supported by the entirety of the evidence. Arguments need to account for all of the available evidence and weigh the relative strength of the evidence.
The determination of an evidence’s strength is tricky. In laying out the various ways we can assess the reliability of evidence, I intentionally called attention to their contribution toward an increase or decrease in the reliability of a belief. I deliberately avoided language that would imply that justification would lead to absolute certainty. This tells you a little something about my view of truth.
In the previous sections I looked at the ways in which we can justify our beliefs. Now I must confront the relationship between justification and reality. What does it mean for a belief to be accurate? It is usually the case that beliefs are formed because they appear to match reality. Justification is itself our best attempt to correlate beliefs with reality. But how can we ever be certain that our beliefs truly do match reality? To be blunt, it seems that we cannot. When all is reduced, we are ultimately reliant on our own sensory experience (which is both limited and fallible) and on corroboration by other thinkers (where both their sensory experience and the transmission of their thoughts to us are both limited and fallible) to justify our beliefs. Absolute certainty, it would seem, is doomed.
Pragmatism, not post-modernism
What I am suggesting here is not the view that many would associate with post-modernism, a philosophical view in which we are inescapably mired in an uncertain world of subjective truth. Rather, it seems entirely possible to me that there is in fact absolute truth and that we can form beliefs which are thus true. It may not always be easy to justify those beliefs, and new information may alter our beliefs, but that does not mean that we cannot attain truth – it simply means we should openly acknowledge that our current set of beliefs might be wrong and that we should be willing to accept new evidence and new justifications, even if that means our beliefs might change. Truth is experienced as more of a journey than as a destination, even if the destination really does exist.
There is an important distinction between acting as if we cannot hold true beliefs and acting as if we cannot be certain that our beliefs are true. Of course, we are now meandering into the question of how we respond to our beliefs, which is a whole new topic, so I am going to end this part of the discussion with a brief endorsement of pragmatism. It seems that there is little value in dwelling on uncertainty. Rather, value arises from the consequences of the actions we take in response to our beliefs. When our beliefs are sufficiently justified and turn into actions toward fulfilling an expectation, and that expectation is consistently met, then our belief was successful. I don’t see why we need anything more than that.
As an aside to those who are compelled to raise the problem of quantum indeterminacy against this view, I respond by noting that an expectation need not be deterministic; it is perfectly possible for one to hold an expectation that something behaves in an undetermined (but probabilistic, in this case) way.
The scales of truth
We gather evidence, we form beliefs and we justify those beliefs with explanations of the evidence. If only it were that simple. The variety of evidence and differing interpretative explanations can be overwhelming. As a result, we encounter conflicting beliefs while at the same time acknowledging the value of their justifications. What are we to do? I have now come full circle. In my introductory post I outlined a prescription for my truth seeking journey which involved collecting data and then considering the conclusions about that data from both the Christian worldview and the naturalist worldview. By definition, I have immersed myself in conflict. There is little value in assessing the relative merit of each worldview by examining the areas where they agree. If I have dedicated this journey toward resolving those conflicts which are warring in my mind then I must have some way to deal with this problem.
As I have suggested, it is apparent to me that our beliefs not only have content but also weight. Some beliefs seem especially true, some especially fragile, and many fall somewhere on the spectrum between. It also seems as if these weights are generally proportional to the breadth, depth and quality of the justification. From this I conclude that the best way to resolve conflicting beliefs is to, as best one can, carefully consider each belief on the virtue of its justifications and then “measure” its weight (this measurement is, of course, subjective even though we are trying to be as objective as possible). It is important to note that this is not an unjustified measurement (aka intuition) but rather a measurement which takes full accounting of the entire justification. Once each belief has been weighed, we can then compare the weights against each other and use this to decide which is most probably true. Some comparisons will decisively favor one view over another. Some will send us cautiously in one direction. Some will leave us caught in the middle of a tug-of-war. So be it. As new data comes in we update the justifications, reassess our weights and reevaluate our measure of truth; ad infinitum.
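The weighing-and-comparing procedure above can be sketched in miniature. One common way to model it, not claimed by the post itself, is to let each piece of justification contribute a signed score (positive supports the belief, negative opposes it), sum the scores, and squash the total into a 0-to-1 weight. The function name `belief_weight` and every score below are hypothetical illustrations.

```python
import math

def belief_weight(evidence_scores):
    """Combine signed per-evidence scores (log-odds style) into a single
    weight between 0 and 1. Positive scores support the belief, negative
    scores oppose it; an empty justification yields a neutral 0.5."""
    total = sum(evidence_scores)
    return 1 / (1 + math.exp(-total))

# Two conflicting beliefs, each weighed on its own justifications,
# then compared head to head (all scores hypothetical):
view_a = belief_weight([1.2, 0.8, -0.3])   # mostly supportive evidence
view_b = belief_weight([0.4, -0.9, -0.6])  # mostly opposing evidence
print(view_a > view_b)  # the comparison favors view A
```

The shape of the result mirrors the spectrum described above: a large gap between the two weights decisively favors one view, a small gap sends us cautiously in one direction, and near-equal weights leave us in the tug-of-war, pending new data that would update the scores.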
This is my process, my epistemology and my guide to truth.
> The unexamined life is not worth living.

– Socrates

> The truth shall set you free.

– Jesus of Nazareth