The general purpose of these readings is to establish the following: what the modern secular humanist regards as the sufficiency of scientific knowledge – the belief that everything worth knowing can be known through empirical discovery, and that once God is dead the battle between faith and reason is transcended – is a lousy myth. Faith is not confined to religious belief; as it turns out, it is not possible to observe the natural world without thinking thoughts that are impossible to confirm through observation. Even secular humanists must have faith – which is to say, non-empirical or metaphysical beliefs. It is just that theirs is not in God.
To believe the hype – that science can do everything – is to become a throwback to the 1600s. Bacon was excited about empiricism, and in his zeal he thought observation alone could win the day. He was wrong – but then again, he has an excuse: he started the party, and moreover it was 1620! Philosophy now knows that empiricism is not self-sufficient. Rational, a priori, metaphysical, “pure” creative conjectural hypothetical thought will always be necessary as a starting point from which to make observations about the world. And neuroscience – as I will argue in separate posts – now knows this in spades, though it never engages the philosophical literature directly. I will argue that it is implicit in the very physical structure of a neuron that an a priori hypothesis about the nature of the external world (whatever “external world” might mean) must be in place for there to be an empirical observation. Or, put more simply, there must be a neuron for there to be an action potential.
The readings below, in sum, justify the large injection of metaphysics – of a priori thinking – into neuroscience that the neuroself model presented on this website entails. Or, put another way, the readings below explain why neuroscience is so riddled with conundrums and confusion about the nature of the mind and the purpose of the brain: not enough metaphysics.
Readers will notice that the readings are arranged (though need not necessarily be read) chronologically. Once upon a time I thought I could afford to read only the latest-and-greatest information on any given subject, as though I were buying the latest computer or cellular phone. It took me a long time to understand that all human knowledge is historical, and that to fail to understand the evolution of our ideas was to deprive myself of insight and lead myself into error. In particular, it was to miss the cyclical and fashionable nature of ideas, in which concepts once promoted are later discarded and still later revived. This has happened, for example, with teleology, now called intentionality, and we can expect the debate to continue indefinitely. But crucially, what we believe today is not the final word and therefore “true.” Unless future generations lose the human motivation to surpass their predecessors, what we believe today will someday seem quaint or naive to our descendants, just as Bacon’s and Aristotle’s various ideas seem quaint to us today. Taking the historical – and some would say cynical – perspective frees one to travel rather far from mainstream views that, on inspection, are merely a passing fancy, and instead to locate one’s ideas in the broad sweep of history.
The historical narrative is, briefly put, as follows. First, the key questions that all of these philosophers of science are addressing are, as always, the two biggies: ontology and epistemology. Or, in plain English, “what is real, and how can we know?” As you will see if you keep reading, I have in my own thinking merged these questions into a single hybrid question: “what is the most internally consistent way to think about the world?”, in answer to which I have come up with the neuroself framework. But that’s where I ended up. These two questions run like a theme through all of philosophy and science.
As to the ontological question, two answers define the two poles of a single axis along which most philosophers can be arrayed. At one end is the answer “the mind-independent physical world,” which most scientists – certainly most neuroscientists – assume to be the case. There is a real world “out there” and it would be doing what it is doing whether I looked at it or not. Of importance, this is the intuitive attitude of most adults – though, it should be mentioned, not of small children under the age of four, who to a person believe that the universe is intentional and in constant purposeful interaction with them. One of the great disappointments of growing up is the realization that the world does not, in fact, care about you.
At the other end is the answer “pure mind.” In this view, there is no physical world – the physical world is merely an idea in the mind. On inspection one will see what Kant saw – that this is the more conservative of the two views. For we do know with certainty that the physical world appears in our minds; we are in intimate and constant contact with phenomena. That there is a noumenal world “out there” is certainly plausible, and maybe even likely, but it cannot be proven. All we can know is that the descriptions by others of the world – which themselves appear in our minds – match up with our own descriptions of the world, and that this agreement and convergence is circumstantial evidence for a real world. But only that!
And as to the epistemological question, there are again two answers defining two poles along a second axis. At one end is “pure empiricism,” embraced most absurdly by the behaviorists, in which the only things that can be known are those things that are observed. At the other end is “pure rationalism,” embraced most famously by Descartes but also by eastern mystics – Tibetan Buddhists, in my own experience – who believe that all that can be known must be derived by a pure logic independent of observation, which they regard as deceitful. Of note, many secular humanists believe that only the “pure empiricism” pole is legitimate when, in fact, within mainstream academic philosophy it is really only pure empiricism that we can rule out, decisively. This is perhaps the most shocking miscommunication between the academy and the general public in all of modern academics.
With these two axes in mind, the readings below proceed as follows. First, we pick up the epistemological question: the reader is introduced to the thinking of Bacon, Hume, and Kant, who more or less established firmly in western culture the idea that empiricism and observation were “real conceptual processes.” Next, we meet thinkers who show that this position turns out to have holes so significant that even staunch apologists for science, such as Karl Popper, organized their careers around exposing and then fixing them. This then becomes a set-up for introducing the major point of Quine, which is that the distinction between rationalism and empiricism is not real and must be abandoned. It is the abandonment of this dichotomy that, in my view, not only makes room for metaphysics to re-enter the thinking of the modern secularist, but actually demands it. It turns out to be a mere ignorant and self-destructive prejudice for the secular to reject metaphysics.
Second, as to the ontological question, I provide no readings, as the epistemological readings should make clear that the ontological question is dependent entirely on how one resolves the first question, and therefore there really is after all only one question: what is the best way to think? As will become clear, it is my belief that the best way to think is that the physical world is “really” just information of a certain form, a form we call physical, but that information is “all there is.” The various names of the various components of the physical world are useful for describing different forms of information, and of these the most remarkable and explanatorily rich form of information in the universe is the electromagnetic field.
a1. Aristotle, The Four Causes, Metaphysics Book 5, Physics Book II (B), Chapter 3 (~350 B.C.E.): A foundational concept for all subsequent philosophy of science: Aristotle suggests that any given event has four possible causes. Modern science believes in only one of these today – mechanism, or efficient causality. Aristotle’s mentions are brief: in Metaphysics, Book 5, section 1013a, and – better yet – Physics, Book II (aka B), Chapter 3, section 194b24 ff. But remember, these works are essentially lecture notes collected by his students, so he wasn’t aiming for the sort of clarity other authors are. Traditionally understood, the four causes are (from the Physics):
- Material cause: “that from which, <as a constituent> present in it, a thing comes to be … e.g., the bronze and silver, and their genera, are causes of the statue and the bowl.”
- Formal cause: “the form, i.e., the pattern … the form is the account of the essence … and the parts of the account.”
- Efficient cause: “the source of the primary principle of change or stability,” e.g., the man who gives advice, the father (of the child). “The producer is a cause of the product, and the initiator of the change is a cause of what is changed.”
- Final cause: “something’s end (telos)—i.e., what it is for—is its cause, as health is <the cause> of walking.”
a2. Francis Bacon, New Organon (1620). In a word, induction. Bacon attempted, quite consciously, to kick-start the empirical revolution that culminated in the technologies that made this blog possible. In 1620 there was hardly even indoor plumbing, let alone electricity. He did this by pitting his intellectual tool – induction, or the observation of the natural world – against the Scholastics’ intellectual tool, the syllogism. He thereby started the war between empiricism and rationalism that, in Hume’s hands, was the distinction between facts and ideas, and in Kant’s was the contrast between synthetic and analytic thought, and which we today know as the difference between inductive and deductive reasoning. The central flaw in his thinking – the belief that inductive inference is possible – remains entrenched in the minds of most philosophically naive scientists and laypersons. Most people believe that observation can lead straight to theory. It is this Baconian misconception that Karl Popper dismantles in the works described below.
a3. David Hume, An Enquiry Concerning Human Understanding (1748). In two words, constant conjunction. The famous essay that blew Kant’s mind and made him really, really upset (to the point where he wrote the Prolegomena onto a piece of paper with his pen). Or, as Kant put it, the essay “woke me from my dogmatic slumbers.” Hume famously divides all the contents of consciousness into relations of ideas and matters of fact (sense impressions) (Sect. IV 20), a distinction that Kant picked up in dividing thinking into analytic (ideas) and synthetic (facts). He further sets out (IV 20-22) that deduction is the method of reasoning about ideas (again, an idea Kant picked up) while induction from cause to effect is the method for reasoning about facts, although he does not use these modern terms. However his most significant point is that cause and effect cannot be truly known, but rather are merely a case of observing what he calls “constant conjunction” (IV 23). As one can imagine, this thinking gives a huge boost to empiricism, which is the only known way to establish which events are constantly conjoined to others. Of note, comparisons of Hume to Kant over the question of causality make for productive reading.
a4. Immanuel Kant, Prolegomena to Any Future Metaphysics (1783). In a word, causality is in the relationship of mind to world, not of the world to mind or world itself. The essay is Kant’s reaction to Hume’s Enquiry, and reviewing secondary texts that compare Kant and Hume may be useful. Kant’s main “move” is to say Hume had things backwards. It is not that we observe causality in the world, but that we project it onto the world. In (4, 313; 66) he says “A complete solution of the Humean problem… rescues the a priori origin of the pure concepts of the understanding and the validity of the general laws of nature as laws of the understanding, in such a way that their use is limited only to experience, because their possibility has its ground merely in the relation of the understanding to experience, however, not in such a way that they are derived from experience, but that experience is derived from them, a completely reversed kind of connection which never occurred to Hume.” One can almost hear him whisper, under his breath, “and I believe that’s checkmate.”
a5. Immanuel Kant, Critique of Pure Reason, Second Edition (1787). In two words, pure reason. Pure reason is what Kant means by his two famous terms in this book, “metaphysics” – thoughts not derived from observation or synthesis – and “a priori” – meaning preceding sensation. Kant is attempting to more or less turn Hume inside out, by responding that rather than causality being the observation of constant conjunction and therefore requiring empiricism, causality is a category of the mind that is utterly beyond empiricism! However Kant also tries to generalize Hume, by saying that not only is causality a priori and therefore pure, but many other concepts are as well. These are all therefore metaphysical beliefs – they are beyond physics. His famous generalization of his question is this rather indecipherable sentence, which must be taken one term at a time to be understood: “how are synthetic a priori judgments possible?” Synthetic means “put together from observation.” A priori means “existing prior to observation.” That is why the question is apparently nonsensical. Kant’s answer, in short: they are possible because the mind itself supplies the structure of experience – the categories are not derived from experience, but experience is derived from them.
a6. E.A. Burtt, The Metaphysical Foundations of Modern Physical Science: A Historical and Critical Essay (1924). An examination of how modern empiricism rebelled against scholasticism and ushered in the modern age of science. A brilliant discussion of how dualistic thinking and a phobia about teleology came to dominate science, as well as a prediction that slowly but surely the world of ‘primary qualities’ will reinterpret the world of ‘secondary qualities,’ leaving the mind and the self and the res cogitans empty.
a7. Karl Popper, Conjectures and Refutations (1953). Reviews in fluid and accessible language the evolution of his own thinking. Suggests that any theory can be made to fit any situation unless it is held to the standard of making “risky predictions” – declaring that certain events should not occur and then looking for them – which is what makes it “falsifiable.” The practice of science ought to be marked by a persistent effort by its practitioners to refute their hypotheses. Freudians and Marxists are examples of thinkers who instead try to prove that their theory explains every new case. However – and for the neuroself perspective this is important – he acknowledges that theories themselves begin as conjectures: free creations of the mind that are not derived from observation.
a8. Karl Popper, The Problem of Induction (1953). In a single sentence, he shows that inductive inference does not exist, only deductive inference does, and therefore theory cannot be produced from empiricism, period. Most modern scientists remain unaware of this, as does the general public. It is assumed that science can know everything; those who know it can’t often don’t know why this is the case. Popper points out crisply that scientific “laws” are based on a limited number of instances, and that science is always but one instance away from proving its laws wrong. I would add that the black swan – once used in Europe as an example of something that does not exist, since it violated the law “all swans are white” – is a case in point: black swans turned out to exist in Australia (of course), and poof! there went the law. The question then is this: how do we get to laws from always-limited observations?
a9. W.V.O. Quine, Two Dogmas of Empiricism (1951). For those who believe that science (“empirical”) operates by different laws of thought than either math (“deductive”) or the humanities, this convincing and controversial essay is nothing short of mind-blowing. Quine attacks, in an elegant way, Kant’s belief that analytic and synthetic thinking can be segregated from one another. This idea strongly – I would say overwhelmingly – anticipates the insights of neuroscience, which clearly shows that neurally there is no possible distinction between, say, sensation (empiricism) and cognition (rationalism). He also states clearly, right up front, that the implication of his work is that the distinction between metaphysics and science is untenable. A can’t-miss piece.
a10. C.P. Snow, The Two Cultures (1959). A wildly readable essay in which Snow outlines the cultural divide between science and letters, in which neither group can understand – or even enjoys thinking much about – the concerns of the other. He briefly mentions that this comes at the cost of missed creative opportunities; for the most part he simply assumes that it is obviously a shame that most intellectuals know only one-half of human knowledge.
a11. Thomas Kuhn, The Structure of Scientific Revolutions (1962). The book that gave us the term “paradigm shift.” Kuhn argues that science progresses by engaging in “normal science,” in which the reigning paradigm is upheld mercilessly by the peer culture even as “anomalies” accumulate. Eventually so many have accumulated that the edifice becomes unstable. At some point someone proposes a new framework that explains both the anomalies and the facts that remain uncontested. At first this person will be an outcast, but with time, if his framework has more explanatory power, a critical mass of colleagues will join him, and a “paradigm shift” will have occurred.