And if two men ride of a horse, one must ride behind.
— Shakespeare, Much Ado About Nothing
Sir Isaac Newton stands in popular estimation as the foremost intellect of his age, or perhaps of any age. If a person is never truly dead while their name is spoken, then Sir Isaac stands with us still: partially overshadowed by Einstein at the dawn of the twentieth century, maybe, but never totally eclipsed.
But in the roiling intellectual cauldron of the Age of Enlightenment, even such a venerable polymath as Newton had some serious competition. As Newton himself modestly observed in a letter to a contemporary in 1676: “If I have seen a little further it is by standing on the shoulders of Giants.”
Except that one interpretation has it that the letter was not intended to be modest, but was rather a combative dig at the man to whom it was addressed: Robert Hooke, a man of but “middling” stature and, as a result of a childhood illness, also a hunchback. Not one of the “Giants” with broad philosophic shoulders to whom Newton felt indebted, then.
In popular estimation, therefore, it would appear that Hooke is fated always to sit behind Newton. At GCSE and A-level, students learn of Newton’s Laws of Motion, the eponymous unit of force, and his Law of Universal Gravitation.
And what do they learn of Hooke? They learn of his work on springs. They learn of “Hooke’s Law”: that is, the force exerted by a spring is directly proportional to its extension.
Ut tensio, sic vis.
[As extension, so is the force.]
— Robert Hooke, Lectures de Potentia Restitutiva (1678)
Newton has all the laws of motion on Earth and in Heaven in the palm of his hand, and Hooke has springs. Perhaps, then, Hooke deserves to be forever second on the horse of eternal fame?
But look closer. To what objects or classes of object can we apply Hooke’s Law? The answer is: all of them.
Hooke’s Law applies to everything solid: muscle, bone, sinew, concrete, wood, ice, crystal and stone. Stretch them or squash them, and they will deform in exact proportion to the size of the force applied to them.
That is, if one power stretch or bend it one space, two will bend it two, and three will bend it three, and so forward.
The major point is that Hooke’s Law is as universal as gravity: it is baked into the very fabric of the universe, a direct consequence of the interactions between atoms.
Now before I wax too lyrical, it must be pointed out that Hooke’s Law is a first-order linear approximation: it fails when the deforming force increases beyond a certain limit, and that limit is unique to each material. But within the limits of its domain, the linear elastic region below the limit of proportionality, it reigns supreme.
How do you calculate how much a steel beam will bow when a kitten walks across it? Hooke’s Law. How could we model the stresses on the bones of a galloping dinosaur? Hooke’s Law. How can we calculate how much Mount Everest bends when it is buffeted by wind? Hooke’s Law.
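All of those calculations rest, at bottom, on the same proportionality. A minimal sketch in Python (the spring constant and the extensions are arbitrary invented values):

```python
# Hooke's Law: the restoring force is proportional to the extension, F = k * x.
# The stiffness k and extensions below are invented example values.

def spring_force(k: float, x: float) -> float:
    """Force (in newtons) exerted by a spring of stiffness k stretched by x metres."""
    return k * x

k = 200.0                        # N/m, assumed stiffness
print(spring_force(k, 0.05))     # 10.0 N for a 5 cm extension
print(spring_force(k, 0.10))     # 20.0 N: doubling the extension doubles the force
```

Ut tensio, sic vis: double the stretch, double the force, exactly as the code confirms.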
Time to re-evaluate the seating order on Shakespeare’s horse, mayhap?
Stars, so far as we understand them today, are not “alive”.
Now and again we saw a binary and a third star approach one another so closely that one or other of the group reached out a filament of its substance toward its partner. Straining our supernatural vision, we saw these filaments break and condense into planets. And we were awed by the infinitesimal size and the rarity of these seeds of life among the lifeless host of the stars. But the stars themselves gave an irresistible impression of vitality. Strange that the movements of these merely physical things, these mere fire-balls, whirling and traveling according to the geometrical laws of their minutest particles, should seem so vital, so questing.
— Olaf Stapledon, Star Maker (1937)
And yet, it still makes sense to speak of a star being “born”, “living” and even “dying”.
We have moved on from Stapledon’s poetic description of the formation of planets from a filament of star-stuff gravitationally teased out by a near-miss between passing celestial orbs. This was known as the “Tidal Hypothesis” and was first put forward by Sir James Jeans in 1917. It implied that planets circling stars would be an incredibly rare occurrence.
Today, it would seem that the reverse is true: modern astronomy tells us that planets almost inevitably form as a nebula collapses to form a star. It appears that stars with planetary systems are the norm, rather than the exception.
Be that as it may, the purpose of this post is to share a way of teaching the “life cycle” of a star that I have found useful, and that many students seem to appreciate. It uses the old trick of using analogy to “couch abstract concepts in concrete terms” (Steven Pinker’s phrase).
I find it humbling to consider that there are currently no black dwarf stars anywhere in the observable universe, simply because the universe isn’t old enough. The universe is merely 13.7 billion years old. Not until the universe is some 70 000 times its current age (about 10¹⁵ years old) will enough time have elapsed for even our oldest white dwarfs to have cooled to become black dwarfs. If we take the entire current age of the universe to be one second past midnight on a single 24-hour day, then the first black dwarfs will come into existence at about 8 pm…
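The arithmetic behind that analogy is easy to check; a quick sketch in Python, using the round figures quoted above:

```python
# Map the universe's timeline onto a 24-hour clock on which the current age
# (13.7 billion years) corresponds to one second past midnight, and ask when
# the first black dwarfs (after roughly 10**15 years of cooling) appear.

CURRENT_AGE = 13.7e9        # years: this is "one second" on the clock
BLACK_DWARF_AGE = 1.0e15    # years: rough cooling time quoted above

ratio = BLACK_DWARF_AGE / CURRENT_AGE
print(round(ratio))              # ~73 000: the "70 000 times its current age"

clock_hours = ratio / 3600       # ratio is also the clock time in seconds
print(round(clock_hours, 1))     # ~20.3 hours, i.e. a little after 8 pm
```

So on this clock, the whole history of the universe so far is over before the first tick has finished.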
And finally, although to the best of our knowledge stars are in no meaningful sense “alive”, I cannot help but close with a few words from Star Maker, Stapledon’s riotous and romantic imaginative tour de force, which is yet threaded through with the disciplined sinews of his understanding of the science of his day:
Stars are best regarded as living organisms, but organisms which are physiologically and psychologically of a very peculiar kind. The outer and middle layers of a mature star apparently consist of “tissues” woven of currents of incandescent gases. These gaseous tissues live and maintain the stellar consciousness by intercepting part of the immense flood of energy that wells from the congested and furiously active interior of the star. The innermost of the vital layers must be a kind of digestive apparatus which transmutes the crude radiation into forms required for the maintenance of the star’s life. Outside this digestive area lies some sort of coordinating layer, which may be thought of as the star’s brain. The outermost layers, including the corona, respond to the excessively faint stimuli of the star’s cosmical environment, to light from neighbouring stars, to cosmic rays, to the impact of meteors, to tidal stresses caused by the gravitational influence of planets or of other stars. These influences could not, of course, produce any clear impression but for a strange tissue of gaseous sense organs, which discriminate between them in respect of quality and direction, and transmit information to the correlating “brain” layer.
You can watch a bird fly by and not even hear the stuff gurgling in its stomach. How can you be so dead?
— R. A. Lafferty, Through Other Eyes
In modern usage, a shibboleth is a custom, tradition or speech pattern that can be used to distinguish one group of people from another.
The literal meaning of the original Hebrew word shibbólet is an ear of corn. However, in about 1200 BCE, the word was used by the victorious Gileadites to identify the defeated Ephraimites as they attempted to cross the river Jordan. The Ephraimites could not pronounce the “sh” sound and thus said “sibboleth” instead of “shibboleth”.
As the King James Bible puts it:
And the Gileadites took the passages of Jordan before the Ephraimites: and it was so, that when those Ephraimites which were escaped said, Let me go over; that the men of Gilead said unto him, Art thou an Ephraimite? If he said, Nay; Then said they unto him, Say now Shibboleth: and he said Sibboleth: for he could not frame to pronounce it right.
The same story is featured in the irresistible (but slightly weird) Brick Testament through the more prosaic medium of Lego:
Sadly, the story did not end well for the Ephraimites:
Then they took him, and slew him at the passages of Jordan: and there fell at that time of the Ephraimites forty and two thousand.
This leads us to Corinne’s Shibboleth: a question which, according to Dray and Manogue (2002), can help us separate physicists from mathematicians, but with fewer deleterious effects for both parties than the original shibboleth. The question runs roughly as follows: suppose the temperature on a flat plate is given by T(x, y) = k(x² + y²), where k is a constant. What is T(r, θ)? (A) T(r, θ) = kr² (B) T(r, θ) = k(r² + θ²)

Mathematicians answer mainly B. Physicists answer mainly A.
This is because (according to Dray and Manogue) mathematicians “view functions as maps, taking a given input to a prescribed output. The symbols are just placeholders, with no significance.” However, physicists “view functions as physical quantities. T is the temperature here; it’s a function of location, not of any arbitrary labels used to describe the location.”
Redish and Kuo (2015) comment further on this:
[P]hysicists tend to answer that T(r,θ) = kr² because they interpret x² + y² physically as the square of the distance from the origin. If r and θ are the polar coordinates corresponding to the rectangular coordinates x and y, the physicists’ answer yields the same value for the temperature at the same physical point in both representations. In other words, physicists assign meaning to the variables x, y, r, and θ — the geometry of the physical situation relating the variables to one another.
Mathematicians, on the other hand, may regard x, y, r, and θ as dummy variables denoting two arbitrary independent variables. The variables (r, θ) or (x, y) do not have any meaning constraining their relationship.
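The two readings can be made concrete in a few lines of Python; the constant k and the function names here are my own invention:

```python
import math

k = 2.0  # arbitrary constant

def T(x, y):
    """Temperature at the point with rectangular coordinates (x, y)."""
    return k * (x**2 + y**2)

# The mathematician's reading: the symbols are placeholders, so T(r, theta)
# simply substitutes the new labels into the same formula.
def T_mathematician(r, theta):
    return k * (r**2 + theta**2)

# The physicist's reading: T is the temperature *here*, so express the same
# physical quantity in polar coordinates, where x**2 + y**2 = r**2.
def T_physicist(r, theta):
    return k * r**2

# The same physical point: (x, y) = (3, 4) is (r, theta) = (5, atan2(4, 3)).
r, theta = 5.0, math.atan2(4.0, 3.0)
print(T(3.0, 4.0))                 # 50.0
print(T_physicist(r, theta))       # 50.0 -- agrees at the same physical point
print(T_mathematician(r, theta))   # ~51.7 -- a different quantity altogether
```

Only the physicist’s version returns the same temperature for the same physical point in both coordinate systems, which is exactly the point Redish and Kuo make.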
I agree with the argument put forward by Redish and Kuo that the foundation for understanding Physics is embodied cognition; in other words, that meaning is grounded in our physical experience.
Equations are not always enough. To use R. A. Lafferty’s picturesque phraseology, ideally physicists should be able to hear “the stuff gurgling” in the stomach of the universe as it flies by…
Queen Mary made the doleful prediction that, after her death, you would find the words ‘Philip’ and ‘Calais’ engraved upon her heart. In a similar vein, the historians of futurity might observe that, in the early years of the 21st century, the dread letters “R.I.” were burned indelibly on the hearts of many of the teachers of Britain.
In a characteristically iconoclastic post, blogger Requires Improvement ruminates on those very same words that he adopted as his nom de guerre: R.I. or “requires improvement”.
He argues convincingly that the Requirement to Improve was, in reality, nothing more than a Requirement to Conform: the best way to teach had been jolly well sorted out by your elders* and betters and arranged in a comprehensive and canonical checklist. And woe betide you if any single item on this lexicon of pedagogical virtue was left unchecked during a lesson observation!
[*Or “youngers”, in many cases.]
But what were we being asked to conform to? Requires Improvement writes:
It was (and to an extent, still is) a strange mixture of pedagogies which probably didn’t really please anyone.
It wasn’t (and isn’t) prog; if a lesson has a clear (and teacher-defined) success criterion, it can’t really be progressive. Comparing my experience as a pupil in the 1980’s with that of the pupils I teach now, they are much better trained in what to write to pass exams, and their whole school experience is much more closely managed than mine was.
Equally, it wasn’t (and isn’t) trad; if the lesson model is about pupil talk, or putting generic skills above learning a canon of content, it can’t really be traditional teaching.
I think that Requires Improvement has hit the nail squarely on the head here. What we were being asked (and in many schools, are still being asked) to do is teach a weird hybrid Frankenstein’s monster of a pedagogy that combines seemingly random elements of both PRogressive and trADitional pedagogies: PRAD, if you will.
C. P. Scott quipped that no good could come of the word “television” because it was half Latin and half Greek; in the same spirit, I feel that no good has come of the PRAD experiment.
While many proponents of PRAD counted themselves kings of infinite pedagogic space, congratulating themselves on combining the best of progressive and traditionalist ideologies, the resulting unhappy chimera in actuality reflected the poverty of mainstream educational thought.
But though our thought seems to possess this unbounded liberty, we shall find, upon a nearer examination, that it is really confined within very narrow limits, and that all this creative power of the mind amounts to no more than the faculty of compounding, transposing, augmenting, or diminishing the materials afforded us by the senses and experience. When we think of a golden mountain, we only join two consistent ideas, gold, and mountain, with which we were formerly acquainted.
— David Hume, An Enquiry Concerning Human Understanding (1748)
Rather than a magical wingèd lion that breathes fire, PRAD is a stubby-winged mishmash that can’t fly, can’t lay golden eggs, and that spends its miserable days hacking up furballs. It is time to put it out of its misery.
The Duke of Wellington was once asked how he defeated Napoleon. He replied: “Napoleon’s plans were made of wire. Mine were made of little bits of string.”
In other words, Napoleon crafted his plans so that they had a steely, sinewy strength that carried them to completion. Wellington conceded that his plans were more ramshackle, hand-to-mouth affairs. The difference was that if one of Napoleon’s schemes broke or miscarried, it proved impossible to repair. When Wellington’s plans went awry, he would merely knot two loose bits of string together and carry on regardless.
I believe Andrea diSessa (1988) would argue that much of our knowledge, certainly emergent knowledge, is in the form of “little bits of string” rather than being organised efficiently into grand, coherent schemas.
For example, every human being has a set of conceptions about how the material world works that can be called intuitive physics. If a ball is thrown up in the air, most people can make an accurate prediction about what happens next. But what is the best description of the way in which intuitive physics is organised?
diSessa identifies two possibilities:
The first is an example of what I call “theory theories” and holds that it is productive to think of spontaneously acquired knowledge about the physical world as a theory of roughly the same quality, though differing in content from Newtonian or other theories of the mechanical world [ . . .]
My own view is that . . . intuitive physics is a fragmented collection of ideas, loosely connected and reinforcing, having none of the commitment or systematicity that one attributes to theories.
diSessa calls these fragmented ideas phenomenological primitives, or p-prims for short.
David Hammer (1996) expands on diSessa’s ideas by considering how students explain the Earth’s seasons.
Many students wrongly assume that the Earth is closer to the Sun during summer. Hammer argues that they are relying, not on a misconception about how the elliptical nature of the Earth’s orbit affects the seasons, but rather on a p-prim that closer = stronger.
The p-prims perspective does not attribute a knowledge structure concerning closeness of the earth and sun; it attributes a knowledge structure concerning proximity and intensity. Moreover, the p-prim closer means stronger is not incorrect.
diSessa and Hammer both argue that a misconceptions perspective assumes the existence of a stable cognitive structure where, in fact, there is none. Students may not have thought about the issue previously, and are in the process of framing thoughts and concepts in response to a question or problem. In short, p-prims may well be a better description of evanescent, emergent knowledge.
Hammer points out that the difference between the two perspectives has practical relevance to instruction. Closer means stronger is a p-prim that is correct in a wide range of contexts and is not one we should wish to eliminate.
The art of teaching therefore becomes one of refining rather than replacing students’ ideas. We need to work with students’ existing ideas and knowledge — piecemeal, inarticulate and applied-in-the-wrong-context as they may be.
Let’s get busy with those little bits of conceptual string. After all, what else have we got to work with?
diSessa, A. (1988). “Knowledge in Pieces”. In Forman, G. and Pufall, P. (eds), Constructivism in the Computer Age. New Jersey: Lawrence Erlbaum.
Hammer, D. (1996). “Misconceptions or p-prims?” Journal of the Learning Sciences 5(2): 97.
If a thing is worth doing, it is worth doing badly.
— G. K. Chesterton, What’s Wrong With The World (1910)
Why are teachers beavering away in their individual silos, each one of us spending hours reinventing each pedagogic wheel, crafting schemes of work and resources for the new GCSEs?
Wouldn’t life be so much easier and better if we simply shared…?
To which I say: NO!
To be honest, my favourite part of the job is designing, crafting and re-designing resources and teaching approaches. They’re not perfect, of course. I’m reminded of a line from the opening credits of South Park: “All celebrity voices are impersonated . . . poorly.” As Chesterton remarked, if a thing is worth doing, it is worth doing badly.
But the point is, my approaches and resources are a lot less imperfect than they used to be. I flatter myself that, over the years, some of them have become . . . quite good. I believe Michael Stipe once said that in the entire history of the world there were only ever five rock and roll songs; and that REM could play two of them quite well. There’s a parallel in that most teachers have a lesson or two (or three) that they — and they alone — can teach brilliantly.
I often think that, given the right context, most students prefer shabby, bespoke individualism rather than shiny mass-produced perfection.
As teachers, I think we sometimes overestimate the impact that we have on our students. There is no royal road to learning, and neither can all our craft and pedagogic arts construct a conveyor belt either.
As educators, the most we can hope to do is clear a few stones out of the way of our charges as they set out on the rocky path to learning.
In the end, the journey is theirs. Let us wish them well as we watch from our silos . . .
The difficulty of obtaining knowledge is universally confessed [ . . .] to reposite in the intellectual treasury the numberless facts, experiments, apophthegms and positions, which must stand single in the memory, and of which none has any perceptible connexion with the rest, is a task which, though undertaken with ardour and pursued with diligence, must at last be left unfinished by the frailty of our nature.
Well versed in the expanses
that stretch from earth to stars,
we get lost in the space
from earth up to our skull.
— Wisława Szymborska, “To My Friends”
What do we mean by learning? To tell the truth, even as a teacher of twenty-five years’ experience, I am not sure.
Professor Robert Coe has suggested that learning happens when people have to think hard. In a similar vein, Daniel Willingham contends that knowledge is the residue of thought. Siegfried Engelmann proposes that learning is the capacity to generalise to new examples from previous examples. I have also heard learning defined as a change in long-term memory.
One thing is certain, learning involves some sort of change in the learner’s brain. But what is acknowledged less often is that it doesn’t just happen in human brains.
Contrary to standard social science assumptions, learning is not some pinnacle of evolution attained only recently by humans. All but the simplest animals learn . . . [And some animals execute] complicated sequences of arithmetic, logic, and data storage and retrieval.
— Steven Pinker, How The Mind Works (1997), p.184
An example recounted by Pinker is that of some species of migratory birds that fly thousands of miles at night and use the constellations to find North. Humans do this too when we find the Pole Star.
But with birds it’s surely just instinct, right?
Wrong. This knowledge cannot be genetically “hardwired” into birds as it would soon become obsolete. Currently, a star known as Polaris just happens to be (nearly) directly above the Earth’s North Pole, so that as the Earth rotates on its axis, this star appears to stand still in the sky while the other stars travel on slow circular paths. But it was not always thus.
The Earth’s axis wobbles slowly over a period of twenty-six thousand years; this effect is called the precession of the equinoxes. The North Star will change over time, and often there won’t be a star bright enough to see with the naked eye near the North Celestial Pole to serve as a “North Star”, just as currently there is no “South Star”. But there will be one in the future, at least temporarily, as the South Celestial Pole describes its slow precessional dance.
Over evolutionary time, a genetically hardwired instinct that pointed birds towards any current North Star or South Star would lead them astray within a few thousand years.
So what do the birds do?
[T]he birds have responded by evolving a special algorithm for learning where the celestial pole is in the night sky. It all happens while they are still in the nest and cannot fly. The nestlings gaze up at the night sky for hours, watching the slow rotation of the constellations. They find the point around which the stars appear to move, and record its position with respect to several nearby constellations. [p.186]
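The nestlings’ strategy (find the fixed point about which the sky rotates) can be sketched numerically. In the toy model below, with an invented pole and invented star positions, the pole is recovered as the one point equidistant from each star’s two sightings, i.e. the intersection of the perpendicular bisectors of the two star tracks:

```python
import math

def rotate(p, c, angle):
    """Rotate point p about centre c by the given angle (radians)."""
    dx, dy = p[0] - c[0], p[1] - c[1]
    return (c[0] + dx * math.cos(angle) - dy * math.sin(angle),
            c[1] + dx * math.sin(angle) + dy * math.cos(angle))

def find_pole(p1, q1, p2, q2):
    """Given two stars each sighted twice (p -> q), return the centre of rotation.

    The centre is equidistant from p and q for each star, so it lies on the
    perpendicular bisector of each (p, q) pair; intersect the two bisectors
    by solving the resulting 2x2 linear system with Cramer's rule.
    """
    a1, b1 = q1[0] - p1[0], q1[1] - p1[1]
    r1 = (a1 * (p1[0] + q1[0]) + b1 * (p1[1] + q1[1])) / 2
    a2, b2 = q2[0] - p2[0], q2[1] - p2[1]
    r2 = (a2 * (p2[0] + q2[0]) + b2 * (p2[1] + q2[1])) / 2
    det = a1 * b2 - a2 * b1
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

pole = (0.5, 0.2)                   # invented celestial pole
p1, p2 = (2.0, 1.0), (-1.0, 3.0)    # two stars, first sighting
q1 = rotate(p1, pole, 0.3)          # the same stars a little later,
q2 = rotate(p2, pole, 0.3)          # after the sky has turned by 0.3 rad
print(find_pole(p1, q1, p2, q2))    # recovers the pole, (0.5, 0.2)
```

Two stars and two sightings are the geometric minimum; the real birds presumably average over many stars and many hours of watching.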
And so there we have it: the ability to learn confers an evolutionary advantage, amongst many others.
In his wonderful book, The Mismeasure of Man, Stephen Jay Gould writes of the fallacy
of ranking, or our propensity for ordering complex variation as a gradual ascending scale. Metaphors of progress and gradualism have been among the most pervasive in Western thought . . . ranking requires a criterion for assigning all individuals to their proper status in the single series. And what better criterion than an objective number? . . . one number for each individual . . . to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups — races, classes, or sexes — are innately inferior and deserve their status. In short, this book is about the Mismeasure of Man.
Humankind seems to have an inveterate propensity for sorting the sheep from the goats. There seems to be nothing we enjoy more than placing people, races, genders, things and classes in their allocated place on some putative “Great Chain of Being.”
The Great Chain of Being is a hierarchical worldview developed in mediaeval and Renaissance times but originating from Plato and the neoplatonists. In this view, everyone and everything has its place. An eagle is superior to the “worm eating” robin; the lion is superior to the domestic dog or cat; but those furry familiars have warrant to lord it over the wolf and rabbit because of their greater utility to Man.
In other words, according to this view, Man is the paragon of animals, but is himself subject to the authority of angels and Heaven. All shall be well if each being in the Great Chain knows its place and does its allotted duty.
I believe that the Great Chain of Being is an enduring but largely unconscious idea: we notice its presence like a fish notices the presence of water — that is to say, not at all. Our continuing propensity for ranking is a comfortable habit of thought that, regrettably, all of us slip into as easily as a favourite pair of slippers.
The other fallacy identified by Gould in The Mismeasure of Man is that of
reification, or our tendency to convert abstract concepts into entities (from the Latin res, or thing). We recognize the importance of mentality in our lives and wish to characterize it, in part so that we can make the divisions and distinctions among people that our cultural and political systems dictate.
And so it continues. For example, Regional Schools Commissioner Dominic Herrington recently wrote to a school to ask for evidence that at least 80 per cent of teaching at the school “is rated to be good or better”, including in English and maths (Schoolsweek.co.uk 6/11/15) — to my mind, demonstrating the fallacies of both ranking and reification simultaneously.
For goodness sake, not even Ofsted does that anymore!
However, the practice is, I suspect, still common in a large number of schools as part of their appraisal systems i.e. if you don’t get a “1” or a “2” in any one of your lesson observations then you “fail”.
The depressing truth is that even when Ofsted change their collective mind about an issue in response to evidence and reasonable argument (Yay! Go edu-bloggers!), their previous ideas and systems continue onward with almost undiminished energy, seemingly with a life and mind (or non-mind) of their own: Zombie-Ofsted, if you will.
To be fair to Ofsted, they have attempted to lay these walkers to rest by publishing clear and unequivocal guidance about their expectations about such nonsense as “minimal teacher talk” or “every lesson must include group work” and so on, but even such a well meaning stake-through-the-heart has made seemingly little headway against the strong winds of the Great Chain of Being.
Zombie-Ofsted marches, or lurches, ever onward.
Like so much else in the crazy world of education these days, it makes the mind boggle. Or curdle. Or both.
I’m going to begin this post by pondering a deep philosophical conundrum (hopefully, you will find some method in my rambling madness as you read on): I want to discuss the meaning of meaning.
Ludwig Wittgenstein begins the Philosophical Investigations (1953), perhaps one of the greatest works of 20th Century philosophy, by quoting Saint Augustine:
When they (my elders) named some object, and accordingly moved towards something, I saw this and I grasped that the thing was called by the sound they uttered when they meant to point it out. Their intention was shewn by their bodily movements . . . I gradually learnt to understand what objects they signified; and after I had trained my mouth to form these signs, I used them to express my own desires.
— Confessions (397 CE), I.8
Wittgenstein uses it to illustrate a simple model of language where words are defined ostensively i.e. by pointing. The method is, arguably, highly effective when we wish to define nouns or proper names. However, Wittgenstein contends, there are problems even here.
If I hold up (say) a pencil and point to it and say pencil out loud, what inference would an observer draw from my action and utterance?
They might well infer that the object I was holding up was called a pencil. But is this the only inference that a reasonable observer could legitimately draw?
The answer is a most definite no! The word pencil could, as far as the observer could tell from this single instance, mean any one of the following: object made of wood; writing implement; stick sharpened at one end; piece of wood with a central core made of another material; piece of wood painted silver; object that uses graphite to make marks, thin cylindrical object, object with a circular or hexagonal cross-section . . . and many more.
The important point is that one example is not enough. It will take many repeated instances of pointing at a range of different pencil-objects (and perhaps not-pencil-objects too) before we can be reasonably secure that the observer has inferred the correct definition of pencil.
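That narrowing process can be caricatured in a few lines of Python: represent each pointed-at object as a set of candidate features (all the labels below are invented), and keep only the hypotheses consistent with every example:

```python
# Each ostensive example is a set of features the observer might take to be
# "the meaning" of the word. All feature labels here are invented toy data.
examples = [
    {"made of wood", "writing implement", "uses graphite", "hexagonal", "painted"},
    {"made of plastic", "writing implement", "uses graphite", "cylindrical"},
    {"made of wood", "writing implement", "uses graphite", "cylindrical", "stubby"},
]

# Hypotheses consistent with the evidence so far: features shared by every example.
candidates = set(examples[0])
for ex in examples[1:]:
    candidates &= ex

print(len(examples[0]), "candidate meanings after one example")  # 5
print(candidates)  # only {'writing implement', 'uses graphite'} survive all three
```

After a single example every feature is a live hypothesis; only repeated, varied examples whittle the candidates down.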
If defining even a simple noun is fraught with philosophical difficulties, what hope is there for communicating more complicated concepts?
Siegfried Engelmann suggests that philosopher John Stuart Mill provided a blueprint for instruction when he framed formal rules of inductive inference in A System of Logic (1843). Mill developed these rules to aid scientific investigation, but Engelmann argues strongly for their utility in the field of education and instruction. In particular, they show “how examples could be selected and arranged to form an example set that generates only one inference, the one the teacher intends to teach.” [Could John Stuart Mill Have Saved Our Schools? (2011) Kindle edition, location 216, emphasis added].
Engelmann identifies five principles from Mill that he believes are invaluable to the educator. These, he suggests, will tell the educator:
how to arrange examples so that they rule out inappropriate inferences, how to show the acceptable range of variation in examples, and how to induce understanding of patterns and the possible effects of one pattern on another. [loc 223, emphasis added]
Engelmann considers Mill’s Method of Agreement first. (We will look at the other four principles in later posts.)
Mill states his Method of Agreement as follows:
If two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree, is the cause (or effect) of the given phenomenon.
— A System of Logic. p.263
Engelmann suggests that with a slight change in language, this can serve as a guiding technical principle that will allow the teacher to compile a set of examples that will unambiguously communicate the required concept to the learner, while minimising the risk that the learner will — Engelmann’s bête noire! — draw an incorrect inference from the example set.
Stated in more causal terms, the teacher will identify some things with the same label or submit them to the same operation. If the examples in the teaching set share only one feature, that single feature can be the only cause of why the teacher treats instances in the same way. [Loc 233]
As an example of an incorrect application of this principle, Engelmann gives the following example set commonly presented when introducing fractions: 1/2, 1/3, and 1/4.
Engelmann argues that while they are all indeed fractions, they share more than one feature and hence violate the Method of Agreement. The incorrect inferences that a student could draw from this set would be: 1) all fractions represent numbers smaller than one; 2) numerators and denominators are always single digits; and 3) all fractions have a numerator of 1.
A better example set (argues Engelmann) would be: 5/3, 1/4, 2/50, 3/5, 10/2, 1/5, 48/2 and 7/2 — although he notes that there are thousands more possible sets that are consistent with the Method of Agreement.
Yet many educators believe that the set limited to 1/2, 1/3, and 1/4 is well conceived. Some states ranging from North Dakota to Virginia even mandate that these fractions should be taught first, even though the set is capable of inducing serious confusion. Possibly the most serious problem that students have in learning higher math is that they don’t understand that some fractions equal one or are more than one. This problem could have been avoided with early instruction that introduced a broad range of fractions. [Loc 261]
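The contrast between the two example sets can even be checked mechanically. In this sketch of mine, fractions are kept as (numerator, denominator) pairs exactly as written, and a set that satisfies the Method of Agreement should falsify every stray inference:

```python
def stray_inferences(fractions):
    """Which incorrect generalisations are consistent with this example set?

    Each fraction is a (numerator, denominator) pair as written, not reduced.
    """
    return {
        "all fractions are less than one": all(n < d for n, d in fractions),
        "all numerators are 1":            all(n == 1 for n, d in fractions),
        "all digits are single":           all(n < 10 and d < 10 for n, d in fractions),
    }

narrow_set = [(1, 2), (1, 3), (1, 4)]
broad_set = [(5, 3), (1, 4), (2, 50), (3, 5), (10, 2), (1, 5), (48, 2), (7, 2)]

print(stray_inferences(narrow_set))  # every stray inference survives: all True
print(stray_inferences(broad_set))   # every stray inference is ruled out: all False
```

With the narrow set, all three false generalisations remain consistent with what the student has seen; Engelmann’s broader set rules out each one.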
For my part, I find Engelmann’s ideas fascinating. He seems to be building a coherent philosophy of education from what I consider to be properly basic, foundational principles, rather than some of the “castles in the air” that I have encountered elsewhere.
I will continue my exploration of Engelmann’s ideas in subsequent posts. You can find Parts 1 and 2 of this series here and here.
[B]ecause all my moral and intellectual being is penetrated by an invincible conviction that whatever falls under the dominion of our senses must be in nature and, however exceptional, cannot differ in its essence from all the other effects of the visible and tangible world of which we are a self-conscious part.
— Joseph Conrad, Author’s Note to The Shadow-Line
Anthony Radice writes a provocative blog as The Traditional Teacher: whilst I often agree with much of what he says, sadly our foundational philosophies could not be further apart.
[P]revalent theories are having a disastrous impact on the world of education. Influenced by these theories, there are many nowadays who think that materialism can be justified by statements such as ‘Evidence suggests that ‘conscience’ and ‘consciousness’ and other mental processes are products of human brain activity’.
I wrote the quoted words in the comments of the Traditional Teacher’s previous blog post [21/6/15], and I stand by them still. I would describe myself as a methodological naturalist rather than as a materialist. The label “materialist” calls to mind the ancient doctrine, revived with enthusiasm in the seventeenth century, that there is only “atoms and the void”. This mechanistic philosophy is perhaps best described as ontological naturalism: the claim that atoms and the void are all that exist. On this view, if we knew the initial state of every particle, it would seem that we could predict the state of the universe at any future time. This does indeed suggest that the past, present and future are pre-determined.
However, it soon became clear that such a view could not be sustained. A two-body Newtonian system is deterministic in the strong sense that its past, present and future can be calculated exactly, provided enough is known about its state at one instant. The famous Three Body Problem, by contrast, admits no general exact solution, and tiny uncertainties in its initial state are rapidly amplified. So even mechanistic ontological naturalism does not automatically deliver determinism in the Laplacean sense of a calculable, predictable universe.
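The gap between determinism-in-principle and predictability-in-practice can be illustrated with a toy simulation. The sketch below is mine, not anything from the texts under discussion: the masses, starting positions, velocities, softening term and step sizes are all illustrative assumptions. The equations are perfectly deterministic, yet nudging a single starting coordinate by one part in a million produces a measurably different outcome.

```python
# A crude planar three-body system under Newtonian gravity (G = m = 1),
# integrated twice from almost identical starting points.

def accelerations(pos):
    """Pairwise gravitational accelerations, with a small softening
    term so close encounters stay finite."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy + 1e-6) ** 1.5
            acc[i][0] += dx / r3
            acc[i][1] += dy / r3
    return acc

def simulate(pos, vel, steps=2000, dt=0.001):
    """Forward-Euler integration; returns the final positions."""
    pos = [list(p) for p in pos]
    vel = [list(v) for v in vel]
    for _ in range(steps):
        acc = accelerations(pos)
        for i in range(3):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

start = [[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]]
vels = [[0.0, 0.4], [-0.35, -0.2], [0.35, -0.2]]

run_a = simulate(start, vels)

# Nudge one coordinate by one part in a million and integrate again.
nudged = [list(p) for p in start]
nudged[0][0] += 1e-6
run_b = simulate(nudged, vels)

divergence = max(abs(a - b) for p, q in zip(run_a, run_b)
                 for a, b in zip(p, q))
print("separation of the two runs:", divergence)
```

Every step of the calculation is fully determined, but since no measurement of a real system is infinitely precise, the determinism buys no long-range prediction.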
Since methodological naturalism does not involve a commitment to an ontology but rather to a methodology (perhaps best exemplified by the empirical sciences, but not limited to them), it does not entail a commitment to any form of determinism either.
I believe the foregoing shows that neither “flavour” of naturalism automatically leads to determinism. Mr Radice, however, is not impressed:
Indeed, we have reached the stage where many do not hold others responsible for their actions, at least in theory. Their materialistic determinism leads them to ‘explain’ actions in psychological or social or (insert favourite flavour of determinism) terms. But this doesn’t explain anything, because it leaves out the person. It removes humanity because it removes conscience and freedom. All humanity is excused because humanity, it turns out, does not exist.
Sadly, I do not follow his reasoning. If materialism does not entail determinism (as I think I have shown above), then it does not rule out conscience or freedom or humanity. In fact, methodological naturalism leads me to conclude that there is substantial evidential warrant for supposing that they do exist. And this in spite of the fact, as Mr Radice points out, that they “are not material objects subject to laboratory experimentation”. True, but irrelevant — so are many of the entities and concepts dealt with by modern science: virtual photons for example. I believe philosopher Robert T. Pennock puts it well:
Many people continue to think of the scientific world view as being exclusively materialist and deterministic, but if science discovers forces and fields and indeterministic causal processes, then these too are to be accepted as part of the naturalistic worldview . . . An important feature of science is that its conclusions are defeasible on the basis of new evidence, so whatever tentative substantive claims the methodological naturalist makes are always open to revision or abandonment on the basis of new, countervailing evidence.
— Robert T. Pennock, Tower of Babel, pp. 90-91
Mr Radice seems to believe that because an individual neuron cannot be conscious, a collection of neurons (a brain, for example) cannot be conscious simply by virtue of the action of those neurons:
But this sort of statement doesn’t explain what something is, only how it is manifested in the material realm. It mistakes symptoms for the cause. Understanding is always about finding the cause. What causes the brain activity? A human person with freedom and a conscience.
In his philosophy, neural activity is a product of consciousness rather than the other way round. But the inference he defends is a classic case of the Fallacy of Composition: since every part of B lacks property X, B itself must lack property X. For example: a single water molecule is not wet, so a collection of water molecules cannot be wet, therefore water is not wet. In reality, we only experience the property of wetness when water molecules interact on a large scale. Wetness is an emergent property.
Likewise, consciousness is also an emergent property. As Bo Bennett puts it:
[I]t is difficult to imagine a collection of molecules resulting in something like consciousness, because we are focusing on the properties of the parts (molecules) and not the whole system, which incorporates emergence, motion, the use of energy, temperature (vibration), order, and other relational properties.
— Bo Bennett, Logically Fallacious, p. 112
Essentially, Mr Radice argues that consciousness is a form of magic with no connection to the empirical universe. Such a viewpoint cannot explain why chemicals such as alcohol and other drugs alter human consciousness, or why brain injuries demonstrably cause permanent changes in a person’s character.
And one final point:
The Nazis may have been defeated, but their idea that human beings are no more than ‘blood and dirt’ is alive and well, and very fashionable indeed.
Nazi philosophy is not famous for its internal coherence, but the idea that empirical materialism was a major part of their worldview is not borne out by the evidence.
The party as such represents the point of view of a positive Christianity without binding itself to any one particular confession. It fights against the Jewish materialist spirit within and without . . . The leaders of the party undertake to promote the execution of the foregoing points at all costs, if necessary at the sacrifice of their own lives.
— The Nazi Party Programme, 1920, Article 24