Exploring J. S. Mill’s classification of misconceptions (part 1)

The philosopher John Stuart Mill (1806-1873) offers an intriguing system for classifying misconceptions (or ‘fallacies’ as he terms them) that could be useful for teachers in understanding many of the misconceptions and preconceptions that our students hold.

My own thoughts on this issue have been profoundly shaped by the ‘Resources Framework’ as presented by authors such as Andrea diSessa, David Hammer, Edward Redish and others. What follows is not a rejection of this approach but rather an exploration of whether Mill’s work offers some relevant insights. My thought is that it quite possibly might; after all, it has happened before . . .

The authors, however, did not use or refer to Mill’s system of logic in developing the programs or in formulating their theory of instruction. They didn’t discover parallels between their theory of instruction and Mill’s logic until after they had finished writing the bulk of ‘Theory of Instruction’. The discovery occurred when they were writing a chapter on theoretical issues. In their search for literature relevant to their philosophical orientation, they came across Mill’s work and were shocked to discover that they had independently identified all the major patterns that Mill had articulated. ‘Theory of Instruction’ (1982) even had parallel principles to the methods in ‘A System of Logic’ (1843).

Engelmann and Carnine 2013: Chapter 2

Mill’s system for classifying fallacies

In A System of Logic (1843), Mill argues that

Indifference to truth can not, in and by itself, produce erroneous belief; it operates by preventing the mind from collecting the proper evidences, or from applying to them the test of a legitimate and rigid induction; by which omission it is exposed unprotected to the influence of any species of apparent evidence which offers itself spontaneously, or which is elicited by that smaller quantity of trouble which the mind may be willing to take.

Mill 1843: Book V Chap 1

Mill is saying that we don’t believe false things because we want to, but because there are mechanisms preventing our minds from duly noting and weighing the myriad evidences from which we construct our beliefs about the world by the process of induction.

He suggests that there are five major classes of fallacies:

  • A priori fallacies;
  • Fallacies of observation;
  • Fallacies of generalisation;
  • Fallacies of ratiocination; and
  • Fallacies of confusion

Erroneous arguments do not admit of such a sharply cut division as valid arguments do. An argument fully stated, with all its steps distinctly set out, in language not susceptible of misunderstanding, must, if it be erroneous, be so in some one of these five modes unequivocally; or indeed of the first four, since the fifth, on such a supposition, would vanish. But it is not in the nature of bad reasoning to express itself thus unambiguously.

Mill 1843: Book V Chap 1

Mill is saying that invalid inferences, by their very nature, are ‘messier’ and harder to classify than correct inferences. However, they must all fit into the five categories outlined above. Actually, they are more likely to fit into the first four categories since clear and unambiguous use of language and terms would tend to eliminate fallacies of confusion as a matter of course.

What is an a priori fallacy?

In philosophy, a priori means knowledge derived from theoretical deduction rather than from empirical observation or experience.

Mill says that a priori fallacies (which he also calls fallacies of simple inspection) are

those in which no actual inference takes place at all; the proposition (it cannot in such cases be called a conclusion) being embraced, not as proved, but as requiring no proof; as a self-evident truth.

Mill 1843: Book V Chap 3

In other words, an a priori fallacy is an idea whose truth is accepted on its face value alone; no evidence or justification of its truth is needed. An example from physics education might be ideas such as ‘heavy objects fall’ or ‘wood floats’. Some students accept these as obvious and self-evident truths: there is no need to consider ideas such as weight and resultant force or density and upthrust because these are ‘brute facts’ about the world that admit of no further explanation. This is a case of mislabelling subjective facts as objective facts.

Falling is a location-specific behaviour: objects on Earth will indeed tend to accelerate downwards towards the centre of the Earth. This is a subjective fact, dependent on the location of the object, rather than an objective fact about the behaviour of all objects everywhere (although we could, of course, argue that falling is indeed an objective fact about objects which are subject to the influence of gravitational fields). Equally, floating is not a phenomenon restricted to the interaction between wood and water: many woods will sink in less dense liquids. ‘Wood floats’ is not an objective fact about the universe but rather a subjective fact about the interaction of wood with a certain liquid.
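To make the point concrete: whether a solid floats depends on the relative density of solid and liquid, not on the solid alone. Here is a minimal sketch in Python; the helper function is my own, and the densities are rough, illustrative handbook values (real samples vary widely):

```python
# Floating depends on relative density, not on the material alone.
# Densities in kg/m^3 -- rough illustrative values; real samples vary.
DENSITIES = {"oak": 750, "water": 1000, "petrol": 720}

def floats(solid: str, liquid: str) -> bool:
    """A solid floats if it is less dense than the liquid it is placed in."""
    return DENSITIES[solid] < DENSITIES[liquid]

print(floats("oak", "water"))   # True  -- the 'self-evident' everyday case
print(floats("oak", "petrol"))  # False -- the same wood sinks in a less dense liquid
```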

This may be why some students are incurious about certain phenomena: they regard them as trivial and obvious rather than as manifestations of the inner workings of the universe.

Mill lists many other examples of the a priori fallacy, but his examples are drawn from the history of science and philosophy, and so are of less direct relevance to the science classroom, with the possible exception of the two following examples:

Humans tend to default to the assumption that any phenomenon must necessarily have only a single cause; in other words, we assume that a multiplicity of causes is impossible. We are protected from this version of the a priori fallacy by the guard rail of the scientific method. For a complete understanding of a phenomenon, we look at the effect of one independent variable at a time whilst controlling other possible variables.

There remains one a priori fallacy or natural prejudice, the most deeply-rooted, perhaps, of all which we have enumerated; one which not only reigned supreme in the ancient world, but still possesses almost undisputed dominion over many of the most cultivated minds … This is, that the conditions of a phenomenon must, or at least probably will, resemble the phenomenon itself … the natural prejudice which led people to assimilate the action of bodies upon our senses, and through them upon our minds, to the transfer of a given form from one object to another by actual moulding.

Mill 1843: Book V Chap 3

I think that this tendency might be the one in play in the difficulties that many students have with understanding how images are formed: they think that an image is an evanescent ‘clone’ of the object being imaged, rather than an artefact of the light rays reflected or emitted from the object. It might also help explain why students find it difficult to explain the colour changes produced by viewing an object through a colour filter or illuminating it with coloured light: they assume that colour is an essential, unalterable property that adheres to the object and cannot be changed without changing the object.

We’ll continue this exploration of Mill’s classification of misconceptions in later posts.

References

Engelmann, S., & Carnine, D. (2013). Could John Stuart Mill Have Saved Our Schools? Attainment Company, Inc.

Mill, J. S. (1843). A System of Logic. Collected Works.

Ut tensio sic vis

And if two men ride of a horse, one must ride behind.

Shakespeare, Much Ado About Nothing

Sir Isaac Newton stands in popular estimation as the foremost intellect of his age; or perhaps, of any age. If a person is never truly dead while their name is spoken, then Sir Isaac stands with us still: partially overshadowed by Einstein at the dawn of the twentieth century, maybe, but never totally eclipsed.

But in the roiling intellectual cauldron of the Age of Enlightenment, even such a venerable polymath as Newton had some serious competition. As Newton himself modestly observed in a letter to a contemporary in 1676: “If I have seen further it is by standing on the shoulders of Giants.”

Except that one interpretation has it that the letter was not intended to be modest, but was rather a combative dig at the man to whom it was addressed: Robert Hooke, a man of but “middling” stature and, as a result of a childhood illness, also a hunchback. Not one of the “Giants” with broad philosophic shoulders to whom Newton felt indebted, then.

Robert Hooke, as painted by Rita Greer in 2007. The painting is based on contemporary descriptions of Robert Hooke. No undisputed contemporary paintings or likenesses of Hooke have survived, possibly because of malicious intent on the part of Newton.

In popular estimation, therefore, it would appear that Hooke is fated always to sit behind Newton. At GCSE and A-level, students learn of Newton’s Laws of Motion, the eponymous unit of force, and his Law of Universal Gravitation.

And what do they learn of Hooke? They learn of his work on springs. They learn of “Hooke’s Law”: that is, the force exerted by a spring is directly proportional to its extension.

Ut tensio, sic vis.

[As extension, so is the force.]

— Robert Hooke, Lectures de Potentia Restitutiva [1678]

Newton has all the laws of motion on Earth and in Heaven in the palm of his hand, and Hooke has springs. Perhaps, then, Hooke deserves to be forever second on the horse of eternal fame?

But look closer. To what objects or classes of object can we apply Hooke’s Law? The answer is: all of them.

Hooke’s Law applies to everything solid: muscle, bone, sinew, concrete, wood, ice, crystal and stone. Stretch them or squash them, and they will deform in exact proportion to the size of the force applied to them.

That is, if one power stretch or bend it one space, two will bend it two, and three will bend it three, and so forward.

The major point is that Hooke’s Law is as universal as gravity: it is baked into the very fabric of the universe, being a direct consequence of the interactions between atoms.

Graph of interatomic force against distance between two atoms. Hooke’s Law applies in the red circle.

Now before I wax too lyrical, it must be pointed out that Hooke’s Law is a first-order linear approximation: it fails when the deforming force increases beyond a certain limit, and that limit is unique to each material. But within the limits of its domain indicated by the red circle above, it reigns supreme.

How do you calculate how much a steel beam will bow when a kitten walks across it? Hooke’s Law. How could we model the stresses on the bones of a galloping dinosaur? Hooke’s Law. How can we calculate how much Mount Everest bends when it is buffeted by wind? Hooke’s Law.
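Each of those calculations rests on the same proportionality, F = kx. A minimal sketch in Python (the stiffness, extension and elastic limit below are invented purely for illustration):

```python
# Hooke's Law: F = k * x, valid only up to the material's elastic limit.
def spring_force(k: float, extension: float, elastic_limit: float) -> float:
    """Restoring force (N) for stiffness k (N/m) at a given extension (m)."""
    if abs(extension) > elastic_limit:
        raise ValueError("beyond the elastic limit: Hooke's Law no longer applies")
    return k * extension

# Example: a 250 N/m spring stretched by 0.04 m exerts 250 * 0.04 = 10 N.
print(spring_force(k=250.0, extension=0.04, elastic_limit=0.10))  # 10.0
```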

Time to re-evaluate the seating order on Shakespeare’s horse, mayhap?

The Life and Death of Stars

Stars, so far as we understand them today, are not “alive”.

Now and again we saw a binary and a third star approach one another so closely that one or other of the group reached out a filament of its substance toward its partner. Straining our supernatural vision, we saw these filaments break and condense into planets. And we were awed by the infinitesimal size and the rarity of these seeds of life among the lifeless host of the stars. But the stars themselves gave an irresistible impression of vitality. Strange that the movements of these merely physical things, these mere fire-balls, whirling and traveling according to the geometrical laws of their minutest particles, should seem so vital, so questing.

Olaf Stapledon, Star Maker (1937)


And yet, it still makes sense to speak of a star being “born”, “living” and even “dying”.

We have moved on from Stapledon’s poetic description of the formation of planets from a filament of star-stuff gravitationally teased out by a near-miss between passing celestial orbs. This was known as the “Tidal Hypothesis” and was first put forward by Sir James Jeans in 1917. It implied that planets circling stars would be an incredibly rare occurrence.

Today, it would seem that the reverse is true: modern astronomy tells us that planets almost inevitably form as a nebula collapses to form a star. It appears that stars with planetary systems are the norm, rather than the exception.

Be that as it may, the purpose of this post is to share a way of teaching the “life cycle” of a star that I have found useful, and that many students seem to appreciate. It uses the old trick of using analogy to “couch abstract concepts in concrete terms” (Steven Pinker’s phrase).


I find it humbling to consider that currently there are no black dwarf stars anywhere in the observable universe, simply because the universe isn’t old enough. The universe is merely 13.7 billion years old. Not until the universe is some 70 000 times its current age (about 10¹⁵ years old) will enough time have elapsed for even our oldest white dwarfs to have cooled into black dwarfs. If we take the entire current age of the universe to be one second past midnight on a single 24-hour day, then the first black dwarfs will come into existence at about 8 pm…
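A quick sanity check on that analogy, as a throwaway Python calculation (the figures are simply the ones quoted above):

```python
# If one second past midnight = the present age of the universe (13.7 billion
# years), when do the first black dwarfs (~70,000 times that age) appear?
AGE_NOW_YEARS = 13.7e9
BLACK_DWARF_YEARS = 70_000 * AGE_NOW_YEARS       # ~9.6e14, about 10^15 years

day_seconds = BLACK_DWARF_YEARS / AGE_NOW_YEARS  # 70,000 'day-seconds'
print(round(day_seconds / 3600, 1))              # ~19.4 hours -- roughly 8 pm
```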

And finally, although to the best of our knowledge stars are in no meaningful sense “alive”, I cannot help but close with a few words from Stapledon’s riotous and romantic imaginative tour de force, which is yet threaded through with the disciplined sinews of his understanding of the science of his day:

Stars are best regarded as living organisms, but organisms which are physiologically and psychologically of a very peculiar kind. The outer and middle layers of a mature star apparently consist of “tissues” woven of currents of incandescent gases. These gaseous tissues live and maintain the stellar consciousness by intercepting part of the immense flood of energy that wells from the congested and furiously active interior of the star. The innermost of the vital layers must be a kind of digestive apparatus which transmutes the crude radiation into forms required for the maintenance of the star’s life. Outside this digestive area lies some sort of coordinating layer, which may be thought of as the star’s brain. The outermost layers, including the corona, respond to the excessively faint stimuli of the star’s cosmical environment, to light from neighbouring stars, to cosmic rays, to the impact of meteors, to tidal stresses caused by the gravitational influence of planets or of other stars. These influences could not, of course, produce any clear impression but for a strange tissue of gaseous sense organs, which discriminate between them in respect of quality and direction, and transmit information to the correlating “brain” layer.

Olaf Stapledon, Star Maker (1937)

Corinne’s Shibboleth and Embodied Cognition

You can watch a bird fly by and not even hear the stuff gurgling in its stomach. How can you be so dead?

— R. A. Lafferty, Through Other Eyes

In modern usage, a shibboleth is a custom, tradition or speech pattern that can be used to distinguish one group of people from another.

The literal meaning of the original Hebrew word shibbólet is an ear of corn. However, in about 1200 BCE, the word was used by the victorious Gileadites to identify the defeated Ephraimites as they attempted to cross the river Jordan. The Ephraimites could not pronounce the “sh” sound and thus said “sibboleth” instead of “shibboleth”.

As the King James Bible puts it:

And the Gileadites took the passages of Jordan before the Ephraimites: and it was so, that when those Ephraimites which were escaped said, Let me go over; that the men of Gilead said unto him, Art thou an Ephraimite? If he said, Nay; Then said they unto him, Say now Shibboleth: and he said Sibboleth: for he could not frame to pronounce it right.

Judges 12:5-6

The same story is featured in the irresistible (but slightly weird) Brick Testament through the more prosaic medium of Lego:

[Image: the shibboleth story, retold in Lego by the Brick Testament]

Sadly, the story did not end well for the Ephraimites:

Then they took him, and slew him at the passages of Jordan: and there fell at that time of the Ephraimites forty and two thousand.

This leads us to Corinne’s Shibboleth: a question which, according to Dray and Manogue (2002), can help us separate physicists from mathematicians, but with fewer deleterious effects for both parties than the original shibboleth.
Corinne’s Shibboleth

[The screenshots showed Dray and Manogue’s question, which runs along these lines: suppose the temperature on a rectangular slab of metal is given by T(x,y) = k(x² + y²), where k is a constant. What is T(r,θ)?

A: T(r,θ) = kr²

B: T(r,θ) = k(r² + θ²)]

Mathematicians answer mainly B. Physicists answer mainly A.

This is because (according to Dray and Manogue) mathematicians “view functions as maps, taking a given input to a prescribed output. The symbols are just placeholders, with no significance.” However, physicists “view functions as physical quantities. T is the temperature here; it’s a function of location, not of any arbitrary labels used to describe the location.”

Redish and Kuo (2015) comment further on this:

[P]hysicists tend to answer that T(r,θ) = kr² because they interpret x² + y² physically as the square of the distance from the origin. If r and θ are the polar coordinates corresponding to the rectangular coordinates x and y, the physicists’ answer yields the same value for the temperature at the same physical point in both representations. In other words, physicists assign meaning to the variables x, y, r, and θ — the geometry of the physical situation relating the variables to one another.

Mathematicians, on the other hand, may regard x, y, r, and θ as dummy variables denoting two arbitrary independent variables. The variables (r, θ) or (x, y) do not have any meaning constraining their relationship.
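The physicists’ reading is easy to demonstrate numerically. A minimal sketch in Python (the constant k is arbitrary, chosen only for illustration):

```python
import math

k = 2.0  # arbitrary constant for illustration

def T_cartesian(x: float, y: float) -> float:
    """Temperature field T(x, y) = k(x^2 + y^2)."""
    return k * (x**2 + y**2)

def T_polar(r: float, theta: float) -> float:
    """The physicist's reading: same physical point, so T = k r^2 (theta drops out)."""
    return k * r**2

# The same physical point expressed in both coordinate systems:
x, y = 3.0, 4.0
r, theta = math.hypot(x, y), math.atan2(y, x)  # r = 5.0
print(T_cartesian(x, y), T_polar(r, theta))    # 50.0 50.0 -- they agree
```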

I agree with the argument put forward by Redish and Kuo that the foundation for understanding Physics is embodied cognition; in other words, that meaning is grounded in our physical experience.

Equations are not always enough. To use R. A. Lafferty’s picturesque phraseology, ideally physicists should be able to hear “the stuff gurgling” in the stomach of the universe as it flies by….

Dray, T. & Manogue, C. (2002). Vector calculus bridge project website, http://www.math.oregonstate.edu/bridge/ideas/functions

Redish, E. F., & Kuo, E. (2015). Language of physics, language of math: Disciplinary culture and dynamic epistemology. Science & Education, 24(5-6), 561-590.

The Pedagog Teaches PRAD

Queen Mary made the doleful prediction that, after her death, you would find the words ‘Philip’ and ‘Calais’ engraved upon her heart. In a similar vein, the historians of futurity might observe that, in the early years of the 21st century, the dread letters “R.I.” were burned indelibly on the hearts of many of the teachers of Britain.

In a characteristically iconoclastic post, blogger Requires Improvement ruminates on those very same words that he adopted as his nom de guerre: R.I. or “requires improvement”.

He argues convincingly that the Requirement to Improve was, in reality, nothing more than a Requirement to Conform: the best way to teach had been jolly well sorted out by your elders* and betters and arranged in a comprehensive and canonical checklist. And woe betide you if any single item on this lexicon of pedagogical virtue was left unchecked during a lesson observation!

[*Or “youngers”, in many cases.]

But what were we being asked to conform to? Requires Improvement writes:

It was (and to an extent, still is) a strange mixture of pedagogies which probably didn’t really please anyone.

It wasn’t (and isn’t) prog; if a lesson has a clear (and teacher-defined) success criterion, it can’t really be progressive. Comparing my experience as a pupil in the 1980s with that of the pupils I teach now, they are much better trained in what to write to pass exams, and their whole school experience is much more closely managed than mine was.

Equally, it wasn’t (and isn’t) trad; if the lesson model is about pupil talk, or putting generic skills above learning a canon of content, it can’t really be traditional teaching.

I think that Requires Improvement has hit the nail squarely on the head here. What we were being asked (and in many schools, are still being asked) to do is teach a weird hybrid Frankenstein’s monster of a pedagogy that combines seemingly random elements of both PRogressive and trADitional pedagogies: PRAD, if you will.

C. P. Scott said of the word ‘television’ that no good could come of a word that is half Latin and half Greek; I feel, similarly, that no good has come of the PRAD experiment.

While many proponents of PRAD counted themselves kings of infinite pedagogic space, congratulating themselves on combining the best of progressive and traditionalist ideologies, the resulting unhappy chimera in actuality reflected the poverty of mainstream educational thought.

But though our thought seems to possess this unbounded liberty, we shall find, upon a nearer examination, that it is really confined within very narrow limits, and that all this creative power of the mind amounts to no more than the faculty of compounding, transposing, augmenting, or diminishing the materials afforded us by the senses and experience. When we think of a golden mountain, we only join two consistent ideas, gold, and mountain, with which we were formerly acquainted.

— David Hume, An Enquiry Concerning Human Understanding (1748)

Rather than a magical wingèd lion that breathes fire, PRAD is a stubby-winged mishmash that can’t fly, can’t lay golden eggs, and that spends its miserable days hacking up furballs. It is time to put it out of its misery.

The p-prim path to enlightenment…?

The Duke of Wellington was once asked how he defeated Napoleon. He replied: “Napoleon’s plans were made of wire. Mine were made of little bits of string.”

In other words, Napoleon crafted his plans so that they had a steely, sinewy strength that carried them to completion. Wellington conceded that his plans were more ramshackle, hand-to-mouth affairs. The difference was that if one of Napoleon’s schemes broke or miscarried, it proved impossible to repair. When Wellington’s plans went awry, he would merely knot two loose bits of string together and carry on regardless.

I believe Andrea diSessa (1988) would argue that much of our knowledge, certainly emergent knowledge, is in the form of “little bits of string” rather than being organised efficiently into grand, coherent schemas.

For example, every human being has a set of conceptions about how the material world works that can be called intuitive physics. If a ball is thrown up in the air, most people can make an accurate prediction about what happens next. But what is the best description of the way in which intuitive physics is organised?

diSessa identifies two possibilities:

The first is an example of what I call “theory theories” and holds that it is productive to think of spontaneously acquired knowledge about the physical world as a theory of roughly the same quality, though differing in content from Newtonian or other theories of the mechanical world [ . . .]

My own view is that . . . intuitive physics is a fragmented collection of ideas, loosely connected and reinforcing, having none of the commitment or systematicity that one attributes to theories.

[p.50]

diSessa calls these fragmented ideas phenomenological primitives, or p-prims for short.

David Hammer (1996) expands on diSessa’s ideas by considering how students explain the Earth’s seasons.

Many students wrongly assume that the Earth is closer to the Sun during summer. Hammer argues that they are relying, not on a misconception about how the elliptical nature of the Earth’s orbit affects the seasons, but rather on a p-prim that closer = stronger.

The p-prims perspective does not attribute a knowledge structure concerning closeness of the earth and sun; it attributes a knowledge structure concerning proximity and intensity. Moreover, the p-prim closer means stronger is not incorrect.

[p.103]

diSessa and Hammer both argue that a misconceptions perspective assumes the existence of a stable cognitive structure where, in fact, there is none. Students may not have thought about the issue previously, and are in the process of framing thoughts and concepts in response to a question or problem. In short, p-prims may well be a better description of evanescent, emergent knowledge.

Hammer points out that the difference between the two perspectives has practical relevance to instruction. Closer means stronger is a p-prim that is correct in a wide range of contexts and is not one we should wish to eliminate.

The art of teaching therefore becomes one of refining rather than replacing students’ ideas. We need to work with students’ existing ideas and knowledge — piecemeal, inarticulate and applied-in-the-wrong-context as they may be.

Let’s get busy with those little bits of conceptual string. After all, what else have we got to work with?

REFERENCES

diSessa, A. (1988). ‘Knowledge in Pieces’. In Forman, G. & Pufall, P. (eds), Constructivism in the Computer Age. New Jersey: Lawrence Erlbaum.

Hammer, D. (1996). ‘Misconceptions or p-prims?’ Journal of the Learning Sciences, 5(2), 97–127.

Still Working Away In Our Silos (Thank Goodness)

If a thing is worth doing, it is worth doing badly.

–G. K. Chesterton, What’s Wrong With The World (1910)

Why are teachers beavering away in their individual silos, each one of us spending hours reinventing each pedagogic wheel, crafting schemes of work and resources for the new GCSEs?

Wouldn’t life be so much easier and better if we simply shared…?

Image from: http://shedart-bcrooks.blogspot.co.uk/2011/01/people-working-with-silo-mentality.html

To which I say: NO!

To be honest, my favourite part of the job is designing, crafting and re-designing resources and teaching approaches. They’re not perfect, of course. I’m reminded of a line from the opening credits of South Park: “All celebrity voices are impersonated . . . poorly.” As Chesterton remarked, if a thing is worth doing, it is worth doing badly.

But the point is, my approaches and resources are a lot less imperfect than they used to be. I flatter myself that, over the years, some of them have become . . . quite good. I believe Michael Stipe once said that in the entire history of the world there were only ever five rock and roll songs; and that REM could play two of them quite well. There’s a parallel in that most teachers have a lesson or two (or three) that they — and they alone — can teach brilliantly.

I often think that, given the right context, most students prefer shabby, bespoke individualism to shiny mass-produced perfection.

As teachers, I think we sometimes overestimate the impact that we have on our students. There is no royal road to learning, and neither can all our craft and pedagogic arts construct a conveyor belt.

As educators, the most we can hope to do is clear a few stones out of the way of our charges as they set out on the rocky path to learning.

In the end, the journey is theirs. Let us wish them well as we watch from our silos . . .

The difficulty of obtaining knowledge is universally confessed [ . . .] to reposite in the intellectual treasury the numberless facts, experiments, apophthegms and positions, which must stand single in the memory, and of which none has any perceptible connexion with the rest, is a task which, though undertaken with ardour and pursued with diligence, must at last be left unfinished by the frailty of our nature.

Samuel Johnson, The Idler, 12 January 1760

Learning Is For The Birds

​Well versed in the expanses
that stretch from earth to stars,
we get lost in the space
from earth up to our skull.

Wislawa Szymborska, To My Friends

What do we mean by learning? To tell the truth, even as a teacher of twenty-five years’ experience, I am not sure.

Professor Robert Coe has suggested that learning happens when people have to think hard. In a similar vein, Daniel Willingham contends that memory is the residue of thought. Siegfried Engelmann proposes that learning is the capacity to generalise to new examples from previous examples. I have also heard learning defined as a change in long-term memory.

One thing is certain, learning involves some sort of change in the learner’s brain. But what is acknowledged less often is that it doesn’t just happen in human brains.

Contrary to standard social science assumptions, learning is not some pinnacle of evolution attained only recently by humans. All but the simplest animals learn . . . [And some animals execute] complicated sequences of arithmetic, logic, and data storage and retrieval.
— Steven Pinker, How The Mind Works (1997), p.184

An example recounted by Pinker is that of some species of migratory birds that fly thousands of miles at night and use the constellations to find North. Humans do this too when we find the Pole Star.

But with birds it’s surely just instinct, right?

Wrong. This knowledge cannot be genetically “hardwired” into birds as it would soon become obsolete. Currently, a star known as Polaris just happens to be (nearly) directly above the Earth’s North Pole, so that as the Earth rotates on its axis, this star appears to stand still in the sky while the other stars travel on slow circular paths. But it was not always thus.

The Earth’s axis wobbles slowly over a period of twenty-six thousand years. This effect is called the precession of the equinoxes. The North Star will change over time, and oftentimes there won’t be a star at the North Celestial Pole bright enough to see with the naked eye, so there will be no “North Star” at all — just as currently there is no “South Star”. But there will be one in the future, at least temporarily, as the South Celestial Pole describes its slow precessional dance.

Over evolutionary time, a genetically hardwired instinct that pointed birds towards the current North Star or South Star would lead them astray within a mere few thousand years.

So what do the birds do?

[T]he birds have responded by evolving a special algorithm for learning where the celestial pole is in the night sky. It all happens while they are still in the nest and cannot fly. The nestlings gaze up at the night sky for hours, watching the slow rotation of the constellations. They find the point around which the stars appear to move, and record its position with respect to several nearby constellations. [p.186]
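As a toy model of that algorithm (and only a toy: nothing below is claimed about the birds’ actual neural machinery), we can recover the celestial pole from two snapshots of star positions, since the pole is the one point that does not move. The centre of rotation lies on the perpendicular bisector of each star’s displacement, so intersecting those bisectors by least squares finds it:

```python
import numpy as np

def rotation_centre(before: np.ndarray, after: np.ndarray) -> np.ndarray:
    """Estimate the fixed point of a rotation from (n, 2) arrays of star
    positions at two times. Each star's displacement d satisfies
    d . (centre - midpoint) = 0, giving one linear equation per star."""
    mid = (before + after) / 2
    d = after - before
    b = np.einsum('ij,ij->i', d, mid)        # d . midpoint, row by row
    centre, *_ = np.linalg.lstsq(d, b, rcond=None)
    return centre

# Demo: rotate 20 random 'stars' by 15 degrees about a hidden pole.
rng = np.random.default_rng(0)
pole = np.array([0.3, 0.8])
stars = rng.uniform(-1.0, 1.0, size=(20, 2))
phi = np.radians(15)
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
later = (stars - pole) @ R.T + pole
print(rotation_centre(stars, later))         # ~ [0.3, 0.8]
```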

And so there we have it: the ability to learn confers an evolutionary advantage, amongst many others.

The Curse of Zombie-Ofsted

In his wonderful book, The Mismeasure of Man, Stephen Jay Gould writes of the fallacy

of ranking, or our propensity for ordering complex variation as a gradual ascending scale. Metaphors of progress and gradualism have been among the most pervasive in Western thought . . . ranking requires a criterion for assigning all individuals to their proper status in the single series. And what better criterion than an objective number? . . . one number for each individual . . . to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups — races, classes, or sexes — are innately inferior and deserve their status. In short, this book is about the Mismeasure of Man.

Humankind seems to have an inveterate propensity for sorting the sheep from the goats. There seems to be nothing we enjoy more than placing people, races, genders, things and classes in their allocated place on some putative “Great Chain of Being.”

The Great Chain of Being is a hierarchical worldview developed in mediaeval and Renaissance times but originating from Plato and the neoplatonists. In this view, everyone and everything has its place. An eagle is superior to the “worm eating” robin; the lion is superior to the domestic dog or cat; but those furry familiars have warrant to lord it over the wolf and rabbit because of their greater utility to Man.

In other words, according to this view, Man is the paragon of animals, but is himself subject to the authority of angels and Heaven. All shall be well if each being in the Great Chain knows its place and does its allotted duty.

I believe that the Great Chain of Being is an enduring but largely unconscious idea: we notice its presence like a fish notices the presence of water — that is to say, not at all. Our continuing propensity for ranking is a comfortable habit of thought that, regrettably, all of us slip into as easily as a favourite pair of slippers.

The other fallacy identified by Gould in The Mismeasure of Man is that of

reification, or our tendency to convert abstract concepts into entities (from the Latin res, or thing). We recognize the importance of mentality in our lives and wish to characterize it, in part so that we can make the divisions and distinctions among people that our cultural and political systems dictate.

And so it continues. For example, Regional Schools Commissioner Dominic Herrington recently wrote to a school to ask for evidence that at least 80 per cent of teaching at the school “is rated to be good or better”, including in English and maths (Schoolsweek.co.uk 6/11/15) — to my mind, demonstrating the fallacies of both ranking and reification simultaneously.

For goodness’ sake, not even Ofsted does that anymore!

However, the practice is, I suspect, still common in a large number of schools as part of their appraisal systems, i.e. if you don’t get a “1” or a “2” in any one of your lesson observations then you “fail”.

The depressing truth is that even when Ofsted change their collective mind about an issue in response to evidence and reasonable argument (Yay! Go edu-bloggers!), their previous ideas and systems continue onward with almost undiminished energy, seemingly with a life and mind (or non-mind) of their own: Zombie-Ofsted, if you will.

To be fair to Ofsted, they have attempted to lay these walkers to rest by publishing clear and unequivocal guidance about their expectations regarding such nonsense as “minimal teacher talk” or “every lesson must include group work” and so on, but even such a well-meaning stake-through-the-heart has made seemingly little headway against the strong winds of the Great Chain of Being.

Zombie-Ofsted marches, or lurches, ever onward.


Like so much else in the crazy world of education these days, it makes the mind boggle. Or curdle. Or both.

Engelmann and Direct Instruction (Part 3)

I’m going to begin this post by pondering a deep philosophical conundrum (hopefully, you will find some method in my rambling madness as you read on): I want to discuss the meaning of meaning.

[Image from https://www.flickr.com/photos/christiaan_tonnis/15768628869]

Ludwig Wittgenstein begins the Philosophical Investigations (1953), perhaps one of the greatest works of 20th Century philosophy, by quoting Saint Augustine:

When they (my elders) named some object, and accordingly moved towards something, I saw this and I grasped that the thing was called by the sound they uttered when they meant to point it out. Their intention was shewn by their bodily movements . . . I gradually learnt to understand what objects they signified; and after I had trained my mouth to form these signs, I used them to express my own desires.
Confessions (397 CE), I.8

[Image from https://commons.m.wikimedia.org/wiki/File:Antonio_Rodríguez_-_Saint_Augustine_-_Google_Art_Project.jpg]

Wittgenstein uses it to illustrate a simple model of language where words are defined ostensively, i.e. by pointing. The method is, arguably, highly effective when we wish to define nouns or proper names. However, Wittgenstein contends, there are problems even here.

If I hold up (say) a pencil and point to it and say pencil out loud, what inference would an observer draw from my action and utterance?


They might well infer that the object I was holding up was called a pencil. But is this the only inference that a reasonable observer could legitimately draw?

The answer is a most definite no! The word pencil could, as far as the observer could tell from this single instance, mean any one of the following: object made of wood; writing implement; stick sharpened at one end; piece of wood with a central core made of another material; piece of wood painted silver; object that uses graphite to make marks; thin cylindrical object; object with a circular or hexagonal cross-section . . . and many more.

The important point is that one is not enough. It will take many repeated instances of pointing at a range of different pencil-objects (and perhaps not-pencil-objects too) before we can be reasonably secure that the observer has inferred the correct definition of pencil.
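We can simulate this narrowing-down in a few lines of Python (the objects and candidate meanings below are entirely hypothetical; this illustrates the logic of the argument, not how children actually learn):

```python
# Each candidate meaning of "pencil" is a test; each pointing episode is an
# example object. Only meanings consistent with every example survive.
candidates = {
    "made of wood":       lambda o: o["material"] == "wood",
    "writing implement":  lambda o: o["writes"],
    "painted silver":     lambda o: o["colour"] == "silver",
    "cylindrical object": lambda o: o["shape"] == "cylinder",
}

examples = [  # three different objects the speaker points at, saying "pencil"
    {"material": "wood",    "writes": True, "colour": "silver", "shape": "hexagonal prism"},
    {"material": "plastic", "writes": True, "colour": "red",    "shape": "cylinder"},
    {"material": "wood",    "writes": True, "colour": "yellow", "shape": "cylinder"},
]

for example in examples:
    candidates = {name: test for name, test in candidates.items() if test(example)}
    print(sorted(candidates))
# ['made of wood', 'painted silver', 'writing implement']
# ['writing implement']
# ['writing implement']   -- one example was not enough; three pinned it down
```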

If defining even a simple noun is fraught with philosophical difficulties, what hope is there for communicating more complicated concepts?

Siegfried Engelmann suggests that philosopher John Stuart Mill provided a blueprint for instruction when he framed formal rules of inductive inference in A System of Logic (1843). Mill developed these rules to aid scientific investigation, but Engelmann argues strongly for their utility in the field of education and instruction. In particular, they show “how examples could be selected and arranged to form an example set that generates only one inference, the one the teacher intends to teach.” [Could John Stuart Mill Have Saved Our Schools? (2011) Kindle edition, location 216, emphasis added].

Engelmann identifies five principles from Mill that he believes are invaluable to the educator. These, he suggests, will tell the educator:

how to arrange examples so that they rule out inappropriate inferences, how to show the acceptable range of variation in examples, and how to induce understanding of patterns and the possible effects of one pattern on another. [loc 223, emphasis added]

Engelmann considers Mill’s Method of Agreement first. (We will look at the other four principles in later posts.)

Mill states his Method of Agreement as follows:

If two or more instances of the phenomenon under investigation have only one circumstance in common, the circumstance in which alone all the instances agree, is the cause (or effect) of the given phenomenon.
A System of Logic. p.263

Engelmann suggests that with a slight change in language, this can serve as a guiding technical principle that will allow the teacher to compile a set of examples that will unambiguously communicate the required concept to the learner, while minimising the risk that the learner will — Engelmann’s bête noire! — draw an incorrect inference from the example set.

Stated in more causal terms, the teacher will identify some things with the same label or submit them to the same operation. If the examples in the teaching set share only one feature, that single feature can be the only cause of why the teacher treats instances in the same way. [Loc 233]

As an example of an incorrect application of this principle, Engelmann gives the following example set commonly presented when introducing fractions: 1/2, 1/3, and 1/4.

Engelmann argues that while they are all indeed fractions, they share more than one feature and hence violate the Method of Agreement. The incorrect inferences that a student could draw from this set would be: 1) all fractions represent numbers smaller than one; 2) numerators and denominators are always single digits; and 3) all fractions have a numerator of 1.

A better example set (argues Engelmann) would be: 5/3, 1/4, 2/50, 3/5, 10/2, 1/5, 48/2 and 7/2 — although he notes that there are thousands more possible sets that are consistent with the Method of Agreement.
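Engelmann’s criterion is mechanical enough to check by machine. A small sketch in Python (the feature tests are the three false inferences listed above; the helper name and representation are my own invention):

```python
# Represent each example as (numerator, denominator), exactly as written.
def shared_extra_features(examples):
    """Return unintended features shared by EVERY example in the set.
    Per the Method of Agreement, the only universal feature should be
    'is a fraction'; anything else licenses a false inference."""
    checks = {
        "all represent numbers smaller than one": all(n < d for n, d in examples),
        "all numerators are 1": all(n == 1 for n, _ in examples),
        "all numerators and denominators are single digits":
            all(n < 10 and d < 10 for n, d in examples),
    }
    return [name for name, holds in checks.items() if holds]

narrow = [(1, 2), (1, 3), (1, 4)]
broad = [(5, 3), (1, 4), (2, 50), (3, 5), (10, 2), (1, 5), (48, 2), (7, 2)]

print(shared_extra_features(narrow))  # all three false inferences are available
print(shared_extra_features(broad))   # [] -- the broad set rules them out
```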

Engelmann comments:

Yet many educators believe that the set limited to 1/2, 1/3, and 1/4 is well conceived. Some states ranging from North Dakota to Virginia even mandate that these fractions should be taught first, even though the set is capable of inducing serious confusion. Possibly the most serious problem that students have in learning higher math is that they don’t understand that some fractions equal one or are more than one. This problem could have been avoided with early instruction that introduced a broad range of fractions. [Loc 261]

For my part, I find Engelmann’s ideas fascinating. He seems to be building a coherent philosophy of education from what I consider to be properly basic, foundational principles, rather than some of the “castles in the air” that I have encountered elsewhere.

I will continue my exploration of Engelmann’s ideas in subsequent posts. You can find Parts 1 and 2 of this series here and here.

The series continues with Part 4 here.