Why we wrote ‘Cracking Key Concepts in Secondary Science’

From the Introduction

“We strongly believe that the central part of any science lesson or learning sequence is a well-crafted and executed explanation.

“But we are also aware that many – if not most – teachers have had very little training in how to actually go about crafting or executing their explanations. As advocates of evidence-informed teaching, we hope to bring a new perspective and set of skills to your teaching and empower you to take your place in the classroom as the imparter of knowledge.

“We do, however, wish to put paid to the suspicion that we advocate science lessons to be all chalk and talk: we strongly urge that teachers should use targeted and interactive questioning, model answers, practical work, guided practice and supported individual student practice in tandem with ‘teacher talk’. There is a time when the teacher should be a ‘guide on the side’ but the main focus of this book is to enable you to shine when you are called to be a science ‘sage on the stage’.

[…] “For many years, it seems that teacher explanation has been taken for granted. In a nation-wide focus on pedagogy, activity, student-led learning and social constructivism, the role of the teacher in taking challenging material and explaining it has been de-emphasised, with discovery, enquiry, peer-to-peer tuition and ‘figuring things out for yourself’ becoming ascendant. Not only that, but a significant number of influential organisations and individuals championed the cause of ‘talk-less teaching’ where the teacher was relegated to a near-voiceless ‘guide on the side’, sometimes enforced by observers with a stopwatch and an inflexible ‘teacher talk’ time limit.

“We earnestly hope that such egregious excesses are now a thing of the past; but we must admit that all too often, the mistakes engendered by well-meaning edu-initiatives live on, while whatever good they achieved lies composting with the CPD packs from ancient training days. Even if they are a thing of the past, there has been a collective deskilling when it comes to the crafting of a science explanation – there is little institutional wisdom and few, if any, resources for teachers to use as a reference.”

And that is one reason why we wrote the book.

Viewing waves through the lens of concrete to abstract progression

Many students have a concrete idea of a wave as something ‘wavy’, i.e. something with crests and troughs. However, in a normal teaching sequence we often shift from a wave profile representation to a wavefront representation to a ray diagram representation with little or no explanation — is it any wonder that some students get confused?

I have found it useful to consider the sequence from wave profile to wavefront to ray as a progression from the concrete to the abstract: from the familiar representation of waves as something that looks ‘wavy’ (the wave profile), to something that looks less wavy (the wavefront), to something that doesn’t look ‘wavy’ at all (the ray diagram), as summarised in the table below.

Each row of the table shows the same situation represented using different conventions, and it is important that students recognise this. You can quiz students to check they understand this idea (see also the sketch after the list below). For example:

  • Top row: which part of the wave do the straight lines in the middle picture represent? (The crests of the waves.)
  • Top row: why are the rays in the last picture parallel? (To show that the waves are not spreading out.)
  • Middle row: compare the viewpoints in the first and middle picture. (The first is ‘from the side’, the middle is ‘from above, looking down.’)
  • Middle row: why are the rays in the last picture not parallel? (Because the waves are spreading out in a circular pattern.)
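If you would like to generate your own versions of the wavefront and ray pictures for the circular-wave case, here is a minimal matplotlib sketch. It is my own illustration, not something from the book, and the styling is arbitrary:

```python
import numpy as np
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))

# Wavefront view: concentric circles mark the crests of a circular wave,
# as seen from above
theta = np.linspace(0, 2 * np.pi, 200)
for r in (1, 2, 3, 4):
    ax1.plot(r * np.cos(theta), r * np.sin(theta), color='tab:blue')
ax1.set_title('Wavefronts (crests, viewed from above)')

# Ray view: arrows at right angles to the wavefronts show the direction
# of travel, spreading out from the source
for ang in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    ax2.annotate('',
                 xy=(4 * np.cos(ang), 4 * np.sin(ang)),
                 xytext=(0.5 * np.cos(ang), 0.5 * np.sin(ang)),
                 arrowprops=dict(arrowstyle='->'))
ax2.set_title('Rays (direction of travel)')

for ax in (ax1, ax2):
    ax.set_aspect('equal')
    ax.set_xlim(-5, 5)
    ax.set_ylim(-5, 5)
    ax.axis('off')

plt.show()
```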

Once students are familiar with this shift in perspective, we can use it to explain more complex phenomena such as refraction.

For example, we begin with the wave profile representation (most concrete and familiar to most students) and highlight the salient features.

Next, we move on to the same situation represented as wavefronts (more abstract).

Finally, we move on to the most abstract ray diagram representation.


‘Cracking Key Concepts in Secondary Science’ is available in multiple formats from Amazon and Sage Publishing. You can also order the paperback and hardback versions direct from your local bookshop 🙂

We hope you enjoy the book and find it useful.

Measuring the radius of the Earth in 240 BC

The brain is wider than the sky,
For, put them side by side,
The one the other will include
With ease, and you beside.

Emily Dickinson, ‘The Brain’

Most science teachers find that ‘Space’ is one of the most enduringly fascinating topics for many students: the sense of wonder engendered as our home planet becomes lost in the empty vastness of the Solar System, which then becomes lost in the trackless star-studded immensity of the Milky Way galaxy, is a joy to behold.

But a common question asked by students is: How do we know all this? How do we know the distance to the nearest star to the Sun is 4 light-years? Or how do we know the distance to the Sun? Or the Moon?

I admit, with embarrassment, that I used to answer with a casual and unintentionally-dismissive ‘Oh well, scientists have measured them!’ which (though true) must have sounded more like a confession of faith than a sober recounting of empirical fact. Which, to be fair, it probably was, simply because I had not yet made the effort to find out how these measurements were first taken.

The technological resources available to our ancestors would seem primitive and rudimentary to our eyes but, coupled with the deep well of human ingenuity that I like to think is a hallmark of our species, they proved not just ‘world-beating’ but ‘universe-beating’.

I hope you enjoy this whistle-stop tour of this little-visited corner of the scientific hinterland, and choose to share some of these stories with your students. It is good to know that the brain is indeed ‘wider than the sky’.

I have presented this in a style and format suitable for sharing and discussing with KS3/KS4 students (11-16 year olds).

Mad dogs and Eratosthenes go out in the midday Sun…

To begin at the beginning: the first reliable measurement of the size of the Earth was made in 240 BC and it all began (at least in this re-telling) with the fact that Eratosthenes liked talking to tourists. (‘Err-at-oss-THen-ees’ with the ‘TH’ said as in ‘thermometer’ — never forget that students of all ages often welcome help in learning how to pronounce unfamiliar words.)

Alexandria (in present day Egypt) was a thriving city and a tourist magnet. Eratosthenes made a point of speaking to as many visitors as he could. Their stories, taken with a pinch of salt, were an invaluable source of information about the wider world. Eratosthenes was chief librarian of the Library of Alexandria, regarded as one of the Seven Wonders of the World at the time, and considered it his duty to collect, catalogue and classify as much information as he could.

One visitor, present in Alexandria on the longest day of the year (June 21st by our calendar), mentioned something in passing to Eratosthenes that the Librarian found hard to forget: ‘You know,’ said the visitor, ‘at noon on this day, in my home town there are no shadows.’

How could that be? pondered Eratosthenes. There was only one explanation: the Sun was directly overhead at noon on that day in Syene (the tourist’s home town, now known as Aswan).

The same was not true of Alexandria. At noon, there was a small but noticeable shadow. Eratosthenes measured the angle of the shadow at midday on the longest day. It was seven degrees.

No shadows at Syene, but a 7 degree shadow at Alexandria at the exact same time. Again, there was only one explanation: Alexandria was ‘tilted’ by 7 degrees with respect to Syene.

Seven degrees of separation

The sphericity of the Earth had been recognised by astronomers from c. 500 BC, so this difference was no surprise to Eratosthenes. But what he realised was that, since he was comparing the lengths of shadows at two places on the Earth’s surface at the same time, the 7° wasn’t just the angle of the shadow: 7° was also the angle subtended at the centre of the Earth by radial lines drawn from the two locations.

Eratosthenes paid a person to pace out the distance between Alexandria and Syene. (This was not such an odd request as it sounds to our ears: in the ancient world there were professionals called bematists who were trained to measure distances by counting their steps.)

It took the bematist nearly a month to walk that distance and it turned out to be 5000 stadia or 780 km by our measurements.

Eratosthenes then used a simple ratio method to calculate the circumference of the Earth, C. The 7° angle is to a full 360° turn as the Alexandria-Syene distance is to the whole circumference:

7 / 360 = 780 km / C

Then:

C = (360 / 7) × 780 km ≈ 40 100 km, which gives a radius r = C / 2π ≈ 6400 km.

The modern value for the radius of the Earth is 6371 km.
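If you would like to check the arithmetic (or have students do so), here is a quick Python version of the ratio method; the variable names are mine, not Eratosthenes’:

```python
import math

shadow_angle = 7.0   # degrees, measured at Alexandria at noon
distance = 780.0     # km between Alexandria and Syene (5000 stadia)

circumference = (360.0 / shadow_angle) * distance
radius = circumference / (2 * math.pi)

print(f"circumference = {circumference:.0f} km")  # about 40 100 km
print(f"radius = {radius:.0f} km")                # about 6400 km
```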

Ifs and buts…

There is still some debate as to the actual length of one Greek stadium, but Eratosthenes’ measurement is generally agreed to be within 1-2% of the modern value.

Sadly, no copies of the book in which Eratosthenes explained his method, On the Measure of the Earth, have survived from antiquity, so the version presented here is a simplified one outlined by Cleomedes in a later book. For further details, readers are directed to the excellent Wikipedia article on Eratosthenes.

Astronomer Carl Sagan also memorably explained this method in his 1980 TV documentary series Cosmos.

You might want to read…

This is part of a series exploring how humans ‘measured the size of the sky’:

Part 2: How Aristarchus measured the distance between the Earth and the Moon

Part 3: How Aristarchus measured the distance between the Earth and the Sun

Binding energy: the pool table analogy

Nuclear binding energy and binding energy per nucleon are difficult concepts for A-level physics students to grasp. I have found the ‘pool table analogy’ that follows helpful for students to wrap their heads around these concepts.

Background

Since mass and energy are not independent entities, their separate conservation principles are properly a single one — the principle of conservation of mass-energy. Mass can be created or destroyed, but when this happens, an equivalent amount of energy simultaneously vanishes or comes into being, and vice versa. Mass and energy are different aspects of the same thing.

Beiser 1987: 29

E = mc²

There, I’ve said it. This is the first time I have directly referred to this equation since starting this blog in 2013. I suppose I have been more concerned with the ‘andallthat‘-ness side of things rather than E = mc². Well, no more! E = mc² will form the very centre of this post. (And about time too!)

The E is for ‘rest energy’: that is to say, the energy an object of mass m has simply by virtue of being. It is half the energy that would be liberated if it met its antimatter doppelganger and particles and antiparticles annihilated each other. A scientist in a popular novel sternly advised a person witnessing an annihilation event to ‘Shield your eyes!’ because of the flash of electromagnetic radiation that would be produced.

Well, you could if you wanted to, but it wouldn’t do much good since the radiation would be in the form of gamma rays which are to human eyes what the sound waves from a silent dog whistle are to human ears: beyond the frequency range that we can detect.

The main problem is likely to be the amount of energy released since the conversion factor is c2: that is to say, the velocity of light squared. For perspective, it is estimated that the atomic bomb detonated over Hiroshima achieved its devastation by directly converting only 0.0007 kg of matter into energy. (That would be 0.002% of the 38.5 kg of enriched uranium in the bomb.)

Matter contains a lot of energy locked away as ‘rest energy’. But these processes which liberate rest energy are mercifully rare, aren’t they?

No, they’re not. As Arthur Beiser put it in his classic Concepts of Modern Physics:

In fact, processes in which rest energy is liberated are very familiar. It is simply that we do not usually think of them in such terms. In every chemical reaction that evolves energy, a certain amount of matter disappears, but the lost mass is so small a fraction of the total mass of the reacting substances that it is imperceptible. Hence the ‘law’ of conservation of mass in chemistry.

Beiser 1987: 29

Building a helium atom

The constituents of a helium nucleus have a greater mass when separated than they do when they’re joined together.

Here, I’ll prove it to you: the combined mass of two free protons and two free neutrons is 2 × 1.00728 u + 2 × 1.00867 u = 4.0319 u, but the mass of helium-4 is only 4.0026 u. The difference is 0.0293 u.

The change in mass due to the loss of energy as the constituents come together is an appreciable fraction of the original mass. Although 0.0293/4.0319 × 100% = 0.7% may not seem like a lot, it’s enough of a difference to keep the Sun shining.

The loss of energy is called the binding energy and for a helium atom it corresponds to a release of 27 MeV (mega electron volts) or 4.4 × 10⁻¹² joules. Since there are four nucleons (particles that make up a nucleus) then the binding energy per nucleon (which is a guide to the stability of the new nucleus) is some 7 MeV.
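As a quick check, we can convert the 0.0293 u mass defect into energy using E = mc². The constants and rounding below are my own:

```python
U_TO_KG = 1.66054e-27   # kilograms per atomic mass unit
C = 2.998e8             # speed of light in m/s
MEV = 1.602e-13         # joules per megaelectronvolt

mass_defect = 0.0293 * U_TO_KG        # mass 'lost' in building helium-4
E = mass_defect * C**2                # E = mc^2, in joules

print(f"binding energy = {E:.1e} J = {E / MEV:.0f} MeV")   # ~4.4e-12 J, ~27 MeV
print(f"per nucleon    = {E / (4 * MEV):.1f} MeV")         # ~7 MeV
```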

But why must systems lose energy in order to become more stable?

The Pool Table Analogy for binding energy

Imagine four balls on a pool table as shown.

The balls have the freedom to move anywhere on the table in their ‘unbound’ configuration.

However, what if they were knocked into the corner pocket?

To enter the ‘bound’ configuration they must lose energy: in the case of the pool balls we are talking about gravitational potential energy, a matter of some 0.30 J per ball or a total energy loss of 4 × 0.30 = 1.2 joules.

The binding energy of a pool table ‘helium nucleus’ is thus some 1.2 joules while the ‘binding energy per nucleon’ is 0.30 J. In other words, we would have to supply 1.2 J of energy to the ‘helium nucleus’ to break the forces binding the particles together so they can move freely apart from each other.

Just like a real helium nucleus, the pool table system becomes more stable when some of its constituents lose energy and less stable when they gain energy.


Reference

Beiser, A. (1987). Concepts of modern physics. McGraw-Hill Companies.

Visualising How Transformers Work

‘Transformers’ is one of the trickier topics to teach for GCSE Physics and GCSE Combined Science.

I am not going to dive into the scientific principles underlying electromagnetic induction here (although you could read this post if you wanted to), but just give a brief overview suitable for a GCSE-level understanding of:

  • The basic principle of a transformer; and
  • How step down and step up transformers work.

One of the PowerPoints I have used for teaching transformers is here. This is best viewed in presenter mode to access the animations.

The basic principle of a transformer

A GIF showing the basic principle of a transformer.
(BTW this can be copied and pasted into a presentation if you wish.)

The primary and secondary coils of a transformer are electrically isolated from each other. There is no charge flow between them.

The coils are also electrically isolated from the core that links them. The material of the core — iron — is chosen not for its electrical properties but rather for its magnetic properties. Iron is roughly 100 times more permeable (or transparent) to magnetic fields than air.

The coils of a transformer are linked, but they are linked magnetically rather than electrically. This is most noticeable when alternating current is supplied to the primary coil (green on the diagram above).

The current flowing in the primary coil sets up a magnetic field as shown by the purple lines on the diagram. Since the current is an alternating current it periodically changes size and direction 50 times per second (in the UK at least; other countries may use different frequencies). This means that the magnetic field also changes size and direction at a frequency of 50 hertz.

The magnetic field lines from the primary coil periodically intersect the secondary coil (red on the diagram). This changes the magnetic flux through the secondary coil and produces an alternating potential difference across its ends. This effect is called electromagnetic induction and was discovered by Michael Faraday in 1831.

Energy is transmitted — magnetically, not electrically — from the primary coil to the secondary coil.

As a matter of fact, a transformer core is carefully engineered so as to limit the flow of electrical current. The changing magnetic field can induce circular patterns of current flow (called eddy currents) within the material of the core. These are usually bad news as they heat up the core and make the transformer less efficient. (Eddy currents are good news, however, when they are created in the base of a saucepan on an induction hob.)

Stepping Down

One of the great things about transformers is that they can transform any alternating potential difference. For example, a step down transformer will reduce the potential difference.

A GIF showing the basic principle of a step down transformer.
(BTW this can be copied and pasted into a presentation if you wish.)

The secondary coil (red) has half the number of turns of the primary coil (green). This halves the amount of electromagnetic induction, which produces a reduced output potential difference: you put in 10 V but get out 5 V.

And why would you want to do this? One reason might be to step down the potential difference to a safer level. The output potential difference can be adjusted by altering the ratio of secondary turns to primary turns.

One other reason might be to boost the current output: for a perfectly efficient transformer (a reasonable assumption as their efficiencies are typically 90% or better) the output power will equal the input power. We can calculate this using the familiar P=VI formula (you can call this the ‘pervy equation’ if you wish to make it more memorable for your students).

Thus Vp × Ip = Vs × Is, so if Vs is reduced then Is must be increased. This is a consequence of the Principle of Conservation of Energy.
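Here is a minimal sketch of that bookkeeping for an ideal (100% efficient) transformer; the function and example numbers are my own illustration:

```python
def ideal_transformer(V_p, I_p, N_p, N_s):
    """Ideal transformer: V_s / V_p = N_s / N_p and V_p * I_p = V_s * I_s."""
    V_s = V_p * N_s / N_p
    I_s = V_p * I_p / V_s   # output power = input power (100% efficient)
    return V_s, I_s

# The step-down example from the text: halving the turns halves the p.d.
# and doubles the available current
print(ideal_transformer(V_p=10.0, I_p=1.0, N_p=20, N_s=10))  # (5.0, 2.0)
```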

Stepping up

A GIF showing the basic principle of a step up transformer.
(BTW this can be copied and pasted into a presentation if you wish.)

There are more turns on the secondary coil (red) than the primary (green) for a step up transformer. This means that there is an increased amount of electromagnetic induction at the secondary leading to an increased output potential difference.

Remember that the universe rarely gives us something for nothing as a result of that damned inconvenient Principle of Conservation of Energy. Since Vp × Ip = Vs × Is, if the output Vs is increased then Is must be reduced.

If the potential difference is stepped up then the current is stepped down, and vice versa.

Last nail in the coffin of the formula triangle…

Although many have tried, you cannot construct a formula triangle to help students with transformer calculations.

Now is your chance to introduce students to a far more sensible and versatile procedure like FIFA (more details on the PowerPoint linked to above).

A Gnome-inal Value for ‘g’

The Gnome Experiment Kit from precision scale manufacturers Kern and Sohn.

. . . setting storms and billows at defiance, and visiting the remotest parts of the terraqueous globe.

Samuel Johnson, The Rambler, 17 April 1750

That an object in free fall will accelerate towards the centre of our terraqueous globe at a rate of 9.81 metres per second per second is, at best, only a partial and parochial truth. It is 9.81 metres per second per second in the United Kingdom, yes; but the values of both the acceleration due to free fall and the gravitational field strength vary from place to place across the globe (and in the SI System of measurement, the two quantities are numerically equal and dimensionally equivalent).

For example, according to Hirt et al. (2013) the lowest value for g on the Earth’s surface is atop Mount Huascarán in Peru, where g = 9.7639 m s⁻², and the highest is at the surface of the Arctic Ocean, where g = 9.8337 m s⁻².

Why does g vary?

There are three factors which can affect the local value of g.

Firstly, the distribution of mass within the volume of the Earth. The Earth is not of uniform density, and volumes of rock within the crust of especially high or low density could affect g at the surface. The density of the rocks comprising the Earth’s crust varies between 2.6 and 2.9 g/cm³ (according to Jones 2007). This is a variation of 10%, but the crust comprises only about 1.6% of the Earth’s mass (the density of material in the mantle and core is far higher), so the variation in g due to this factor is probably of the order of 0.2%.

Secondly, the Earth is not a perfect sphere but rather an oblate spheroid that bulges at the equator, so that the equatorial radius is 6378 km but the polar radius is 6357 km. This is a variation of 0.33%, but since the gravitational force is proportional to 1/r², let’s assume that this accounts for a possible variation of the order of 0.7% in the value of g.

Thirdly, the acceleration due to the rotation of the Earth. We will look in detail at the theory underlying this in a moment, but from our rough and ready calculations above, it would seem that this is the major factor accounting for any variation in g: that is to say, g is a minimum at the equator and a maximum at the poles because of the Earth’s rotation.


The Gnome Experiment

In 2012, precision scale manufacturers Kern and Sohn used this well-known variation in the value of g to embark on a highly successful advertising campaign they called the ‘Gnome Experiment’ (see link 1 and link 2).

Whatever units their lying LCD displays show, electronic scales don’t measure mass or even weight: they actually measure the reaction force the scales exert on the item in their top pan. The reading will be affected if the scales are accelerating.

In diagram A, the apple is not accelerating so the resultant upward force on the apple is exactly 0.981 N. The scales show a reading of 0.981/9.81 = 0.100 000 kg = 100.000 g (assuming, of course, that they are calibrated for use in the UK).

In diagram B, the apple and scales are in an elevator that is accelerating upward at 1.00 metres per second per second. The resultant upward force must therefore be larger than the downward weight as shown in the free body diagram. The scales show a reading of 1.081/9.81 = 0.110 194 kg = 110.194 g.

In diagram C, the apple and scales are in an elevator that is accelerating downwards at 1.00 metres per second per second. The resultant upward force must therefore be smaller than the downward weight as shown in the free body diagram. The scales show a reading of 0.881/9.81 = 0.089 806 kg = 89.806 g.


Never mind the weight, feel the acceleration

Now let’s look at the situation of the Kern gnome mentioned above. The gnome was measured to have a ‘mass’ (or ‘reaction force’ calibrated in grams, really) of 309.82 g at the South Pole.

Showing this situation on a diagram:

Looking at the free body diagram for Kern the Gnome at the equator, we see that his reaction force must be less than his weight in order to produce the required centripetal acceleration towards the centre of the Earth. Assuming the scales are calibrated for the UK, this would predict a reading on the scales of 3.029/9.81 = 0.30875 kg = 308.75 g.
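Here is a rough Python sketch of that prediction. The values I use for g at the poles, the Earth’s angular speed and the equatorial radius are my own assumptions rather than Kern’s figures:

```python
import math

g_cal = 9.81                  # m/s^2, value the scales are assumed calibrated for (UK)
R_pole = 0.30982 * g_cal      # N, reaction force implied by the 309.82 g polar reading
g_pole = 9.8322               # m/s^2, approximate g at the poles (my assumption)
m = R_pole / g_pole           # kg, the gnome's true mass

omega = 2 * math.pi / 86164   # rad/s, Earth's angular speed (one sidereal day)
r_eq = 6.378e6                # m, equatorial radius

R_eq = R_pole - m * omega**2 * r_eq     # subtract the centripetal term
print(f"{R_eq / g_cal * 1000:.2f} g")   # about 308.75 g
```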

The actual value recorded at the equator during the Gnome Experiment was 307.86 g, a discrepancy of 0.3% which would suggest a contribution from one or both of the first two factors affecting g as discussed at the beginning of this post.

Although the work of Hirt et al. (2013) may seem the definitive scientific word on the gravitational environment close to the Earth’s surface, there is great value in taking measurements that are perhaps more directly understandable to check our comprehension: and that I think explains the emotional resonance that many felt in response to the Kern Gnome Experiment. There is a role for the ‘artificer’ as well as the ‘philosopher’ in the scientific enterprise on which humanity has embarked, but perhaps Samuel Johnson put it more eloquently:

The philosopher may very justly be delighted with the extent of his views, the artificer with the readiness of his hands; but let the one remember, that, without mechanical performances, refined speculation is an empty dream, and the other, that, without theoretical reasoning, dexterity is little more than a brute instinct.

Samuel Johnson, The Rambler, 17 April 1750

References

Hirt, C., Claessens, S., Fecher, T., Kuhn, M., Pail, R., & Rexer, M. (2013). New ultrahigh-resolution picture of Earth’s gravity field. Geophysical Research Letters, 40(16), 4279-4283.

Jones, F. (2007). Geophysics Foundations: Physical Properties: Density. University of British Columbia website, accessed on 2/5/21.


Mnemonics for the S.I. Prefixes

The S.I. System of Weights and Measures may be a bit of a dog’s dinner, but at least it’s a dog’s dinner prepped, cooked, served and — more to the point — eaten by scientists.

A brief history of the Système international d’unités

It all began with the mètre (“measure”), of course. This was first proposed as a universal measure of distance by the post-Revolutionary French Academy of Sciences in 1791. According to legend (well, not legend precisely — think of it as random speculative gossip, if you prefer), they first proposed that the metre should be one millionth of the distance from the North Pole to the equator.

When that turned out to be a little on the large side, they reputedly shrugged in that inimitable Gallic fashion and said: “D’accord, faisons un dix millionième alors, mais c’est ma dernière offre.” (“OK, let’s make it one ten-millionth then, but that’s my final offer.”)

Since then, what measurement-barbarians loosely (and egregiously incorrectly) call “the metric system” has been through many iterations and revisions to become the S.I. System. Its full name is the Système international d’unités which pays due honour to France’s pivotal role in developing and sustaining it.

When some of those same measurement-barbarians call for a return to the good old “pragmatic” British Imperial System of inches and poundals, I urge all fair-minded people to tell them, as kindly as possible, that they can’t: not now, not ever.

Since 1930, the inch has been defined as 25.4 millimetres. (It was, so I believe, the accuracy and precision needed to design and build jet engines that led to the redefinition. The older definitions of the inch simply weren’t precise enough.)

You simply cannot replace the S.I. system. You can, however, dress it up a little bit and call a distance of 25.4 millimetres “one inch” if you really want to — but, in the end, what would be the point of that?

The Power of Three (well, ten to the third power, anyways)

For human convenience, the S.I. system includes prefixes. So a large distance might be measured in kilometres, where the prefix kilo- indicates multiplying by a factor of 1000 (or 10 raised to the third power). The distance between the Globe Theatre in London and Slough Station is 38.6 km. Longer distances, such as that between London and New York, NY, would be 5.6 megametres (or 5.6 Mm — note the capital ‘M’ for mega [one million] to avoid confusion with the prefix milli-).

The S.I. System has prefixes for all occasions, as shown below.

The ‘big’ SI prefixes: kilo (k, ×10³), mega (M, ×10⁶), giga (G, ×10⁹), tera (T, ×10¹²), peta (P, ×10¹⁵), exa (E, ×10¹⁸), zetta (Z, ×10²¹) and yotta (Y, ×10²⁴).
Note that every one of them, except for kilo, is represented by a capital letter.

Note also that one should convert all prefixes into standard units for calculations, e.g. meganewtons should be converted to newtons. The sole exception is kilograms, because the base unit is the kilogram, not the gram: a megagram should be converted into kilograms, not grams. I trust that’s clear. (Did I mention the “dog’s dinner” part yet?)
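Here is a minimal sketch of that conversion rule. It is my own illustration: the prefix set is truncated for brevity and ‘u’ stands in for μ:

```python
PREFIX_POWER = {'T': 12, 'G': 9, 'M': 6, 'k': 3, '': 0,
                'm': -3, 'u': -6, 'n': -9, 'p': -12}

def to_base_units(value, prefix):
    """Convert a prefixed value to base units, e.g. 3.2 MN -> 3.2e6 N."""
    return value * 10 ** PREFIX_POWER[prefix]

print(to_base_units(3.2, 'M'))          # 3200000.0 (newtons)

# The kilogram gotcha: 1.5 Mg should become 1500 kg, not stay in grams,
# so convert to grams first and then divide by 1000
print(to_base_units(1.5, 'M') / 1000)   # 1500.0 (kilograms)
```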

For perspective, the distance between Earth and the nearest star outside our Solar System is 40 petametres, and the current age of the universe is estimated to be 0.4 exaseconds (give or take a petasecond or two).

A useful mnemonic for remembering these is Karl Marx Gives The Proletariat Eleven Zeppelins (and one can imagine the proletariat expressing their gratitude by chanting in chorus: “Yo! Ta, Mr Marx!” as they march bravely forward.)


But what about the little prefixes?

Milli- we have already covered above. The diameter of one of your red blood cells is 8 micrometres, and the time it takes light to travel a distance equal to the diameter of a hydrogen atom is 300 zeptoseconds.

Again, there is an SI prefix for every occasion:

The ‘little’ SI prefixes: milli (m, ×10⁻³), micro (μ, ×10⁻⁶), nano (n, ×10⁻⁹), pico (p, ×10⁻¹²), femto (f, ×10⁻¹⁵), atto (a, ×10⁻¹⁸), zepto (z, ×10⁻²¹) and yocto (y, ×10⁻²⁴).
(Handily, all of them are represented by lower case letters — including micro, which is shown as the lower case Greek letter ‘mu’, μ.)

A useful mnemonic would be: Millie’s Microphone Needs a Platform For Auditioning Zebra Yodellers.

For the record, GCSE Physics students are expected to know the SI prefixes between giga- and pico-, but if you’re in for a pico- then you’re in for anything between a yotta- and a yocto- in my opinion (if you catch my drift).

Very, very, very small to very, very, very big

The mean lifetime of a Z-boson (the particle that carries the Weak force) is 0.26 yoctoseconds.

According to our current understanding of physics, the stars will have stopped shining and all the galaxies will dissipate into dissociated ions some 315 yottaseconds from now.

Apart from that, happy holidays everyone!

Reducing Cognitive Overload in Practicals by graphing with Excel

Confession, they say, is good for the soul. I regret to say that for far too many years as a Science teacher, I was in the habit of simply ‘throwing a practical’ at a class in the belief that it was the best way for students to learn.

However, I now believe that this is not the case. It is another example of the ‘curse of the expert’. As a group, Science teachers are (whether you believe this of yourself and your colleagues or not) a pretty accomplished group of professionals. That is to say, we don’t struggle to use measuring instruments such as measuring cylinders, metre rules (not ‘metre sticks’, please, for the love of all that’s holy), ammeters or voltmeters. Through repeated practice, we have pretty much mastered tasks such as tabulating data, calculating the mean, scaling axes and plotting graphs to the point of automaticity.

But our students have not. The cognitive load of each of the myriad tasks associated with the successful completion of a full practical should not be underestimated. For some students, it must seem like we’re asking them to climb Mount Everest while wearing plimsolls and completing a cryptic crossword with one arm tied behind their back.

One strategy for managing this cognitive load is Adam Boxer’s excellent Slow Practical method. Another strategy, which can be used in tandem with the Slow Practical method or on its own, is to ‘atomise’ the practical and focus on specific tasks, as Fabio Di Salvo suggests here.

Simplifying Graphs (KS3 and KS4)

If we want to focus on our students’ graph scaling and plotting skills, it is often better to supply the data they are required to plot. If the focus is interpreting the data, then Excel provides an excellent tool for either: a) providing ready scaled axes; or b) completing the plotting process.

Typical exam board guidance states that computer-drawn graphs are acceptable provided they are approximately A4 sized and a ‘fine grid’ similar to that of standard graph paper (say 2 mm by 2 mm) is used.

Excel has the functionality to produce ‘fine grids’ but this can be a little tricky to access, so I have prepared a generic version here: Simple Graphs workbook link.

Data is entered on the DATA1 tab. (BTW if you wish to access the locked non-green cells, go to Review > Unprotect Sheet.)

The data is automatically plotted on the ‘CHART1 (with plots)’ tab.

Please note that I hardly ever use the automatic trendline drawing functionality of Excel as I think students always need practice at drawing a line of best fit from plotted points.

Alternatively, the teacher can hand out a ‘blank’ graph with scaled axes using the ‘CHART1 (without plots)’ tab.

Using the Simple Graph workbook with a class

I have used this successfully with classes in a number of ways:

  • Plotting the data of a demo ‘live’ and printing out a copy of the completed graph for each student.
  • Supplying laptops or tablets so that students can enter their own data ‘live’.
  • Posting the workbook on a VLE so that students can process their own data later or for homework.

Adjusting the Simple Results Graph workbook for different ranges

But what if the data range you wish to enter is vastly different from the generic values I have randomly chosen?

It may look like a disaster, but it can be resolved fairly easily.

Firstly, right click (or ctrl+click on a Mac) on any number on the x-axis. Select ‘Format Axis’ and navigate to the sub-menu that has the ‘Maximum’ and ‘Minimum’ values displayed.

Since my max x data value is 60, I have chosen 70. (BTW clicking on the curved arrow may activate the auto-ranging function.)

I also choose a suitable value of ‘10’ for the ‘Major unit’, which is where the tick marks appear. And I also choose a value of ‘1’ for the ‘Minor unit’ (generally ‘Major unit’/10 is a good choice).

Next, we right click on any number on the y-axis and select ‘Format Axis’. Going through a similar process for the y-axis yields this:

… which, hopefully, means ‘JOB DONE’.

Plotting More Advanced Graphs at KS4 and KS5

The ‘Results Graph (KS4 and KS5)’ workbook (click on link to access and download) will not only calculate the mean of a set of repeats, but will also calculate absolute uncertainties, percentage uncertainties and plot error bars.

Again, I encourage students to manually draw a line of best fit for the data, and (possibly) calculate a gradient and so on.
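If you would like to see the same calculations outside Excel, here is a minimal matplotlib sketch that computes means and half-range uncertainties from invented repeat readings and plots error bars. This is only an illustration of the workbook’s logic, not the workbook itself:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])    # independent variable
repeats = np.array([[2.1, 2.3, 2.2],            # three repeat readings
                    [4.0, 4.4, 4.2],            # per value of x (invented data)
                    [6.3, 6.1, 6.2],
                    [8.2, 8.6, 8.4],
                    [10.3, 10.1, 10.2]])

mean = repeats.mean(axis=1)
abs_unc = (repeats.max(axis=1) - repeats.min(axis=1)) / 2   # half the range
pct_unc = 100 * abs_unc / mean                              # percentage uncertainty

plt.errorbar(x, mean, yerr=abs_unc, fmt='x', capsize=3)
plt.minorticks_on()
plt.grid(which='both')    # fine grid, as the exam boards like it
plt.show()
```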

And finally…

If you find these Excel workbooks useful, please leave a comment on this blog or Tweet a link (please add @emc2andallthat to alert me).

Happy graphing, folks 🙂

Magnetism? THERE IS NO MAGNETISM!!!!

Has a school physics experiment or demonstration ever changed the course of human history?

On 21 April 1820, one such demonstration most definitely did. According to physics lore, Hans Christian Øersted was attempting to demonstrate to his students that, according to the scientific understanding of the day, there was in fact no connection between magnetism and electricity.

To this laudable end, he placed a compass needle near to a wire to show that when the current was switched on, the needle would not be affected.

Except that it was affected. Frequently. Each and every time Øersted switched on the electric current, the needle was deflected from pointing North.

Everybody has heard that wise old saw that ‘If it doesn’t work, it’s physics…’ except that in this case ‘It did actually work as it was supposed to but in an unexpected way due to a hitherto-unknown-completely-new-branch-of-physics.’

Øersted, to his eternal credit, did not let it lie there and was a pioneer of the new science of electromagnetism.

Push-me-pull-you: or, two current-carrying conductors

One curious consequence of Øersted’s new science was the realisation that, since electric currents create magnetic fields, two wires carrying electric currents will exert a force on each other.

Let’s consider two long, straight conductors placed parallel to each other as shown.

Two long, straight, parallel conductors, A and B, carrying currents in the same direction.

In the diagram above, the magnetic field produced by the current in A is shown by the green lines. Applying Fleming’s Left Hand Rule* to conductor B, we find that a force is produced on B which acts towards conductor A. We could go through a similar process to find the force acting on A, but it’s far easier to apply Newton’s Third Law instead: if body A exerts a force on body B, then body B exerts an equal and opposite force on body A. Hence, conductor A experiences a force which pulls it towards conductor B.

So, two long, straight conductors carrying currents in the same direction will be attracted to each other. By a similar analysis, we find that two long, straight conductors carrying currents in opposite directions will be repelled from each other.

In the past, this phenomenon was used to define the ampere as the unit of current: ‘The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed 1 m apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newton per metre of length.‘ However, the 2019 redefinition of the SI system has ditched this and adopted a new definition in terms of the transfer of the elementary charge, e.
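As a quick check that the old definition is self-consistent, the force per unit length between two long parallel conductors is F/L = μ₀I₁I₂/(2πd). Here is a minimal sketch (my own illustration, not from the original definition documents):

```python
import math

mu0 = 4 * math.pi * 1e-7    # permeability of free space (exact, pre-2019 SI)
I1 = I2 = 1.0               # amperes
d = 1.0                     # metres of separation

force_per_metre = mu0 * I1 * I2 / (2 * math.pi * d)
print(force_per_metre)      # 2e-07 newtons per metre, as in the old definition
```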

Enter Albert Einstein, pursuing an enigma

What is the connection between magnetism and electricity? It was precisely this puzzle that started Albert Einstein on the road to special relativity. It is one of the unsung triumphs of this theory that it lays bare the connection between magnetism and electricity.

In what follows, we’re going to apply Einstein’s analysis to the situation of two long, straight current-carrying conductors. Acknowledgment: I’m going to follow a line of argument laid out in Beiser 1988: 19-22.

It’s gotta be perfect (or ‘idealised’, if you prefer)

Let’s consider two idealised conductors A and B both at rest in the inertial reference frame of the laboratory. The flow of charge in both conductors is made up of positive and negative charge carriers moving in opposite directions with a speed v.

None of the charges in A interact with the other charges in A because we are considering an idealised conductor. However, the charges in A will interact with the charges in B.

Two conductors viewed from the inertial frame of the laboratory

Flip the inertial reference frame

Now let’s look at the situation from the inertial reference frame of one of the positive charges in A. For simplicity, we can focus on a single positive charge in A since it does not interact with any of the other charges in A.

With reference to this inertial frame, the positive charge in A is stationary and the positive charges in B are also stationary.

However, the inertial frame of the laboratory is moving right-to-left with a speed v and the negative charges are moving right-to-left with a speed of 2v.

The same two conductors viewed from the inertial frame of one of the positive charges in conductor A. Note that all the positive charges are now stationary; the laboratory is moving with speed v right to left, and the negative charges are moving with speed 2v right to left

Since the positive charges in B are stationary with respect to the positive charge in A, the distance between them is the same as it was in the laboratory inertial frame. However, since the negative charges in B are moving with speed 2v with respect to the positive charge in A, the spacing between them is contracted due to relativistic length contraction (see Lottie and Lorentzian Length Contraction).

Because of this, the negative charge density of B increases, since the negative charges are closer together. However, the positive charge density of B remains the same, since the positive charges are stationary relative to the positive charge in A and so there is no length contraction.

This means that, as far as the positive charge in A is concerned, conductor B has a net negative charge which means the positive charge experiences an attractive Coulomb’s Law electrical force towards B.
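To make the length-contraction step concrete, here is a toy Python calculation. It follows the post’s simplified treatment in which the negative carriers move at 2v in the new frame (strictly, relativistic velocity addition gives a little less than 2v), and the drift speed is enormously exaggerated so that the effect is visible:

```python
def gamma(v):
    """Lorentz factor for a speed v expressed as a fraction of c."""
    return 1.0 / (1.0 - v ** 2) ** 0.5

v = 0.3          # carrier speed in the lab frame (wildly exaggerated drift speed)
lam = 1.0        # magnitude of each linear charge density before contraction

# In the frame of a positive charge in A (currents in the same direction):
lam_pos_B = lam                    # positive charges in B are at rest: unchanged
lam_neg_B = lam * gamma(2 * v)     # negative charges move at ~2v: contracted spacing

print(lam_neg_B - lam_pos_B)       # > 0, so B appears negatively charged overall
```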

A similar analysis applied to electric currents in opposite directions would show that the positive charge in A would experience a repulsive Coulomb’s Law electrical force. The spacing between the positive charges in B would be contracted but the spacing between the negative charges remains unchanged, so conductor B has a net positive charge because the positive charge density has increased but the negative charge density is unchanged.

Magnetism? THERE IS NO MAGNETISM!!!!

So what we normally think of as a ‘magnetic’ force in the inertial frame of the laboratory can be explained as a consequence of special relativity altering the charge densities in conductors. Although we have just considered a special case, all magnetic phenomena can be interpreted on the basis of Coulomb’s Law, charge invariance** and special relativity.

For the interested reader, Duffin (1980: 388-390) offers a quantitative analysis where he uses a similar argument to derive the expression for the magnetic field due to a long straight conductor.

Update: I’m also indebted to @sbdugdale, who points out that there’s a good treatment of this in the Feynman Lectures on Physics, section 13.6.

Notes and references

* Although you could use a non-FLHR catapult field analysis, of course

** ‘A current-carrying conductor that is electrically neutral in one frame of reference might not be neutral in another frame. How can this observation be reconciled with charge invariance? The answer is that we must consider the entire circuit of which the conductor is a part. Because a circuit must be closed for a current to occur in it, for every current element in one direction that a moving observer find to have, say, a positive charge, there must be another current element in the opposite direction which the same observer finds to have a negative charge. Hence, magnetic forces always act between different parts of the same circuit, even though the circuit as a whole appears electrically neutral to all observers.’ Beiser 1988: 21

Beiser, A. (1988). Concepts of modern physics. Tata McGraw-Hill Education

Duffin, W. J. (1980). Electricity and magnetism. McGraw-Hill.

The Acceleration Required Practical Without Light Gates (And Without Tears)

Introduction

The AQA GCSE Required Practical on Acceleration (see pp. 21-22 and pp. 55-57) has proved to be problematic for many teachers, especially those who do not have access to a working set of light gates and data logging equipment.

In version 3.8 of the Practical Handbook (pre-March 2018), AQA advised using the following equipment featuring a linear air track (LAT). The “vacuum cleaner set to blow” (or, more likely, a specialised LAT blower) creates a cushion of air that minimises friction between the glider and track.


However, in version 5.0 (dated March 2018) of the handbook, AQA put forward a very different method where schools were advised to video the motion of the car using a smartphone in an effort to obtain precise timings at the 20 cm, 40 cm and other marks.


It is possible that AQA published the revised version in response to a number of schools contacting them to say: “We don’t have a linear air track. Or light gates. Or a ‘vacuum cleaner set to blow’.”

The weakness of the “new” version (at least in my opinion) is that it is not quantitative: the method suggested merely records the times at which the toy car passed the lines. Many students may well be able to indirectly deduce the relationship between resultant force and acceleration from this raw timing data; but, to my mind, it would be cognitively less demanding if they were able to compare measurements of resultant force and acceleration instead.

Adapting the AQA method to make it quantitative

The simplified setup: a toy car on a bench-top runway, pulled by a weight stack hanging from a pulley at the edge of the bench.

We simplify the AQA method as shown above: we time how long the toy car takes to complete the whole journey from start to finish.

If a runway of one metre or longer is set up, then the total time for the journey of the toy car will be 20 seconds or so for the smallest accelerating weight: this makes manual timing perfectly feasible.

Important note: the length of the runway will be limited by the height of the bench. As soon as the weight stack hits the floor, the toy car will no longer experience an accelerating force and, while it may continue at a constant speed (more or less!) it will no longer be accelerating. In practice, the best way to sort this out is to pull the toy car back so that the weight stack is just below the pulley and mark this position as the start line; then slowly move the toy car forward until the weight stack is just touching the floor, and mark this position as the finish line. Measure the distance between the two lines and this is the length of your runway.

In addition, the weight stack should feature very small masses; that is to say, if you use 100 g masses then the toy car will accelerate very quickly and manual timing will prove to be impossible. In practice, we found that adding small metal washers to an improvised hook made from a paper clip worked well. We found the average mass of the washers by placing ten of them on a scale.

Then input the data into this spreadsheet (click the link to download from Google Drive) and the software should do the rest (including plotting the graph!).

The Eleventh Commandment: Thou Shalt Not Confound Thy Variables!

To confirm the straight line and directly proportional relationship between accelerating force and acceleration, bear in mind that the total mass of the accelerating system must remain constant in order for it to be a “fair test”.

The parts of our system that are accelerating are the toy car, the string and the weight stack. The total mass of the accelerating system shown below is 461 g (assuming the mass of the hook and the string are negligible).

The accelerating (or resultant) force is the weight of the 0.2 g mass on the hook, which can be calculated using W = mg and will be equal to 0.00196 N or 1.96 mN.

In the second diagram, we have increased the mass on the weight stack to 0.4 g (and the accelerating force to 0.00392 N or 3.92 mN) but note that the total mass of the accelerating system is still the same at 461 g.

In practice, we found that using blu-tac to stick a matchbox tray to the roof of the car made managing and transferring the weight stack easier.

Personal note: as a beginning teacher, I demonstrated the linear air track version of this experiment to an A-level Physics class and ended up disconfirming Newton’s Second Law instead of confirming it; I was both embarrassed and immensely puzzled until an older, wiser colleague pointed out that the variables had been well and truly confounded by not keeping the total mass of the accelerating system constant.

It was embarrassing and that’s why I always harp on about this aspect of the experiment.

What lies beneath: the Physics underlying this method

This can be considered as “deep background” rather than necessary information, but I, for one, consider it really interesting.

Acceleration is the rate of change of a rate of change. Velocity is the rate of change of displacement with time and acceleration is the rate of change of velocity.

Interested individuals may care to delve into higher derivatives like jerk, snap, crackle and pop (I kid you not — these are the technical terms). Jerk is the rate of change of acceleration and hence can be defined as (takes a deep breath) the rate of change of a rate of change of rate of change. More can be found in the fascinating article by Eager, Pendrill and Reistad (2016) linked to above.

But on a much more prosaic level, acceleration can be defined as a = (v – u) / t, where v is the final instantaneous velocity, u is the initial instantaneous velocity and t is the time taken for the change.

The instantaneous velocity is the velocity at a momentary instant of time. It is, if you like, the velocity indicated by the needle on a speedometer at a single instant of time, and is different from the average velocity, which is calculated from the total distance travelled divided by the time taken.

This can be shown in diagram form like this:

However, our experiment is simplified because we made sure that the toy car was stationary when the timer was zero; in other words, we ensured u = 0 m/s.

This simplifies a = (v – u) / t to a = v / t.

But how can we find v, the instantaneous velocity at the end of the journey when we have no direct means of measuring it, such as a speedometer or a light gate?

No more jerks left to give

Let’s assume that, for the toy car, the jerk is zero (again, let me emphasize that jerk is a technical term defined as the rate of change of acceleration).

This means that the acceleration is constant.

This fact allows us to calculate the average velocity using a very simple formula: average velocity = (u + v) / 2 .

But remember that u = 0 so average velocity = v / 2 .

More pertinently for us, provided that u = 0 and jerk = 0, it allows us to calculate a value for v using v = 2 × (average velocity) .

The spreadsheet linked to above uses this formula to calculate v and then uses a = v / t.
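For reference, here is a minimal sketch of the calculation the spreadsheet performs; it is my own illustration of the formulas above, not the spreadsheet itself:

```python
def acceleration(distance_m, time_s):
    """Acceleration for a run that starts from rest (u = 0) with constant
    acceleration (zero jerk):
      average velocity = distance / time
      final velocity   v = 2 * average velocity
      acceleration     a = v / time   (equivalently a = 2s / t**2)
    """
    v = 2.0 * distance_m / time_s
    return v / time_s

print(acceleration(1.0, 20.0))   # 1 m runway covered in 20 s -> 0.005 m/s^2
```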

Using this in the school laboratory

This could be done as a demonstration or, since only basic equipment is needed, a class experiment. Students may need access to computers running the spreadsheet during the experiment or soon afterwards. We found that one laptop shared between two groups was sufficient.

First experiment (relationship between force and acceleration): set up as shown in the diagram. Place washers totalling a mass of 0.8 g (or similar) on the toy car and washers totalling a mass of 0.2 g on the hook or weight stack. Hold the toy car stationary at the start line. Release and start the timer. Stop the timer when the car crosses the finish line. Input the data into the spreadsheet and repeat with a different mass on the hook.

It can be useful to get students to manually “check” the value of a calculated by the spreadsheet to provide low stakes practice of using the acceleration formula.

Second experiment (relationship between mass and acceleration). Keep the accelerating force constant with (say) 0.6 g on the hook or weight stack. Hold the toy car stationary at the start line. Release and start the timer. Stop the timer. Input data into the second tab on the spreadsheet and repeat with 100 g added to the toy car (possibly blu-tac’ed into place).

Conclusion

This blog post grew in the telling. Please let me know if you try the methods outlined here and how successful you found them.

References

Eager, D., Pendrill, A. M., & Reistad, N. (2016). Beyond velocity and acceleration: jerk, snap and higher derivatives. European Journal of Physics, 37(6), 065008.

Postscript

You can read Part Deux of this blogpost, which details an adaptation of this experiment to work with dynamics trolleys and other standard laboratory equipment.

The Life and Death of Stars

Stars, so far as we understand them today, are not “alive”.

Now and again we saw a binary and a third star approach one another so closely that one or other of the group reached out a filament of its substance toward its partner. Straining our supernatural vision, we saw these filaments break and condense into planets. And we were awed by the infinitesimal size and the rarity of these seeds of life among the lifeless host of the stars. But the stars themselves gave an irresistible impression of vitality. Strange that the movements of these merely physical things, these mere fire-balls, whirling and traveling according to the geometrical laws of their minutest particles, should seem so vital, so questing.

Olaf Stapledon, Star Maker (1937)


And yet, it still makes sense to speak of a star being “born”, “living” and even “dying”.

We have moved on from Stapledon’s poetic description of the formation of planets from a filament of star-stuff gravitationally teased-out by a near-miss between passing celestial orbs. This was known as the “Tidal Hypothesis” and was first put forward by Sir James Jeans in 1917. It implied that planets circling stars would be an incredibly rare occurrence.

Today, it would seem that the reverse is true: modern astronomy tells us that planets almost inevitably form as a nebula collapses to form a star. It appears that stars with planetary systems are the norm, rather than the exception.

Be that as it may, the purpose of this post is to share a way of teaching the “life cycle” of a star that I have found useful, and that many students seem to appreciate. It uses the old trick of using analogy to “couch abstract concepts in concrete terms” (Steven Pinker’s phrase).


I find it humbling to consider that currently there are no black dwarf stars anywhere in the observable universe, simply because the universe isn’t old enough. The universe is merely 13.7 billion years old. Not until the universe is some 70 000 times its current age (about 10¹⁵ years old) will enough time have elapsed for even our oldest white dwarfs to have cooled to become black dwarfs. If we take the entire current age of the universe to be one second past midnight on a single 24-hour day, then the first black dwarfs will come into existence at 8 pm in the evening…
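The ‘8 pm’ figure is easy to verify; here is a two-line check using my own rounded values:

```python
age_now = 13.7e9    # years, current age of the universe
age_bd = 1e15       # years, rough epoch of the first black dwarfs

# If the current age is scaled to 1 second past midnight, the first
# black dwarfs appear this many hours into the 24-hour day:
print((age_bd / age_now) / 3600)   # about 20 hours, i.e. a little after 8 pm
```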

And finally, although to the best of our knowledge, stars are in no meaningful sense “alive”, I cannot help but close with a few words from Stapledon’s riotous and romantic imaginative tour de force that is yet threaded through with the disciplined sinews of Stapledon’s understanding of the science of his day:

Stars are best regarded as living organisms, but organisms which are physiologically and psychologically of a very peculiar kind. The outer and middle layers of a mature star apparently consist of “tissues” woven of currents of incandescent gases. These gaseous tissues live and maintain the stellar consciousness by intercepting part of the immense flood of energy that wells from the congested and furiously active interior of the star. The innermost of the vital layers must be a kind of digestive apparatus which transmutes the crude radiation into forms required for the maintenance of the star’s life. Outside this digestive area lies some sort of coordinating layer, which may be thought of as the star’s brain. The outermost layers, including the corona, respond to the excessively faint stimuli of the star’s cosmical environment, to light from neighbouring stars, to cosmic rays, to the impact of meteors, to tidal stresses caused by the gravitational influence of planets or of other stars. These influences could not, of course, produce any clear impression but for a strange tissue of gaseous sense organs, which discriminate between them in respect of quality and direction, and transmit information to the correlating “brain” layer.

Olaf Stapledon, Star Maker (1937)