From the Earth to the Moon in 270 BC

The brain is wider than the sky,

For, put them side by side,

The one the other will include

With ease, and you beside.

Emily Dickinson

How did human beings first work out the distance from the Earth to the Moon?

Aristarchus of Samos (310 BC – 230 BC) figured out a way to do so in terms of the radius of the Earth in 270 BC. Combined with Eratosthenes’ measurement of the radius of the Earth (c. 240 BC), it enabled people to calculate the actual distance to the Moon. The ancient Greeks used a measurement of distance called stadia (singular: stadium), but we will present the measurements here in terms of kilometres.

Magic with a shadow, not with mirrors

Aristarchus used the fact that the Moon passes through the Earth’s shadow during a total lunar eclipse, which happens, on average, once every two to three years.

What does a total lunar eclipse look like? Watch this amazing 33 second time lapse video from astrophotographer Bartosz Wojczyński.

https://www.youtube.com/watch?v=LK_44AbfH2Q Note that Mr Wojczyński altered the exposure time of each shot to compensate for the reduced brightness of the Moon as it crossed into the shadow. For reference, the exposure time for the brightly lit Moon was 1/2500 second, and for the dim ‘Blood Moon’ (turned red by sunlight refracted by the Earth’s atmosphere) it was 6 seconds.

The video is sped up so that 1 second of video represents 8 minutes of real time. In the video, the Moon is in shadow for 24 seconds which equates to 8 x 24 = 192 minutes or 3 hours 12 minutes. We will use this later to model Aristarchus’ original calculation.

It’s always Aristarchus before the dawn…

Aristarchus began with the assumption that the Earth of radius r creates a cylinder of shadow that is 2r wide as shown in the diagram below.

The Moon orbits the Earth on a roughly circular path of radius R, so it covers a total distance of 2πR in each orbit. This means that its average speed over its whole journey is 2πR/T, where T is the orbital period of the Moon, which is 27.3 days or 27.3 x 24 = 655.2 hours.

The average speed of the Moon as it passes through the Earth’s shadow is 2r / t where t is the time for a lunar eclipse (3 hours 12 minutes, in our example).

The average speed of the Moon is the same in both instances, so we can write:

2πR / T = 2r / t

We can simplify by cancelling out the common factor of two:

πR / T = r / t

Then we can rearrange to make R the subject:

R = rT / πt

Putting in values for t = 3 hours 12 minutes or 3.2 hours, T = 655.2 hours and Eratosthenes’ value for the radius of the Earth r = 6371 km (which was established a few decades later):

R = (6371 x 655.2) / (π x 3.2) = 415 000 km (to 3 significant figures)
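
If you would like to check the arithmetic, here is a minimal Python sketch of the calculation above (the variable names and rounding are mine, not Aristarchus’!):

```python
import math

r = 6371       # radius of the Earth in km (Eratosthenes' value in modern units)
T = 27.3 * 24  # orbital period of the Moon in hours (655.2 hours)
t = 3.2        # duration of the total lunar eclipse in hours

# Average orbital speed (2*pi*R / T) equals the average speed through
# the shadow (2r / t), so R = r*T / (pi*t)
R = r * T / (math.pi * t)
print(f"Earth-Moon distance R = {R:.0f} km")  # roughly 415 000 km
```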

So now they do it with mirrors…

Aristarchus’ value is about 8% too large compared with the modern value of the Earth-Moon distance of 384 400 km, but it is impressive for a first approximation carried out in antiquity!

The modern value is measured in part by directing laser beams on to special reflectors left on the Moon’s surface by the Apollo astronauts and also the automated Lunokhod missions. Under ideal conditions, this method can measure the Earth-Moon distance to the nearest millimetre.

Quibbles, Caveats and Apologies

Aristarchus’ estimate was too large in part because of his assumption that Earth’s shadow was a cylinder with a uniform diameter. The Sun is an extended light source so Earth’s shadow forms a cone as shown below.

The value of t is smaller than it would be if the shadow were a uniform 2r wide, leading to a too-large value of R using Aristarchus’ method.

Also, the plane of the Moon’s orbit is tilted with respect to the plane of the Earth’s orbit. This means that the path of the Moon during an eclipse might not pass through the ‘thickest’ part of the shadow. Aristarchus used the average time t calculated from a number of lunar eclipses.

When timing the lunar eclipse shown in Mr Wojczyński’s excellent video, I started the clock when the leading edge of the Moon entered the shadow, but I confess that I ‘cheated’ a little bit by not stopping the clock when the leading edge of the Moon left the shadow — the error is entirely mine and was deliberate in order to arrive at a reasonable value of R for pedagogic impact.

UPDATE: You could also watch this stunning visualisation of a lunar eclipse from Andrew McCarthy where the shadow of the Earth is tracked rather than the Moon.


This is part 2 of a series exploring how humans ‘measured the size of the sky’.

Part 1: How Eratosthenes measured the size of the Earth

Part 3: How Aristarchus measured the distance from the Earth to the Sun

Measuring the radius of the Earth in 240 BC

The brain is wider than the sky,
For, put them side by side,
The one the other will include
With ease, and you beside.

Emily Dickinson, ‘The Brain’

Most science teachers find that ‘Space’ is one of the most enduringly fascinating topics for many students: the sense of wonder engendered as our home planet becomes lost in the empty vastness of the Solar System, which then becomes lost in the trackless star-studded immensity of the Milky Way galaxy, is a joy to behold.

But a common question asked by students is: How do we know all this? How do we know the distance to the nearest star to the Sun is 4 light-years? Or how do we know the distance to the Sun? Or the Moon?

I admit, with embarrassment, that I used to answer with a casual and unintentionally-dismissive ‘Oh well, scientists have measured them!’ which (though true) must have sounded more like a confession of faith rather than a sober recounting of empirical fact. Which, to be fair, it probably was; simply because I had not yet made the effort to find out how these measurements were first taken.

The technological resources available to our ancestors would seem primitive and rudimentary to our eyes but, coupled with the deep well of human ingenuity that I like to think is a hallmark of our species, it proved not just ‘world-beating’ but ‘universe-beating’.

I hope you enjoy this whistle-stop tour of this little-visited corner of the scientific hinterland, and choose to share some of these stories with your students. It is good to know that the brain is indeed ‘wider than the sky’.

I have presented this in a style and format suitable for sharing and discussing with KS3/KS4 students (11-16 year olds).

Mad dogs and Eratosthenes go out in the midday Sun…

To begin at the beginning: the first reliable measurement of the size of the Earth was made in 240 BC and it all began (at least in this re-telling) with the fact that Eratosthenes liked talking to tourists. (‘Err-at-oss-THen-ees’ with the ‘TH’ said as in ‘thermometer’ — never forget that students of all ages often welcome help in learning how to pronounce unfamiliar words.)

Alexandria (in present-day Egypt) was a thriving city and a tourist magnet. Eratosthenes made a point of speaking to as many visitors as he could. Their stories, taken with a pinch of salt, were an invaluable source of information about the wider world. Eratosthenes was chief librarian of the great Library of Alexandria, and considered it his duty to collect, catalogue and classify as much information as he could.

One visitor, present in Alexandria on the longest day of the year (June 21st by our calendar), mentioned something in passing to Eratosthenes that the Librarian found hard to forget: ‘You know,’ said the visitor, ‘at noon on this day, in my home town there are no shadows.’

How could that be? pondered Eratosthenes. There was only one explanation: the Sun was directly overhead at noon on that day in Syene (the tourist’s home town, now known as Aswan).

The same was not true of Alexandria. At noon, there was a small but noticeable shadow. Eratosthenes measured the angle of the shadow at midday on the longest day. It was seven degrees.

No shadows at Syene, but a 7 degree shadow at Alexandria at the exact same time. Again, there was only one explanation: Alexandria was ’tilted’ by 7 degrees with respect to Syene.

Seven degrees of separation

The sphericity of the Earth had been recognised by astronomers from c. 500 BC, so this difference was no surprise to Eratosthenes; but what he realised was that, since he was comparing the length of shadows at two places on the Earth’s surface at the same time, the 7° wasn’t just the angle of the shadow: 7° was also the angle subtended at the centre of the Earth by radial lines drawn from the two locations.

Eratosthenes paid a person to pace out the distance between Alexandria and Syene. (This was not such an odd request as it sounds to our ears: in the ancient world there were professionals called bematists who were trained to measure distances by counting their steps.)

It took the bematist nearly a month to walk that distance and it turned out to be 5000 stadia or 780 km by our measurements.

Eratosthenes then used a simple ratio method to calculate the circumference of the Earth, C:

7 / 360 = 780 / C so C = (360 / 7) x 780 = 40 114 km

Then:

r = C / 2π = 40 114 / 2π = 6 384 km

The modern value for the radius of the Earth is 6371 km.
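
If you like to check sums with a computer, here is a minimal Python sketch of Eratosthenes’ ratio method, using the rounded modern-unit values quoted above:

```python
import math

angle = 7       # shadow angle at Alexandria in degrees
distance = 780  # Alexandria-Syene distance in km (5000 stadia)

# The arc from Alexandria to Syene subtends 7 of the full 360 degrees
# at the centre of the Earth, so distance / C = angle / 360
C = distance * 360 / angle   # circumference, roughly 40 000 km
r = C / (2 * math.pi)        # radius, roughly 6 400 km
print(f"C = {C:.0f} km, r = {r:.0f} km")
```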

Ifs and buts…

There is still some debate as to the actual length of one Greek stadium, but Eratosthenes’ measurement is generally agreed to be within 1-2% of the modern value.

Sadly, no copies of the book in which Eratosthenes explained his method, On the Measure of the Earth, have survived from antiquity, so the version presented here is a simplified one outlined by Cleomedes in a later book. For further details, readers are directed to the excellent Wikipedia article on Eratosthenes.

Astronomer Carl Sagan also memorably explained this method in his 1980 TV documentary series Cosmos.

You might want to read…

This is part of a series exploring how humans ‘measured the size of the sky’:

Part 2: How Aristarchus measured the distance between the Earth and the Moon

Part 3: How Aristarchus measured the distance between the Earth and the Sun

Binding energy: the pool table analogy

Nuclear binding energy and binding energy per nucleon are difficult concepts for A-level physics students to grasp. I have found the ‘pool table analogy’ that follows helpful for students to wrap their heads around these concepts.

Background

Since mass and energy are not independent entities, their separate conservation principles are properly a single one — the principle of conservation of mass-energy. Mass can be created or destroyed, but when this happens, an equivalent amount of energy simultaneously vanishes or comes into being, and vice versa. Mass and energy are different aspects of the same thing.

Beiser 1987: 29

E = mc²

There, I’ve said it. This is the first time I have directly referred to this equation since starting this blog in 2013. I suppose I have been more concerned with the ‘andallthat‘-ness side of things rather than E=mc². Well, no more! E=mc² will form the very centre of this post. (And about time too!)

The E is for ‘rest energy’: that is to say, the energy an object of mass m has simply by virtue of being. It is half the energy that would be liberated if it met its antimatter doppelganger and the particles and antiparticles annihilated each other. A scientist in a popular novel sternly advised a person witnessing an annihilation event to ‘Shield your eyes!’ because of the flash of electromagnetic radiation that would be produced.

Well, you could if you wanted to, but it wouldn’t do much good since the radiation would be in the form of gamma rays which are to human eyes what the sound waves from a silent dog whistle are to human ears: beyond the frequency range that we can detect.

The main problem is likely to be the amount of energy released, since the conversion factor is c²: that is to say, the velocity of light squared. For perspective, it is estimated that the atomic bomb detonated over Hiroshima achieved its devastation by directly converting only 0.0007 kg of matter into energy. (That would be 0.002% of the 38.5 kg of enriched uranium in the bomb.)

Matter contains a lot of energy locked away as ‘rest energy’. But these processes which liberate rest energy are mercifully rare, aren’t they?

No, they’re not. As Arthur Beiser put it in his classic Concepts of Modern Physics:

In fact, processes in which rest energy is liberated are very familiar. It is simply that we do not usually think of them in such terms. In every chemical reaction that evolves energy, a certain amount of matter disappears, but the lost mass is so small a fraction of the total mass of the reacting substances that it is imperceptible. Hence the ‘law’ of conservation of mass in chemistry.

Beiser 1987: 29

Building a helium atom

The constituents of a helium nucleus have a greater mass when separated than they do when they’re joined together.

Here, I’ll prove it to you. Working in atomic mass units (u), two protons (2 x 1.00728 u) plus two neutrons (2 x 1.00867 u) have a combined mass of 4.0319 u; but the mass of a helium-4 atom is only 4.0026 u. The ‘missing’ mass is 0.0293 u.

The change in mass due to the loss of energy as the constituents come together is appreciable: it is a significant fraction of the original mass. Although 0.0293/4.0319 x 100% = 0.7% may not seem like a lot, it’s enough of a difference to keep the Sun shining.

The loss of energy is called the binding energy, and for a helium atom it corresponds to a release of 27 MeV (mega electron volts) or 4.4 x 10⁻¹² joules. Since there are four nucleons (particles that make up a nucleus), the binding energy per nucleon (which is a guide to the stability of the new nucleus) is some 7 MeV.
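
If you want to verify those energy figures, here is a minimal Python sketch that converts the 0.0293 u mass difference into joules and MeV via E = mc² (the constants are standard rounded values):

```python
u = 1.6605e-27   # one atomic mass unit in kg
c = 2.998e8      # speed of light in m/s
MeV = 1.602e-13  # one mega electron volt in joules

mass_defect = 0.0293 * u   # mass lost as the nucleons bind together, in kg
E = mass_defect * c**2     # binding energy from E = mc^2, in joules

print(f"Binding energy = {E:.2e} J = {E / MeV:.1f} MeV")  # ~4.4e-12 J, ~27 MeV
print(f"Binding energy per nucleon = {E / (4 * MeV):.1f} MeV")  # ~7 MeV
```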

But why must systems lose energy in order to become more stable?

The Pool Table Analogy for binding energy

Imagine four balls on a pool table as shown.

The balls have the freedom to move anywhere on the table in their ‘unbound’ configuration.

However, what if they were knocked into the corner pocket?

To enter the ‘bound’ configuration they must lose energy: in the case of the pool balls we are talking about gravitational potential energy, a matter of some 0.30 J per ball or a total energy loss of 4 x 0.30 = 1.2 joules.

The binding energy of a pool table ‘helium nucleus’ is thus some 1.2 joules while the ‘binding energy per nucleon’ is 0.30 J. In other words, we would have to supply 1.2 J of energy to the ‘helium nucleus’ to break the forces binding the particles together so they can move freely apart from each other.

Just like a real helium nucleus, the pool table system becomes more stable when some of its constituents lose energy, and less stable when they gain energy.


Reference

Beiser, A. (1987). Concepts of modern physics. McGraw-Hill Companies.

Visualising How Transformers Work

‘Transformers’ is one of the trickier topics to teach for GCSE Physics and GCSE Combined Science.

I am not going to dive into the scientific principles underlying electromagnetic induction here (although you could read this post if you wanted to), but just give a brief overview suitable for a GCSE-level understanding of:

  • The basic principle of a transformer; and
  • How step down and step up transformers work.

One of the PowerPoints I have used for teaching transformers is here. This is best viewed in presenter mode to access the animations.

The basic principle of a transformer

A GIF showing the basic principle of a transformer.
(BTW this can be copied and pasted into a presentation if you wish.)

The primary and secondary coils of a transformer are electrically isolated from each other. There is no charge flow between them.

The coils are also electrically isolated from the core that links them. The material of the core — iron — is chosen not for its electrical properties but rather for its magnetic properties. Iron is roughly 100 times more permeable (or transparent) to magnetic fields than air.

The coils of a transformer are linked, but they are linked magnetically rather than electrically. This is most noticeable when alternating current is supplied to the primary coil (green on the diagram above).

The current flowing in the primary coil sets up a magnetic field as shown by the purple lines on the diagram. Since the current is an alternating current it periodically changes size and direction 50 times per second (in the UK at least; other countries may use different frequencies). This means that the magnetic field also changes size and direction at a frequency of 50 hertz.

The magnetic field lines from the primary coil periodically intersect the secondary coil (red on the diagram). This changes the magnetic flux through the secondary coil and produces an alternating potential difference across its ends. This effect is called electromagnetic induction and was discovered by Michael Faraday in 1831.

Energy is transmitted — magnetically, not electrically — from the primary coil to the secondary coil.

As a matter of fact, a transformer core is carefully engineered so as to limit the flow of electrical current. The changing magnetic field can induce circular patterns of current flow (called eddy currents) within the material of the core. These are usually bad news as they heat up the core and make the transformer less efficient. (Eddy currents are good news, however, when they are created in the base of a saucepan on an induction hob.)

Stepping Down

One of the great things about transformers is that they can transform any alternating potential difference. For example, a step down transformer will reduce the potential difference.

A GIF showing the basic principle of a step down transformer.
(BTW this can be copied and pasted into a presentation if you wish.)

The secondary coil (red) has half the number of turns of the primary coil (green). This halves the amount of electromagnetic induction happening, which produces a reduced output voltage: you put in 10 V but get out 5 V.

And why would you want to do this? One reason might be to step down the potential difference to a safer level. The output potential difference can be adjusted by altering the ratio of secondary turns to primary turns.

One other reason might be to boost the current output: for a perfectly efficient transformer (a reasonable assumption as their efficiencies are typically 90% or better) the output power will equal the input power. We can calculate this using the familiar P=VI formula (you can call this the ‘pervy equation’ if you wish to make it more memorable for your students).

Thus Vp x Ip = Vs x Is, so if Vs is reduced then Is must be increased. This is a consequence of the Principle of Conservation of Energy.

Stepping up

A GIF showing the basic principle of a step up transformer.
(BTW this can be copied and pasted into a presentation if you wish.)

There are more turns on the secondary coil (red) than the primary (green) for a step up transformer. This means that there is an increased amount of electromagnetic induction at the secondary leading to an increased output potential difference.

Remember that the universe rarely gives us something for nothing, as a result of that damned inconvenient Principle of Conservation of Energy. Since Vp x Ip = Vs x Is, if the output Vs is increased then Is must be reduced.

If the potential difference is stepped up then the current is stepped down, and vice versa.
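
For checking answers (yours or your students’), here is a minimal Python sketch of the ideal transformer relationships; the turn numbers and input values are invented for illustration:

```python
def transformer(V_p, I_p, N_p, N_s):
    """Ideal transformer: the p.d. scales with the turns ratio, and
    (by conservation of energy) V_p * I_p = V_s * I_s."""
    V_s = V_p * N_s / N_p
    I_s = V_p * I_p / V_s
    return V_s, I_s

# Step down: half the turns on the secondary halves the p.d. and doubles the current...
print(transformer(V_p=10.0, I_p=1.0, N_p=100, N_s=50))   # (5.0, 2.0)
# ...while a step up transformer does the opposite.
print(transformer(V_p=10.0, I_p=1.0, N_p=50, N_s=100))   # (20.0, 0.5)
```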

Last nail in the coffin of the formula triangle…

Although many have tried, you cannot construct a formula triangle to help students with transformer calculations.

Now is your chance to introduce students to a far more sensible and versatile procedure like FIFA (more details on the PowerPoint linked to above).

A Gnome-inal Value for ‘g’

The Gnome Experiment Kit from precision scale manufacturers Kern and Sohn.

. . . setting storms and billows at defiance, and visiting the remotest parts of the terraqueous globe.

Samuel Johnson, The Rambler, 17 April 1750

That an object in free fall will accelerate towards the centre of our terraqueous globe at a rate of 9.81 metres per second per second is, at best, only a partial and parochial truth. It is 9.81 metres per second per second in the United Kingdom, yes; but the value of both acceleration due to free fall and the gravitational field strength vary from place to place across the globe (and in the SI System of measurement, the two quantities are numerically equal and dimensionally equivalent).

For example, according to Hirt et al. (2013) the lowest value for g on the Earth’s surface is atop Mount Huascarán in Peru, where g = 9.7639 m s⁻², and the highest is at the surface of the Arctic Ocean, where g = 9.8337 m s⁻².

Why does g vary?

There are three factors which can affect the local value of g.

Firstly, the distribution of mass within the volume of the Earth. The Earth is not of uniform density, and volumes of rock within the crust of especially high or low density could affect g at the surface. The density of the rocks comprising the Earth’s crust varies between 2.6 and 2.9 g/cm³ (according to Jones 2007). This is a variation of 10%, but the crust only comprises about 1.6% of the Earth’s mass since the density of material in the mantle and core is far higher, so the variation in g due to this factor is probably of the order of 0.2%.

Secondly, the Earth is not a perfect sphere but rather an oblate spheroid that bulges at the equator, so that the equatorial radius is 6378 km but the polar radius is 6357 km. This is a variation of 0.33%, but since the gravitational force is proportional to 1/r², let’s assume that this accounts for a possible variation of the order of 0.7% in the value of g.

Thirdly, the acceleration due to the rotation of the Earth. We will look in detail at the theory underlying this in a moment, but from our rough and ready calculations above, it would seem that this is the major factor accounting for any variation in g: that is to say, g is a minimum at the equator and a maximum at the poles because of the Earth’s rotation.


The Gnome Experiment

In 2012, precision scale manufacturers Kern and Sohn used this well-known variation in the value of g to embark on a highly successful advertising campaign they called the ‘Gnome Experiment’ (see link 1 and link 2).

Whatever units their lying LCD displays show, electronic scales don’t measure mass or even weight: they actually measure the reaction force the scales exert on the item in their top pan. The reading will be affected if the scales are accelerating.

In diagram A, the apple is not accelerating so the resultant upward force on the apple is exactly 0.981 N. The scales show a reading of 0.981/9.81 = 0.100 000 kg = 100.000 g (assuming, of course, that they are calibrated for use in the UK).

In diagram B, the apple and scales are in an elevator that is accelerating upward at 1.00 metres per second per second. The resultant upward force must therefore be larger than the downward weight as shown in the free body diagram. The scales show a reading of 1.081/9.81 = 0.110 194 kg = 110.194 g.

In diagram C, the apple and scales are in an elevator that is accelerating downwards at 1.00 metres per second per second. The resultant upward force must therefore be smaller than the downward weight as shown in the free body diagram. The scales show a reading of 0.881/9.81 = 0.089 806 kg = 89.806 g.
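
All three readings follow from Newton’s second law, as this minimal Python sketch shows (assuming, as above, a 100 g apple and scales calibrated for g = 9.81 m/s²):

```python
g = 9.81   # calibration value of the scales, m/s^2
m = 0.100  # mass of the apple, kg

for a in (0.0, +1.00, -1.00):  # acceleration of the elevator, m/s^2
    # Newton's second law: (reaction force) - (weight) = m * a
    reaction = m * g + m * a
    reading = reaction / g * 1000  # the scales convert newtons to 'grams'
    print(f"a = {a:+.2f} m/s^2  ->  reading = {reading:.3f} g")
# Prints 100.000 g, 110.194 g and 89.806 g respectively
```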


Never mind the weight, feel the acceleration

Now let’s look at the situation of the Kern gnome mentioned above. The gnome was measured to have a ‘mass’ (or ‘reaction force’ calibrated in grams, really) of 309.82 g at the South Pole.

Showing this situation on a diagram:

Looking at the free body diagram for Kern the Gnome at the equator, we see that his reaction force must be less than his weight in order to produce the required centripetal acceleration towards the centre of the Earth. Assuming the scales are calibrated for the UK, this would predict a reading on the scales of 3.029/9.81 = 0.30875 kg = 308.75 g.

The actual value recorded at the equator during the Gnome Experiment was 307.86 g, a discrepancy of 0.3% which would suggest a contribution from one or both of the first two factors affecting g as discussed at the beginning of this post.
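
Here is a minimal Python sketch of the prediction above. The sidereal day length and equatorial radius are my assumed inputs, not values quoted by Kern:

```python
import math

g = 9.81        # calibration value of the scales, m/s^2
m = 0.30982     # gnome's mass from the South Pole reading, kg
R_eq = 6.378e6  # equatorial radius of the Earth, m
T = 86164       # sidereal day (one full rotation of the Earth), s

omega = 2 * math.pi / T     # angular speed of the Earth's rotation
a_c = omega**2 * R_eq       # centripetal acceleration at the equator
reaction = m * g - m * a_c  # reaction force is less than the weight
print(f"Predicted equator reading = {reaction / g * 1000:.2f} g")  # ~308.75 g
```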

Although the work of Hirt et al. (2013) may seem the definitive scientific word on the gravitational environment close to the Earth’s surface, there is great value in taking measurements that are perhaps more directly understandable to check our comprehension: and that I think explains the emotional resonance that many felt in response to the Kern Gnome Experiment. There is a role for the ‘artificer’ as well as the ‘philosopher’ in the scientific enterprise on which humanity has embarked, but perhaps Samuel Johnson put it more eloquently:

The philosopher may very justly be delighted with the extent of his views, the artificer with the readiness of his hands; but let the one remember, that, without mechanical performances, refined speculation is an empty dream, and the other, that, without theoretical reasoning, dexterity is little more than a brute instinct.

Samuel Johnson, The Rambler, 17 April 1750

References

Hirt, C., Claessens, S., Fecher, T., Kuhn, M., Pail, R., & Rexer, M. (2013). New ultrahigh-resolution picture of Earth’s gravity field. Geophysical Research Letters, 40(16), 4279-4283.

Jones, F. (2007). Geophysics Foundations: Physical Properties: Density. University of British Columbia website, accessed on 2/5/21.


Nature Abhors A Change In Flux

Aristotle memorably said that Nature abhors a vacuum: in other words, he thought that a region of space entirely devoid of matter, including air, was logically impossible.

Aristotle turned out to be wrong in that regard, as he was in numerous others (but not quite as many as we – secure and perhaps a little complacent and arrogant as we look down our noses at him from our modern scientific perspective – often like to pretend).

An amusing version which is perhaps more consistent with our current scientific understanding was penned by D. J. Griffiths (2013) when he wrote: Nature abhors a change in flux.

Magnetic flux (represented by the Greek letter phi, Φ) is a useful quantity that takes account of both the strength of the magnetic field and its extent. It is the total ‘magnetic flow’ passing through a given area. You can also think of it as the number of magnetic field lines multiplied by the area they pass through, so a strong magnetic field confined to a small area might have the same flux (or ‘effect’) as a weaker field spread out over a large area.


Lenz’s Law

Emil Lenz formulated an earlier statement of the Nature abhors a change in flux principle when he stated what I think is one of the most consistently underrated laws of electromagnetism, at least in terms of developing students’ understanding:

The current induced in a circuit due to a change in a magnetic field is directed to oppose the change in flux and to exert a mechanical force which opposes the motion.

Lenz’s Law (1834)

This is a qualitative rather than a quantitative law, since it is about the direction, not the magnitude, of an induced current. Let’s look at its application in the familiar A-level Physics context of dropping a bar magnet through a coil of wire.


Dropping a magnet through a coil in pictures

Picture 1

In picture 1 above, the magnet is approaching the coil with a small velocity v. The magnet is too far away from the coil to produce any magnetic flux in the centre of the coil. (For more on the handy convention I have used to draw the coils and show the current flow, please click on this link.) Since there is no magnetic flux, or more to the point, no change in magnetic flux, then by Faraday’s Law of Electromagnetic Induction there is no induced current in the coil.

Picture 2

In picture 2, the magnet has accelerated to a higher velocity v due to the effect of the gravitational force. The magnet is now close enough that it produces a magnetic flux inside the coil. More to the point, there is an increase in the magnetic flux as the magnet gets closer to the coil: by Faraday’s Law, this produces an induced current in the coil (shown using the dot and cross convention).

To ascertain the direction of the current flow in the coil, we can use Lenz’s Law, which states that the current will flow in such a way as to oppose the change in flux producing it. The red circles show the magnetic field lines produced by the induced current. These are in the opposite direction to the purple field lines produced by the bar magnet (highlighted yellow on diagram 2): in effect, they are attempting to cancel out the magnetic flux which produces them!

The direction of current flow in the coil will produce a temporary north magnetic pole at the top of the coil which, of course, will attempt to repel the falling magnet; this is the ‘mechanical force which opposes the motion’ mentioned in Lenz’s Law. The upward magnetic force on the falling magnet will make it accelerate downward at a rate less than g as it approaches the coil.

Picture 3

In picture 3, the purple magnetic field lines within the volume of the coil are approximately parallel so that there will be no change of flux while the magnet is in this approximate position. In other words, the number of field lines passing through the cross-sectional area of the coil will be approximately constant. Using Faraday’s Law, there will be no flow of induced current. Since there is no change in flux to oppose, Lenz’s Law does not apply. The magnet will be accelerating downwards at g.

Picture 4

As the magnet emerges from the bottom of the coil, the magnetic flux through the coil decreases. This results in a flow of induced current as per Faraday’s Law. The direction of induced current flow will be as shown so that the red field lines are in the same direction as the purple field lines; Lenz’s Law is now working to oppose the reduction of magnetic flux through the coil!

A temporary north magnetic pole is generated by the induced current at the lower end of the coil. This will produce an upward magnetic force on the falling magnet so that it accelerates downward at a rate less than g. This, again, is the ‘mechanical force which opposes the motion’ mentioned in Lenz’s Law.


Dropping a magnet through a coil in graphical form

This would be one of my desert island graphs since it is such a powerfully concise summary of some beautiful physics.

The graph shows the reversal in the direction of the current as discussed above. Also, the maximum induced emf in region 2 (blue line) is less than that in region 4 (red line), since the magnet is moving more slowly when it enters the coil than when it leaves.

What is more, from Faraday’s Law (where ℇ is the induced emf and N is the total number of turns of the coil):

ℇ = N∆Φ/∆t so that ℇ x ∆t = N∆Φ

the blue area is equal to the red area, since the area under each pulse is ℇ x ∆t and N and ∆Φ are fixed values for a given coil and bar magnet.
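
If you would like to see the equal-areas result emerge numerically, here is a minimal Python sketch. The flux profile is entirely made up — a smooth rise on entry and a steeper fall on exit, since the magnet has sped up — but the two pulse areas still come out equal, as Faraday’s Law demands:

```python
import numpy as np

N = 100                          # number of turns on the coil (assumed)
t = np.linspace(0, 1.0, 10_000)  # time, s

# Made-up flux profile: zero far away, a plateau while the magnet is
# inside the coil, with a faster exit transition than entry
phi = 1e-3 * (np.tanh((t - 0.3) / 0.05) - np.tanh((t - 0.7) / 0.02)) / 2

emf = -N * np.gradient(phi, t)  # Faraday's law

dt = t[1] - t[0]
entry_area = abs(np.sum(emf[emf < 0]) * dt)  # blue pulse
exit_area = abs(np.sum(emf[emf > 0]) * dt)   # red pulse
print(entry_area, exit_area)  # both roughly N * delta-phi = 0.1 V s
```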

As I said previously, there is so much fascinating physics in this graph that I think it worth exploring in depth with your A level Physics students 🙂

Other news

If you have enjoyed this post, then you may be interested to know that I’ve written a book! Cracking Key Concepts in Secondary Science (co-authored with Adam Boxer and Heena Dave) is due to be published by Corwin in July 2021.

References

Lenz, E. (1834). ‘Ueber die Bestimmung der Richtung der durch elektodynamische Vertheilung erregten galvanischen Ströme’. Annalen der Physik und Chemie, 107(31), pp. 483–494.

Griffiths, David (2013). Introduction to Electrodynamics. p. 315.

The Coulomb Train Model Revisited (Part 5)

In this post, we are going to look at series circuits using the Coulomb Train Model.

The Coulomb Train Model (CTM) is a helpful model for both explaining and predicting the behaviour of real electric circuits which I think is useful for KS3 and KS4 students.

Without further ado, here is a summary.


A circuit with one resistor

Let’s look at a very simple circuit to begin with:

This can be represented on the CTM like this:

The ammeter counts 5 coulombs passing every 10 seconds, so the current I = charge flow Q / time t = 5 coulombs / 10 seconds = 0.5 amperes.

We assume that the cell has a potential difference of 1.5 V so there is a potential difference of 1.5 V across the resistor R1 (that is to say, each coulomb loses 1.5 J of energy as it passes through R1).

The resistance R1 = potential difference V / current I = 1.5 / 0.5 = 3.0 ohms.


A circuit with two resistors in series

Now let’s add a second identical resistor R2 into the circuit.

This can be shown using the CTM like this:

Notice that the current in this example is smaller than in the first circuit; that is to say, fewer coulombs go through the ammeter in the same time. This is because we have added a second resistor, and remember that resistance is a property that reduces the current. (Try to avoid talking about a high resistance ‘slowing down’ the current, because in many instances, such as two conductors in parallel, a higher current can be modelled with no change in the speed of the coulombs.)

Notice also that the voltmeter is making identical measurements on both the circuit diagram and the CTM animation. It is measuring the total energy change of the coulombs as they pass through both R1 and R2.

The current I = charge flow Q / time t = 5 coulombs / 20 seconds = 0.25 amps. This is half the value of the current in the first circuit.

Since we have an identical cell of potential difference 1.5 V, the voltmeter would measure 1.5 V. We can calculate the total resistance using R = V / I = 1.5 / 0.25 = 6.0 ohms.

This is to be expected since the total resistance R = R1 + R2 and R1 = 3.0 ohms and R2 = 3.0 ohms.


Looking at the resistors individually

The above circuit can be represented using the CTM as follows:

Between A and B, the coulombs are each gaining 1.5 joules since the cell has a potential difference of 1.5 V. (Remember that V = energy transferred E (in joules) / charge flow Q (in coulombs).)

Between B and C the coulombs lose no energy; that is to say, we are assuming that the connecting wires have negligible resistance.

Between C and D the coulombs lose some energy. We can use the familiar V = I x R to calculate how much energy is lost from each coulomb, since we know that R1 is 3.0 ohms and I is 0.25 amperes (see previous section).

V = I x R = 0.25 x 3.0 = 0.75 volts.

That is to say, 0.75 joules are removed from each coulomb as they pass through R1, which means (since 1.5 joules were added to each coulomb by the cell) that 0.75 joules are left in each coulomb.

The coulombs do not lose any energy travelling between D and E because, again, we are assuming negligible resistance in the connecting wire.

0.75 joules are removed from each coulomb between E and F, making the potential difference across R2 0.75 volts.

Thus we find that the familiar V = V1 + V2 is a direct consequence of the Principle of Conservation of Energy.
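
The whole series circuit analysis condenses into a few lines of Python; this is a minimal sketch (function name my own) rather than anything students would need:

```python
def series_circuit(V_cell, resistors):
    """Current from the 'global' circuit properties; p.d.s from the 'local' ones."""
    R_total = sum(resistors)               # R = R1 + R2 + ...
    I = V_cell / R_total                   # the same current everywhere in series
    return I, [I * R for R in resistors]   # V = I * R across each resistor

print(series_circuit(1.5, [3.0, 3.0]))  # I = 0.25 A; 0.75 V across each resistor
print(series_circuit(1.5, [4.0, 2.0]))  # I = 0.25 A; drops of 1.0 V and 0.5 V
```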


FAQ: ‘How do the coulombs know to drop off only half their energy in R1?’

Simple answer: they don’t.

This may be a valid objection for some donation models of electric circuits (such as the pizza delivery van model) but it doesn’t apply to the CTM because it is a continuous chain model (with the caveat that the CTM applies only to ‘steady state’ circuits where the current is constant).

Let’s look at a numerical argument to support this:

  • The magnitude of the current is controlled by only two factors: the potential difference of the cell and the total resistance of the circuit.
  • In other words, if we increased the value of R1 to (say) 4 ohms and reduced the value of R2 to 2 ohms so that the total resistance was still 6 ohms, the current would still be 0.25 amps.
  • However, in this case the energy dissipated by each coulomb passing through R1 would be V = I x R = 0.25 x 4 = 1 volt (or 1 joule per coulomb) and similarly the potential difference across R2 would now be 0.5 volts.
  • The coulombs do not ‘know’ to drop off 1 joule at R1 and 0.5 joules at R2: rather, it is a purely mechanical interaction between the moving coulombs and each resistor.
  • R1 has a bigger proportion of the total resistance of the circuit than R2 so it seems self-evident (at least to me) that the coulombs will lose a larger proportion of their total energy passing through R1.
  • A similar analysis would apply if we made R2 = 4 ohms and R1 = 2 ohms: the coulombs would now lose 0.5 joules passing through R1 and 1 joule passing through R2.

Thus, we see that the current in a series circuit is affected by the ‘global’ or ‘whole circuit’ properties such as the potential difference of the cell and the total resistance of the circuit. The CTM models this property of real circuits by being a continuous chain of mechanically-linked ‘trucks’ so that a change in any one part of the circuit affects the movement of all the coulombs.

However, the proportion of the energy lost by a coulomb travelling through one part of the circuit is affected — not by ‘magic’ or a weird form of ‘coulomb telepathy’ — but only by the ‘local’ properties of that section of the circuit i.e. the electrical resistance of that section.

The CTM analogue of a low resistance section of a circuit (top) and a high resistance section of a circuit (bottom)

(PS You can read more about the CTM and potential divider circuits here.)


Afterword

You may be relieved to hear that this is the last post in my series on ‘The CTM revisited’. My thanks to the readers who have stayed with me through the series (!)

I will close by saying that I have appreciated both the expressions of enthusiasm about CTM and the thoughtful criticisms of it.

Mnemonics for the S.I. Prefixes

The S.I. System of Weights and Measures may be a bit of a dog’s dinner, but at least it’s a dog’s dinner prepped, cooked, served and — more to the point — eaten by scientists.

A brief history of the Système international d’unités

It all began with the mètre (‘measure’), of course. This was first proposed as a universal measure of distance by the post-Revolutionary French Academy of Sciences in 1791. According to legend (well, not legend precisely — think of it as random speculative gossip, if you prefer), they first proposed that the metre should be one millionth of the distance from the North Pole to the equator.

When that turned out to be a little on the large side, they reputedly shrugged in that inimitable Gallic fashion and said: “D’accord, faisons un dix millionième alors, mais c’est ma dernière offre.” (“OK, let’s make it one ten millionth then, but that’s my final offer.”)

Since then, what measurement-barbarians loosely (and egregiously incorrectly) call “the metric system” has been through many iterations and revisions to become the S.I. System. Its full name is the Système international d’unités which pays due honour to France’s pivotal role in developing and sustaining it.

When some of those same measurement-barbarians call for a return to the good old ‘pragmatic’ British Imperial System of inches and poundals, I urge all fair-minded people to tell them, as kindly as possible, that they can’t: not now, not ever.

Since 1930, the inch has been defined as 25.4 millimetres. (It was, so I believe, the accuracy and precision needed to design and build jet engines that led to the redefinition. The older definitions of the inch simply weren’t precise enough.)

You simply cannot replace the S.I. system. You can, however, dress it up a little bit and call a distance of 25.4 millimetres ‘one inch’ if you really want to — but, in the end, what would be the point of that?

The Power of Three (well, ten to the third power, anyways)

For human convenience, the S.I. system includes prefixes. So a large distance might be measured in kilometres, where the prefix kilo- indicates multiplying by a factor of 1000 (or 10 raised to the third power). The distance between the Globe Theatre in London and Slough Station is 38.6 km. A longer distance, such as that between London and New York, NY, would be 5.6 megametres (or 5.6 Mm — note the capital ‘M’ for mega [one million] to avoid confusion with the prefix milli-).

The S.I. System has prefixes for all occasions, as shown below.

The ‘big’ SI prefixes.
Note that every one of them, except for kilo, is represented by a capital letter.

Note also that one should convert all prefixes into standard units for calculations e.g. meganewtons should be converted to newtons. The sole exception is kilograms because the base unit is the kilogram not the gram, so a megagram should be converted into kilograms, not grams. I trust that’s clear. (Did I mention the “dog’s dinner” part yet?)

For perspective, the distance between Earth and the nearest star outside our Solar System is 40 petametres, and current age of the universe is estimated to be 0.4 exaseconds (give or take a petasecond or two).
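
If you fancy letting a computer do the prefix-juggling, a minimal Python lookup table might look like this (the function is my own invention, not part of any standard library):

```python
prefixes = {
    'k': 1e3,  'M': 1e6,  'G': 1e9,  'T': 1e12, 'P': 1e15,
    'E': 1e18, 'Z': 1e21, 'Y': 1e24,
    'm': 1e-3, 'µ': 1e-6, 'n': 1e-9, 'p': 1e-12,
    'f': 1e-15, 'a': 1e-18, 'z': 1e-21, 'y': 1e-24,
}

def to_base_units(value, prefix):
    """Convert a prefixed value to base units, e.g. 40 Pm to metres."""
    return value * prefixes[prefix]

print(to_base_units(40, 'P'))   # 4e+16 m, distance to the nearest star
print(to_base_units(0.4, 'E'))  # 4e+17 s, approximate age of the universe
```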

A useful mnemonic for remembering these is Karl Marx Gives The Proletariat Eleven Zeppelins (and one can imagine the proletariat expressing their gratitude by chanting in chorus: “Yo! Ta, Mr Marx!” as they march bravely forward.)

Karl Marx Gives The Proletariat Eleven Zeppelins (“Yo! Ta, Mr. Marx!)

But what about the little prefixes?

Milli- we have already covered above. The diameter of one of your red blood cells is 8 micrometres, and the time it takes light to travel a distance equal to the diameter of a hydrogen atom is 300 zeptoseconds.

Again, there is an SI prefix for every occasion:

The ‘little’ SI prefixes.
(Handily, all of them are represented by lower case letters — including micro, which is shown as the lower case Greek letter ‘mu’, µ.)

A useful mnemonic would be: Millie’s Microphone Needs a Platform For Auditioning Zebra Yodellers.

For the record, GCSE Physics students are expected to know the SI prefixes between giga- and pico-, but if you’re in for a pico- then you’re in for anything between a yotta- and a yocto- in my opinion (if you catch my drift).

Very, very, very small to very, very, very big

The mean lifetime of a Z-boson (the particle that carries the Weak force) is 0.26 yoctoseconds.

According to our current understanding of physics, the stars will have stopped shining and all the galaxies will have dissipated into dissociated ions some 315 yottaseconds from now.

Apart from that, happy holidays everyone!

The Coulomb Train Model Revisited (Part 4)

In this post, we will look at parallel circuits.

The Coulomb Train Model (CTM) is a helpful model for both explaining and predicting the behaviour of real electric circuits which I think is useful for KS3 and KS4 students.

Without further ado, here is a summary.

This is part 4 of a continuing series. (Click to read Part 1, Part 2 or Part 3.)


The ‘Parallel First’ Heresy

I advocate teaching parallel circuits before teaching series circuits. This, I must confess, sometimes makes me feel like Captain Rum from Blackadder Two:

The main reason for this is that parallel circuits are conceptually easier to analyse than series circuits, because you can do so using a relatively naive notion of ‘flow’; it also gives students an opportunity to explore and apply the recently-introduced concept of ‘flow of charge’ in a straightforward context.

Redish and Kuo (2015: 584) argue that ‘flow’ is an example of embodied cognition in the sense that its meaning is grounded in physical experience:

The thesis of embodied cognition states that ultimately our conceptual system is grounded in our interaction with the physical world: How we construe even highly abstract meaning is constrained by and is often derived from our very concrete experiences in the physical world.

Redish and Kuo (2015: 569)

As an aside, I would mention that Redish and Kuo (2015) is an enduringly fascinating paper with a wealth of insights for any teacher of physics and I would strongly recommend that everyone reads it (see link in the Reference section).


Let’s Go Parallel First — but not yet

Let’s start with a very simple circuit.

This is not a parallel circuit (yet) because switch S is open. Resistors R1 and R2 are identical.

This can be represented on the coulomb train model like this:

Five coulombs pass through the ammeter in 20 seconds so the current I = Q/t = 5/20 = 0.25 amperes.

Let’s assume we have a 1.5 V cell so 1.5 joules of energy are added to each coulomb as they pass through the cell. Let’s also assume that we have negligible resistance in the cell and the connecting wires so 1.5 joules of energy will be removed from each coulomb as they pass through the resistor. The voltmeter as shown will read 1.5 volts.

The resistance of the resistor R1 is R=V/I = 1.5/0.25 = 6.0 ohms.


Let’s Go Parallel First — for real this time.

Now let’s close switch S.

This is an example of changing an example by continuous conversion, which removes the need for multiple ammeters in the circuit. The changed circuit can be represented on the CTM as shown:

Now, ten coulombs pass through the ammeter in twenty seconds so I = Q/t = 10/20 = 0.5 amperes (double the reading in the first circuit shown).

Questioning may be useful at this point to reinforce the ‘flow’ paradigm that we hope students will be using:

  • What will be the reading if the ammeter moved to a similar position on the other side? (0.5 amps since current is not ‘used up’.)
  • What would be the reading if the ammeter was placed just before resistor R1? (0.25 amps since only half the current goes through R1.)

To calculate the total resistance of the whole circuit we use R = V/I = 1.5/0.5 = 3.0 ohms — which is half of the value of the circuit with just R1. Adding resistors in parallel has the surprising result of reducing the total resistance of the circuit.

This is a concrete example which helps students understand the concept of resistance as a property which reduces current: the current is larger when a second resistor is added so the total resistance must be smaller. Students often struggle with the idea of inverse relationships (i.e. as x increases y decreases and vice versa) so this is a point well worth emphasising.


Potential Difference and Parallel Circuits (1)

Let’s expand on the primitive ‘flow’ model we have been using until now and adapt the circuit a little bit.

This can be represented on the CTM like this:

Each coulomb passing through R2 loses 1.5 joules of energy so the voltmeter would read 1.5 volts.

One other point worth making is that the resistance of R2 (and R1) individually is still R = V/I = 1.5/0.25 = 6.0 ohms: it is only the combined effect of R1 and R2 together in parallel that reduces the total resistance of the circuit.
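
For completeness, here is a minimal Python sketch of the parallel resistance arithmetic (function name my own):

```python
def parallel_resistance(resistors):
    """Total resistance of parallel resistors: 1/R = 1/R1 + 1/R2 + ..."""
    return 1 / sum(1 / R for R in resistors)

R_total = parallel_resistance([6.0, 6.0])
print(R_total)        # 3.0 ohms: half the resistance of a single 6.0 ohm resistor
print(1.5 / R_total)  # 0.5 A drawn from the cell, double the original current
```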


Potential Difference and Parallel Circuits (2)

Let’s have one last look at a different aspect of this circuit.

This can be represented on the CTM like this:

Each coulomb passing through the cell from X to Y gains 1.5 joules of energy, so the voltmeter would read 1.5 volts.

However, since we have twice the number of coulombs passing through the cell as when switch S is open, then the cell has to load twice as many coulombs with 1.5 joules in the same time.

This means that, although the potential difference is still 1.5 volts, the cell is working twice as hard.

The result of this is that the cell’s chemical energy store will be depleted more quickly when switch S is closed: parallel circuits will make cells go ‘flat’ in a much shorter time compared with a similar series circuit.

Bulbs in parallel may shine brighter (at least in terms of total brightness rather than individual brightness) but they won’t burn for as long.

To some ways of thinking, a parallel circuit with two bulbs is very much like burning a candle at both ends…


More fun and high jinks with the coulomb train model in the next instalment, when we will look at series circuits.

You can read part 5 here.


Reference

Redish, E. F., & Kuo, E. (2015). Language of physics, language of math: Disciplinary culture and dynamic epistemology. Science & Education, 24(5), 561-590.

FIFA and Really Challenging GCSE Physics Calculations

‘FIFA’ in this context has nothing to do with football; rather, it is a mnemonic that helps KS3 and KS4 students from across the attainment range engage productively with calculation questions.

FIFA stands for:

  • Formula
  • Insert values
  • Fine-tune
  • Answer

From personal experience, I can say that FIFA has worked to boost physics outcomes in the schools I have worked in. What is especially gratifying, however, is that a number of fellow teaching professionals have been kind enough to share their experience of using it:


Framing FIFA as a modular approach

Straightforward calculation questions (typically 2 or 3 marks) can be ‘unlocked’ using the original FIFA approach. More challenging questions (typically 4 or 5 marks) can often be handled using the FIFA-one-two approach.

However, what about the most challenging 5 or 6 mark questions that are targeted at Grade 8/9? Can FIFA help in solving these?

I believe it can. But before we dive into that, let’s look at a more traditional, non-FIFA, algebraic approach.


A challenging freezing question: the traditional (non-FIFA) algebraic approach

Note: this is a ‘made up’ question written in the style of the GCSE exam.

A pdf of this question is here. A traditional algebraic approach to solving this problem would look like this:

This approach would be fine for confident students with high previous attainment in physics and mathematics. I will go further and say that it should be positively encouraged for students who possess — in Edward Gibbon’s words — that ‘happy disposition’:

But the power of instruction is seldom of much efficacy, except in those happy dispositions where it is almost superfluous.

Edward Gibbon, The Decline and Fall of the Roman Empire

But what about those students who are more akin to the rest of us, and for whom the ‘power of instruction’ is not a superfluity but rather a necessity on which they depend?


A challenging freezing question: the FIFA-1-2-3 approach

Since this question involves both cooling and freezing it seems reasonable to start with the specific heat capacity formula and then use the specific latent heat formula:

FIFA-one-two isn’t enough. We must resort to FIFA-1-2-3.

What is noteworthy here is that the third FIFA formula isn’t on the formula sheet and is not on the list of formulas that need to be memorised. Instead, it is made by the student based on their understanding of physics and a close reading of the question.

Challenging? Yes, undoubtedly. But students will have unlocked some marks (up to 4 out of 6 by my estimation).
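
By way of illustration, here is a minimal Python sketch of a FIFA-1-2-3 chain for a cooling-then-freezing question. All the values are invented for the example (the question above isn’t reproduced here), but the three-formula structure is the point:

```python
m = 0.50          # mass of water, kg (invented value)
c = 4200          # specific heat capacity of water, J/(kg K)
delta_theta = 20  # temperature drop to the freezing point, K (invented value)
L = 334_000       # specific latent heat of fusion of water, J/kg
P = 50            # power extracted by the freezer, W (invented value)

E1 = m * c * delta_theta  # Formula 1: energy released on cooling to 0 degrees C
E2 = m * L                # Formula 2: energy released on freezing
t = (E1 + E2) / P         # Formula 3: 'made' by the student from P = E / t
print(f"E1 = {E1:.0f} J, E2 = {E2:.0f} J, t = {t:.0f} s")
```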

FIFA isn’t a royal road to mathematical mastery (although it certainly is a better bet than the dreaded ‘formula triangle’ that I and many others have used in the past). FIFA is the scaffolding, not the finished product.

Genuine scientific understanding is the clock tower; FIFA is simply some temporary scaffolding that helps students get there.

We complete the FIFA-1-2-3 process as follows:


Conclusion: FIFA fixes it

The FIFA-system was born of the despair engendered when you mark a set of mock exam papers and the majority of pages are blank: students had not even attempted the calculation questions.

In my experience, FIFA fixes that — students are much more willing to start a calculation question. And that means that, even when they cannot successfully navigate to a ‘full mark’ conclusion, they gain at least some marks, and one does not have to be a particularly perceptive scholar of the human heart to understand that gaining ‘some marks’ is more motivating than ‘no marks’.