Welcome to Memory Mondays, where I read a textbook on memory and talk about what I learned. If you like your cognitive psychology neatly summarized, with a healthy dose of unnecessary commentary and an excessive number of semicolons, this is the series for you!
For those who have read the work of Daniel Willingham, follow the Learning Scientists, and are active in certain corners of edu-twitter, this chapter’s biggest takeaways will be old news. And yet, revisiting them is harmless, and the chapter offers much else to pique the interest of teachers already familiar with some cognitive psychology.
So let’s get started.
Time on Task
Let’s start with a factor in learning that is patently obvious to any teacher: time. Mainly, that there is not enough of it, and this matters because students could learn more if we had more of it. In this context, the total time hypothesis does not strike one as particularly revolutionary. Basically, it proposes that the amount learned is a function of the time spent learning. This is supported by Ericsson’s studies of experts and the amount of time they have spent practicing to reach that level of expertise.
However, it is not that simple. More practice will eventually lead to a plateau. Deliberate practice, where areas of weakness are targeted for improvement, is more effective and better supports continuous improvement. Ultimately, given the limited time we have with our students, we want value for the time spent. What practices can we use to ensure students are maximizing their learning in the time we have?
Retrieval Practice and Distributed Practice
Ah yes, the two big ones. Retrieval practice (a.k.a. practice testing) and distributed practice (spreading practice out over time) are, in my mind, the two best of the six strategies for effective learning from the Learning Scientists, based on their effectiveness and ease of implementation. A monograph on study strategies backs this up; they work, and require little training to use.
I can vouch for these strategies from personal experience. I have memorized a few poems, as a strategy for dealing with anxiety (having anxious thoughts? replace with poem), and for the delight of having some of the greatest verses of the English language floating around in my head. I know there are techniques of visual imagery and mind palaces I can use, but I prefer straightforward, brute-force memorization. It is through a mix of distributed and retrieval practice (aided by flashcards) that I memorized “The Cremation of Sam McGee,” by Robert Service, a 60-line poem I first heard as a child, read to me from a picture book illustrated by Ted Harrison. The image (spoiler alert) of McGee gazing out of the crematorium stuck with me over the years; there is probably something I’ll learn about memory later that’ll explain that.
So, what I am trying to say is, distributed practice and retrieval practice work, backed up by research studies and my own experience.
Combining the two practices is called expanding retrieval. Items are tested after a short delay, and this delay increases with each practice. The optimum interval is 10–20% of the test delay; so, “for testing after 10 days, there should be a delay of 1 or 2 days between trials, whereas for a 6-month test delay, a 20-day interval between learning trials is best” (pg. 152). I’m not quite organized enough as a teacher to use the optimum delay; my bellringer review tends to be a bit scattered, but the goal is to one day have my reviews as well thought out as the initial learning experiences.
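The 10–20% rule is simple enough to turn into arithmetic. Here is a minimal sketch; the function name and the idea of returning the range as a pair are my own framing, not anything from the text:

```python
def spacing_interval(test_delay_days):
    """Return the (low, high) range, in days, for the gap between
    practice trials: 10% to 20% of the delay until the final test."""
    return (test_delay_days / 10, test_delay_days / 5)

# The chapter's own numbers:
# testing in 10 days -> a 1- to 2-day gap between trials
print(spacing_interval(10))   # (1.0, 2.0)
# testing in ~6 months (180 days) -> the recommended 20-day gap
# falls inside this range
print(spacing_interval(180))  # (18.0, 36.0)
```

Nothing profound, but it makes the scaling visible: the further away the test, the more spread out the practice should be.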
In this expanding retrieval, feedback is important too, so incorrect information is not being reinforced. It’s okay if it is somewhat delayed, so don’t stress about getting those quizzes back super quick. Though soon would be good.
Pairing repetition with retrieval practice is important, as mere repeated exposure is not enough. The penny experiment is an example of this: though Americans see the coin every day, they cannot accurately identify what it looks like when given a choice of different penny designs. Just because we see something all the time doesn’t mean we know it.
One main appeal of these two strategies is the potential for classroom application. See this post from Blake Harvard on how he did it.
Sleep and the Consolidation of Learning
Sleep is also very important to learning, but unlike implementing retrieval practice and distributed practice, the amount of sleep our students get is not within our control. So why is this important information? Maybe explaining the importance to students will help encourage them to get the shut-eye they need.
Sleep is central for the consolidation of memory, where memories become more firmly established in the brain. Consolidation is divided into two processes. Synaptic consolidation happens over a shorter time span, about 24 hours. Here, I turn to Cognitive Psychology, 4th ed., by Goldstein, to supplement the explanation in the memory text.
Let’s start with some basic brain biology. Our brain is made up of cells called neurons; the gaps between those cells are called synapses; these cells communicate using electrical signals called potentials. I’ve cut out a lot of detail there, but that’s the gist. Synaptic consolidation happens at this cellular level, through long-term potentiation and the formation of cell assemblies.
Long-term potentiation is a neuropsychological mechanism where “repeated electrical stimulation of an axonal pathway led to a long-term increase in the size of the potentials generated by the neurons beyond the synapse” (pg. 174 in Memory). This can lead to the creation of cell assemblies, which were proposed by Hebb in 1949; “neurons that fire together wire together” (pg. 173 in Memory), in groups which form the basis of long-term learning. Short-term memory is based on activation of existing cell assemblies.
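Hebb’s “fire together, wire together” idea can be written as a tiny update rule: the strength of a connection grows in proportion to the product of the two neurons’ activity. A toy sketch, where the learning rate and activity values are purely illustrative and not from either text:

```python
def hebbian_update(w, pre, post, lr=0.1):
    """'Neurons that fire together wire together': strengthen the
    connection weight w in proportion to the product of the
    presynaptic (pre) and postsynaptic (post) activity."""
    return w + lr * pre * post

# Repeated co-activation strengthens the synapse...
w = 0.0
for _ in range(5):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w)  # 0.5

# ...while activity on only one side leaves it unchanged.
print(hebbian_update(0.5, pre=1.0, post=0.0))  # 0.5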
Sleep plays a role in building these connections and assemblies. There is evidence from studies on rats, birds and humans that patterns of activation from the daytime are replicated during sleep, reinforcing those connections.
These processes are selective, and the brain favours more important memories. How does it decide what is important? Memories with a strong emotional association will be favoured, or information we need again and again, such as our phone number. As teachers, we can revisit material and make use of expanding retrieval, to help the brain favour lesson content during sleep consolidation.
The second process of consolidation is systems consolidation. It operates on a different time frame from synaptic consolidation, over months or even years, and on a different level, not changing synapses but reorganizing neural circuits. When information first comes into the brain, the hippocampus (the part of the brain important for memory formation) coordinates activation of different areas of the cortex (the thin layer of tissue that covers the brain). When those same areas are reactivated later, in remembering, they become directly connected to one another and the role of the hippocampus is reduced.
This decreased response of the hippocampus is important in the semanticization of memory, the change of memories from episodic to semantic, from life memories to knowledge. Say students have a lesson on the Shang dynasty. You split them into groups, and each group learns about one aspect of the Shang; then you shuffle the groups and have them share what they learned, a standard jigsaw format. A week later, you bring up the Shang again, and face a room of blank faces. “Used oracle bones to predict the future?” Nothing. “Artefacts were discovered at the city of Anyang?” Nothing. “You worked in a group, and then walked around the room, sharing what you learned.” Some recognition and a chorus of “oh…” fills the room. They remember.
I have referred to lesson activities to help students remember lesson content before, but ultimately we want the memory of what was learned to be separate from how it was learned, for them to recall meaning and not just activities. This is what makes systems consolidation important.
This part I’m going out on a limb a bit for. Systems consolidation develops connections between different areas of the cortex, whereas previously they would not be connected and coordination would happen through the hippocampus. I think this would help make knowledge more flexible, as it networks with other memories that are also linked in those areas of the cortex, and makes schema richer. I don’t really have anything to back this up, though, other than that it makes sense.
Considering the time frame of systems consolidation, and its value, I believe it supports revisiting old content from even months before. Reactivate old memories and help build those cortical connections.
Implicit learning is interesting; we can learn things without being conscious that we have learned them, and the evidence comes from a change in behaviour. While a range of examples fall under this category, what they have in common is a lack of involvement of episodic memory. Beyond that, they use different systems and different areas of the brain, so implicit learning is not a coherent group or a system of its own, but just a way to categorize different phenomena.
One example of implicit learning is procedural, and there is interesting research done on this with amnesia patients. Procedural learning is that of a skill, one we often use without being able to explain how we do it, like catching a ball or riding a bike. Goldstein in Cognitive Psychology describes a study where a patient with amnesia practiced mirror drawing (copying a picture seen in a mirror). Even though the patient could not remember practicing, he got better at the skill over time.
Learning grammar is also an example of implicit learning, especially of our native languages. We follow complex rules without being able to articulate what those rules are. The example of adjective order comes to mind (well, specifically, my friend Kate’s mind). English speakers, when putting more than one adjective in front of a noun, always use a specific order, but until I read an article on it, I did not know such a rule existed.
We can also see implicit learning in the learning of artificial grammars, where certain sequences of letters are permitted and others are not (e.g. TTVRX). People can learn these grammars, and identify correct and incorrect sequences, without being able to explain the rules they were following. However, the question of whether we can learn a foreign language through immersion, and its grammar implicitly, is still debated.
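An artificial grammar of this kind is just a small finite-state machine: each letter moves you from one state to another, and a string is “grammatical” if every move is legal and you end in an accepting state. Here is a miniature example in that spirit; the particular states and letters are made up for illustration and are not the grammar from the actual studies, though I’ve chosen them so the chapter’s sample string TTVRX comes out as permitted:

```python
def is_grammatical(s, transitions, start=0, accept=frozenset({3})):
    """Check whether string s can be produced by a finite-state
    grammar: follow one transition per letter, and finish in an
    accepting state."""
    state = start
    for letter in s:
        if letter not in transitions.get(state, {}):
            return False  # no legal move for this letter here
        state = transitions[state][letter]
    return state in accept

# Toy grammar: 'T' can repeat at the start, then 'V', 'R', and a
# final 'X' are required to reach the accepting state 3.
GRAMMAR = {
    0: {"T": 0, "V": 1},
    1: {"R": 2},
    2: {"X": 3},
}
print(is_grammatical("TTVRX", GRAMMAR))  # True
print(is_grammatical("TVXR", GRAMMAR))   # False
```

Participants in these experiments end up classifying strings about as reliably as this checker does, without ever being able to state the transition rules, which is exactly what makes the learning “implicit.”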
This chapter on learning was an eclectic mix of topics, but all with a common point: learning. Humans need learning to survive. Some other species come into the world pre-programmed with what they need to know; an ant knows what to do without being told. Humans need to learn, and as long as that continues to be the case, this complex topic will remain essential to understand.
Next week: episodic learning