
An Evidence-Based Model of Memory to Help You Learn Better

One of the (many) unfortunate aspects of our K-12 public education system is that it is very unlikely that a student will graduate with any coherent understanding of, or conceptual knowledge about, how his or her mind works—i.e., how it is basically structured and functions, the mechanisms and processes it uses to regulate itself, etc.

 

Yes, I know, Common Core is supposed to make students “college and career ready.” And, I know, it would seem obvious to any outside observer with a modicum of intelligence that if this were indeed the educational system’s overarching desideratum, it would do more to produce learners who are knowledgeable about the processes involved in learning. But far too often it leaves students to fend for themselves in figuring out how to independently self-regulate their learning, how to efficiently and effectively study for different kinds of exams, and how to metacognitively plan, monitor, and evaluate their writing when composing essays. This is doubly unfortunate given that our increasingly information-saturated workplaces will continue to require new employees to be adroit self-regulated learners.

 

Don’t necessarily blame the teachers, though. They’re hardly trained in the areas of educational, developmental, and cognitive psychological science, either. (But feel free to blame the university schools of education! Those are the true champions of pedagogical irrationality and curricular degeneracy.)

 

This article is not meant to be a declamatory diatribe against the public education system. Rather, it is intended to introduce a basic heuristic framework for understanding the mnemonic, or memory-related, consequences of different ways of processing information. This model was first introduced in a now-famous article by Fergus Craik and Robert Lockhart, published in 1972 and entitled “Levels of Processing: A Framework for Memory Research.” (Feel free to reach out if you would like a copy! It’s relatively readable and only minimally technical.)

 

In this article, Craik and Lockhart do something that many creative thinkers do when the internal contradictions of a theory put it into a conceptual crisis: they reinterpret and reconceptualize the empirical data that supported the old, problematic theory, and in doing so they create a novel way of thinking about the phenomena that the old theory was supposed to conceptually represent and causally explain, but simply couldn’t.

 

Craik and Lockhart were arguing against what has come to be known as the modal model of memory, or the store-to-store transfer approach to understanding memory. This model is still taught in most educational and cognitive psychology courses, although the contemporary version is much more differentiated and sophisticated than the simplified version I provide below.

 

According to this modal model of memory, there are three fundamental memory stores or registers: (1) Sensory memory, which is pre-attentive and takes in a great deal of information, all of which is lost from conscious awareness within a couple of seconds unless it is attended to and rehearsed. (2) Short-term memory, which can hold roughly five to nine items at any given time, accepts a variety of codes (e.g., phonemic, acoustic, visual, semantic), and retains its contents longer than sensory memory does. (3) Finally, long-term memory, which is hypothesized to be limitless in capacity, coded primarily in semantic form, and very durable (some researchers argue that we never truly forget information in this store; rather, we merely cannot access it because we lack the correct retrieval cues).
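If it helps to see the model laid out schematically, here is a minimal sketch of the three stores as a plain data structure (Python, purely illustrative; the capacity and duration figures are the rough textbook approximations just described, not precise empirical constants):

```python
from dataclasses import dataclass

@dataclass
class MemoryStore:
    name: str
    capacity: str       # rough, textbook-level approximation
    duration: str       # how long unrehearsed content tends to persist
    dominant_code: str  # the format the store is thought to prefer

# The three stores of the simplified modal model described above.
MODAL_MODEL = [
    MemoryStore("sensory memory", "very large", "a couple of seconds", "raw perceptual"),
    MemoryStore("short-term memory", "about 7 +/- 2 items", "seconds to minutes (longer with rehearsal)", "phonemic/acoustic, visual, semantic"),
    MemoryStore("long-term memory", "effectively unlimited", "very durable, possibly permanent", "primarily semantic"),
]

for store in MODAL_MODEL:
    print(f"{store.name}: holds {store.capacity} for {store.duration} ({store.dominant_code} code)")
```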

 

Although this model, based as it is on a computer analogy (and we humans love to represent ourselves in 0s and 1s!), is intuitively plausible, it has many substantive problems. For example, researchers like George Miller claimed that we can hold around seven items, give or take two, in our short-term store. However, cognitive load theorists have discovered that this number can be much lower when we are processing novel information, not to mention that items can be “chunked” and processed more efficiently, thereby blurring the hypothetical boundary separating short-term from long-term memory. Additionally, researchers have hypothesized something called long-term working memory to explain how skilled memory works for those with expertise in particular domains, such as world-class chess players. This complicates the model even further.
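To make chunking concrete, here is a quick toy demonstration (Python; it uses the phone number from the end of this article as the to-be-remembered material, and grouping it into three chunks is just one obvious way to do it):

```python
# Ten individual digits push past the classic 7 +/- 2 estimate for
# short-term memory capacity...
digits = list("5027687846")
print(len(digits), "items to hold:", digits)

# ...but regrouped into familiar phone-number chunks, the same content
# becomes only three items, each of which can be unpacked on demand.
chunks = ["502", "768", "7846"]
print(len(chunks), "items to hold:", chunks)
```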

 

Since our aim is to increase what researchers call our metamemory, or our awareness of the processes involved in memorizing information and in monitoring what we know and have previously learned, what we want is a simple, parsimonious, yet conceptually robust model of memory that we can practically adopt to determine which learning strategies to use when processing novel learning content. Craik and Lockhart’s levels of processing model is just such a model.

 

Craik and Lockhart suggest that “the memory trace can be understood as a by-product of perceptual analysis and that trace persistence is a positive function of the depth to which the stimulus has been analyzed.” Let’s unpack what this means.

 

First off, a memory trace (also known as an “engram”) is the term for how a memory is physically inscribed in actual brain matter (i.e., in neurons). Perceptual analysis is the way in which the subject processes the to-be-learned content (the “stimulus”). So, what the authors are saying is that the subject will not produce a durable memory trace if he merely engages in shallow forms of analyzing or processing the to-be-learned stimulus. However, if the subject takes the stimulus (let’s say it’s a research article), activates prior knowledge related to the content of the article, connects the novel information in the article with his prior learning (by modifying, extending, or contradicting his original schemas), and perhaps produces some new ideas regarding how his new learning can be applied in a socially useful way, then it’s likely that he has elaborated his memory trace to the point that it will be accessible for future use.

 

I mentioned the word “elaboration.” This is an important word to remember, because “elaborating the stimulus” via elaborative processing or rehearsal strategies at a “deep,” semantic (meaningful) level is what substantively strengthens the memory trace, makes it more durable, and allows it to be accessed in the future via a variety of retrieval cues and routes. “That’s nice and all,” you may say, “but what precisely is meant by ‘elaborative processing’?” To that I have a relatively simple answer: making meaningful connections. A general heuristic or rule of thumb is this: the more meaningful and interesting connections you make between your prior learning (your “schemas”) and the novel to-be-learned stimulus, the deeper you will have processed the stimulus, and the more durable and persistent the resulting memory trace will be. Strong and durable memory traces are a consequence of making meaningful connections, a.k.a. “elaborative” or “deep” encoding and processing.
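If you like seeing a rule of thumb stated schematically, here is a toy sketch of that heuristic (Python, purely illustrative; treating each meaningful connection as one extra retrieval route, and counting connections as a stand-in for trace strength, is my own simplification, not anything Craik and Lockhart quantify):

```python
# Toy illustration of the elaboration heuristic described above: each
# meaningful connection to prior knowledge acts as an extra retrieval route,
# so more connections -> a deeper, more durable memory trace.

def trace_durability(meaningful_connections: list[str]) -> int:
    """Crude stand-in for trace strength: count the meaningful connections."""
    return len(meaningful_connections)

shallow = trace_durability([])  # rote rereading: no connections made
deep = trace_durability([
    "contradicts what I believed about cramming",
    "extends my existing schema for practice testing",
    "suggests a concrete change to my study schedule",
])

print("shallow processing:", shallow, "retrieval routes")
print("deep, elaborative processing:", deep, "retrieval routes")
```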

 

This is just the beginning of much practically useful information in the Craik and Lockhart article. However, if you’re new to the subject of learning about learning, then I am afraid I may have already overwhelmed you! If you would like more information on the modal model, the levels of processing model, or metamemory more generally, please feel free to give me a call or text at 502-768-7846. Or you may shoot me an email at pence.z.a@gmail.com! Or visit my website at pencetutoring.com!



