Perverse Incentives: The KS2 Reading SAT and the Myth of Generic Reading Comprehension Skills

This is the third blog in the series on the Key Stage 2 Reading SAT. The first, outlining the series, is here, and the second, on whether the KS2 Reading SAT is testing reading, is here.

The Reading SAT and Generic Reading Comprehension Skills 

Reading comprehension is ultimately making meaning of what is being read.

For most of my teaching career, reading was synonymous with teaching generic reading comprehension skills to pupils and expecting them to ‘master’ them. This is based on the idea that readers make meaning by applying a set of skills to a text.

In the past, this was driven by the assessment focuses, and the Reading SATs in KS1 and KS2 were a reflection of these.

While the KS1 Reading SAT is being replaced by the baseline, the KS2 Reading SAT will remain. Unreformed, it will continue to be the driving force behind the practice of teaching generic reading comprehension skills in primary schools.

There are three reasons why the KS2 Reading SAT can and does lead to poor practice in teaching reading:

1. It is underpinned by a belief that generic reading comprehension skills exist and that they can be measured.

2. The weighting given to different assessment focuses in the past, and to reading strands now, skews the teaching of reading toward generic reading comprehension skills.

3. The inclusion of unseen extracts with (varying levels of) unknown content is a direct result of the belief that application of reading skills – not the content of the text – results in comprehension. (This will be tackled in a later blog.)


The National Curriculum and Generic Reading Comprehension Skills

There are attainment targets in the National Curriculum that state pupils should be able to demonstrate their understanding of texts in particular ways, such as making predictions from what is stated and implied. While their inclusion was, with hindsight, a mistake, they are not stated as skills to be mastered.1

As outlined in the previous blog, it is the Standards and Testing Agency (STA) that has selected attainment targets from the National Curriculum, interpreted them as skills to be measured and converted them into reading strands. The test design divides the reading strands into generic reading comprehension skills (reading strands 2a – 2e) and analysis of the text (reading strands 2f – 2h).

Weightings

The attainment targets in the National Curriculum have equal weighting but their reading strand equivalents do not.

Under the old testing system, the weightings were more equally distributed between generic reading comprehension skills and analysis of the text.

Average Percentage for Assessment Focuses (2003-2015)

AF2 – Understand, describe, select or retrieve information, events or ideas from texts and use quotation and reference to text: 23%
AF3 – Deduce, infer or interpret information, events or ideas from texts: 40%
AF4 – Identify and comment on the structure and organisation of texts, including grammatical and presentational features at text level: 9%
AF5 – Explain and comment on writers’ uses of language, including grammatical and literary features at word and sentence level: 12%
AF6 – Identify and comment on writers’ purposes and viewpoints and the overall effect of the text on the reader: 13%
AF7 – Relate texts to their social, cultural and historical contexts and literary traditions: 3%

Generic reading comprehension skills (AF2 – AF3): 63%
Analysis of the text (AF4 – AF7): 37%


Unlike the reading strands, the AFs relating to generic reading comprehension skills consisted of several skills that needed to be taught. AF3, which included deduction and inference questions, was given the greatest weighting; deduction and inference were assumed to be the most important ‘skills’ good readers demonstrated.

The combination of AF2 and AF3 was enough to enable a pupil to gain a Level 4 in the old assessment system. This is not to say the other assessment focuses were not taught, but, in my experience, the weighting impacted on the time devoted to each in guided reading lessons. It also informed the content of interventions for those struggling to read.

Average Percentage for Reading Strands (2016-2019)

2a – Give/explain the meaning of words in context: 18%
2b – Retrieve and record information/identify key details from fiction and non-fiction: 31.5%
2c – Summarise main ideas from more than one paragraph: 3.5%
2d – Make inferences from the text/explain and justify inferences with evidence from the text: 40%
2e – Predict what might happen from details stated and implied: 1.5%
2f – Identify/explain how information/narrative content is related and contributes to meaning as a whole: 1%
2g – Identify/explain how meaning is enhanced through choice of words and phrases: 3%
2h – Make comparisons within the text: 1.5%

Generic reading comprehension skills (2a – 2e): 94.5%
Analysis of the text (2f – 2h): 5.5%


The main difference between the old and new Reading SATs is that the current one is even more of a generic reading skills test than its predecessor. Again, inference questions are given the greatest weighting. The reading strands themselves are rendered meaningless by the shoehorning of questions from the test item bank – thus half the vocabulary questions involve deduction or inference (this will be examined in the next blog).

Typically, the weightings of the reading strands drive what is considered to be important when teaching reading comprehension throughout Key Stage 2. 

Moreover, in order to track progress, SAT-style tests with similar weightings are used across other year groups. This shapes the priorities in teaching reading comprehension in every year group, despite the research and evidence in this field. Thus, schools are inexorably wedded to the teaching of generic reading comprehension skills as a direct result of the Reading SAT.

But why are generic reading comprehension skills being taught and assessed?

The Myth of Generic Reading Comprehension Skills

The belief that generic reading comprehension skills exist first arose in the 1950s.2  

Identifying what ‘good’ readers could do that ‘poor’ readers couldn’t does seem like a logical starting point for bridging the gap. Claims about what good readers do include:

  • They make predictions as they read.
  • They deduce the meaning of unknown words either by word substitution or by using context clues.
  • They are able to make inferences.

But do they?

Prediction

The idea that good readers predict or guess what might happen next appears to have no research base at all; it only emerged as a ‘skill’ in the 1980s.3

Deduction

This ‘skill’ is the epitome of a lethal mutation. In an era that prized pupils discovering learning for themselves over explicit instruction, the idea of pupils deducing the meanings of unknown words was a seductive one.

Good readers can deduce/infer the meaning of around 15% of the unknown words they encounter.4 Despite the low percentage, the resulting increase in vocabulary acquisition can be substantial, depending on the number of books a child reads.

It does not, however, constitute a skill good readers have mastered and can apply regardless of the word or text they encounter. 

This ability to deduce the meaning of some unknown words mutated into teaching strategies intended to lead to the acquisition of the ‘skill’ of deduction.

Word Substitution

The idea that good readers substitute familiar words for unknown ones, like predicting, appears as a reading strategy but, as far as my investigation took me, lacks an evidence base. It also doesn’t seem to bear any resemblance to reality.

As skilled readers, we do not waste time substituting unknown words with alternatives; we look the word up! It’s only in a test situation that substitution might be useful. Even then its use is limited: it depends on extracting meaning from the rest of the sentence and on having the vocabulary knowledge that would enable a substitution.

In normal reading circumstances, substituting words is a poor strategy to teach struggling readers. It stalls the reader and wastes their time. It can lead to wild guessing, which is counter-productive to the goal of making meaning from what they are reading. Worse still, it can embed the idea that reading is guessing in the very pupils who are struggling the most.

Reading on, asking for the meaning of the word or looking it up are clearly better strategies.

Context Clues

Good readers can use context clues such as definitions, examples and synonyms to deduce the meanings of unknown words.5 But this only works if such clues are included, which is not always the case.

All writers make assumptions about the vocabulary and background knowledge the reader possesses. Most of the time writers don’t write with the purpose of supporting the reader to deduce the meaning of potentially unknown words. Even when they do, for example by including glossaries, the exact words that are unknown to the reader will vary from person to person.

Whether the context clues are incidental or intentional, using them involves having the vocabulary and background knowledge to understand the context clue in the first place. 

The evidence shows that teaching poor comprehenders to use context clues to understand unknown words does not result in increased vocabulary acquisition. This is due to two factors: they lack vocabulary, and their understanding of the vocabulary they do have is less well developed.6

The truth is that, the vast majority of the time, good readers, like poor readers, don’t deduce the meaning of unknown words either! I doubt anyone would advocate turning this into a skill and teaching it.

Inference

The teaching of inference as a skill is also questionable.

Good readers can make inferences if they have made meaning from the sentences written and if they have the background knowledge to be able to fill in what the writer has omitted. Thus it is not the skill but the knowledge that needs to be taught.

Knowledge not Skills

Once we stop thinking of readers in the absolute categories of ‘good’ and ‘poor’, the comprehension process makes more sense. ‘Good’ readers simply understand more of what they are reading, more of the time. They, like poor readers, will have vocabulary and knowledge gaps that need addressing.

Ultimately, comprehension is about making meaning from the writing being read, which is determined by the reader’s pre-existing vocabulary and knowledge, not the ability to apply a set of skills to a text. 

At best, what is thought of as generic reading comprehension skills should just be seen as types of questions, some of which can be used to ascertain if a pupil comprehends what they’ve read.7 At worst, generic reading comprehension skills are a barrier to the effective teaching of reading.

Yet many will claim, correctly, that teaching generic reading comprehension skills does lead to improvements in reading comprehension test scores.

Generic Reading Comprehension Skills and Test Scores

If the skills are actually types of questions, then clearly teaching them would improve scores in a test that contains these questions and measures the ability to answer them.

Teaching generic reading comprehension skills can improve test scores in the following ways:

1) Pupils are taught to read the question carefully to avoid mistakes such as ticking two boxes instead of one.

2) Teaching involves examining the vocabulary and concepts contained in questions. If a pupil does not know what ‘retrieve’ means, has no concept of what ‘a reason’ is or does not understand what a ‘prediction’ involves, then teaching them this means they can at least have a go at answering the question.

3) It gives children exposure to the kinds of answers they are expected to give and what to write, e.g. the different formats such as short and long answers, multiple choice, and numbering when ordering or sequencing. Again, the outcome is that pupils are more likely to attempt to answer the questions.

These gains in tests should not, however, be mistaken for improvements in comprehension. Teaching generic reading comprehension skills in reading lessons is just teaching to the test. 

Such lessons can have some benefits, such as exposure to different texts and therefore new vocabulary. However, the extent to which this leads to long-term vocabulary acquisition, development of word knowledge or improved comprehension when reading other texts is severely limited, because the focus remains on acquiring and practising skills, not knowledge.

Once the issues of knowing the question type and answer format are put to one side, pupils’ inability to answer these questions comes down to lacking the vocabulary and background or domain knowledge needed to comprehend the text.8 Repeated practice does not address this gap, so it does not lead to gains in test scores or comprehension.

Measuring Question Types is not Measuring Comprehension

We cannot equate the ability to answer such questions with reading comprehension, because it is only one way of assessing comprehension. It is also possible to assess comprehension by, for example, asking pupils to draw and/or annotate a picture or diagram.

Measuring the ability of pupils to answer different types of questions, as the KS2 Reading SAT does, is measuring one of the means of assessing comprehension, not an assessment of comprehension itself. 

Reading comprehension is a test of knowledge, not of skills. Pupils’ ability to demonstrate their comprehension depends on their ability to understand the meaning of what they read. This, in turn, depends on the vocabulary, domain knowledge and background knowledge of the content they already possess, not on their ability to apply generic skills to an unseen text.

In the next blog I will delve into the ‘skills’/question types the KS2 Reading SAT is actually testing (spoiler – they don’t match the reading strands!) and compare the idea of generic reading comprehension skills (which don’t improve comprehension) with reading strategies, which might be of some, albeit limited, use when teaching reading.


1 In my opinion, the National Curriculum programmes of study for English need to be reframed into, first, the basic English required to become literate, and then English Literature and Language. This distinction is important in primary as it separates what is generic from what is subject specific. It would also allow for greater alignment with secondary.

2  https://shanahanonliteracy.com/blog/comprehension-skills-or-strategies-is-there-a-difference-and-does-it-matter 

3 The Reading Ape does a thorough analysis of the research field in this blog: https://www.thereadingape.com/single-post/2020/05/02/I-predicta-riot

4 Swanborn, M.S.L. and de Glopper, K. (1999) Incidental Word Learning While Reading: A Meta-Analysis. https://journals.sagepub.com/doi/10.3102/00346543069003261

5 Dechant, E. (1991) Understanding and Teaching Reading: An Interactive Model. New Jersey: Routledge.

6 Shefelbine, J.L. (1990) Student Factors Related to Variability in Learning Word Meanings from Context. https://journals.sagepub.com/doi/10.1080/10862969009547695

7 https://shanahanonliteracy.com/blog/comprehension-skills-or-strategies-is-there-a-difference-and-does-it-matter

8 The role of knowledge in comprehension has been highlighted by, among others, Hirsch, E.D. (2016) Why Knowledge Matters: Rescuing Our Children from Failed Educational Theories. Massachusetts: Harvard Education Press, and Willingham, D.T. (2009) Why Don’t Students Like School? A Cognitive Scientist Answers Questions About How the Mind Works and What It Means for Your Classroom. San Francisco: Jossey-Bass.