Reading Strands are Meaningless

This is the fourth blog in the series – the first covers the overall issues with the KS2 Reading SAT, the second whether it tests reading, and the third how it embeds the teaching of generic reading comprehension skills.

In this blog, I will examine what the KS2 Reading SAT is testing and whether this is useful to support the teaching of reading.

The Reading Strands

Here’s a short quiz to begin with.

If you are struggling to spot the difference, it’s because there really isn’t one.

The reading strands are reinterpretations of the attainment targets in the National Curriculum (see blog 2). This enabled the Standards and Testing Agency to continue using the test item database it had been building up since 2012.

I’ll hold my hands up, both as a SATs marker and a teacher. I knew the questions were familiar and noticed the anomalies with question stems but didn’t look at the coherence of the reading strands overall.

What is the KS2 Reading SAT really measuring?

Analysing the questions from the 2016 to 2019 SATs, it is clear that they come mainly from the old Assessment Focuses 2 and 3 of the previous testing regime.1 Few new types of questions have been formulated to test the newer parts of the reinterpreted attainment targets.2

2a – Vocabulary:

The vocabulary questions involve retrieval, inference or deduction. The concern here is the deduction questions: contextual clues were available for only 44% of those asked. Clearly they are not constructed with any consideration of whether deduction is actually possible.

It is not clear how data from this strand could be used by teachers when only 9% of the questions require knowledge from the National Curriculum. There is also a focus on asking about the more obscure meanings of words (e.g. spat) or about words pupils are unlikely to have encountered (e.g. illegal, recesses). The only purpose of this is to decide who can be counted as a greater depth reader – a meaningless term which I will look at in a later blog.

2b – Retrieval:

Retrieval, in so far as it is a skill, simply involves knowing that the text contains the answers and extracting what is needed to answer a question. This takes little time to teach: my Year 2s picked it up in the space of 4-6 weeks, and over the course of the year they didn’t need reminding to use the text to answer questions. Beyond that, the ability to answer retrieval questions depends on whether the pupil has understood the question and made meaning from the text. This is all dependent on knowledge.

To be fair, this is the only strand with some coherence in the questions asked. 87% of the questions are retrieval, while the rest are inference. 34% of the questions paraphrase the relevant sentence, which helps pupils find the answer. While paraphrased questions have their issues (it is easier for pupils to retrieve the word or phrase needed for the answer), the synonyms used to avoid paraphrasing can also cause problems: in such cases it is difficult to know whether the pupil failed to understand the wording of the question or the text itself.

2c – Summary:

This strand is a missed opportunity. Instead of asking children to summarise in their own words, they are asked to select the correct summary – and this has only occurred twice. Puzzlingly, there are more sequencing questions – which are actually retrieval questions – in this strand.

2d – Inference:

As discussed in the last blog, inference is not a skill that can be applied to any text; it requires background and domain-specific knowledge.

Writers don’t write for the convenience of teachers, or of test creators who want around forty per cent of the marks to sit in an inference reading strand.

This insistence on weighting would appear to be the reason for reinterpreting the National Curriculum attainment target to include finding evidence for inferences already made in the question. These are not inference questions and are no different from non-paraphrased retrieval questions. While there is merit in finding evidence from the text to support an assertion, the ability to do so for an inference already made tells us nothing about whether a pupil can make an inference.

Only 43% of the questions in this strand give any information about pupils’ ability to make inferences; the rest are retrieval questions. The inclusion of fact or opinion questions in this category makes little sense: answering them is about knowledge and knowing, not inferring.

Why this strand can’t simply be weighted less and made more coherent is a question for the test designers. I can only guess that, needing to make the test appear challenging and therefore in line with the expectations of the National Curriculum, they feel they have to keep this strand as the one with the greatest weighting.

2e to 2h:

2e – Prediction: Only one prediction question has ever been asked. There is no merit in testing prediction as a skill (see blog 3).

2f – Meaning of the Text: These are ‘understand’ questions relating to the whole text. 

2g – Choice of Language: These are either retrieval or inference questions: the former have involved finding evidence, the latter explaining the meaning of quotations.

2h – Comparison within Texts: To say these questions involve making comparisons within the text is a stretch. While they do provide useful information about retrieval and inference, they tell us little about comparison within longer texts or between texts, which was the aim of the attainment target in the National Curriculum.

Breakdown of the strands by Generic Reading Comprehension Skills (2016-2019):

What this data highlights is that the reading strands are meaningless. Extracting data from the KS2 Reading SAT – or indeed from Reading-style SATs for other year groups – requires deeper analysis of the questions and answers if it is to be of any use to teachers.

Implications for Teaching Reading

There are no generic reading comprehension skills and organising teaching around reading strands is pointless. The focus should be on teaching pupils the knowledge they need to enable them to extract meaning from the text. 

In terms of reading lessons, there is evidence of some benefit in teaching reading strategies – e.g. skimming, re-reading and summarising – to support pupils’ comprehension.3 But as Willingham points out, even the reading strategies that do support comprehension are just a ‘bag of tricks’. The one-time boost they deliver is not a substitute for teaching the knowledge pupils require to extract meaning from the text.4

The question stems and answer formats themselves are useful and can be adapted for general teaching.

While we have the KS2 Reading SAT, I think we need to adopt a pragmatic stance towards it.

The following gives a breakdown of question and answer types.

The green highlights those question and answer formats that come up most frequently and give teachers the most information. The orange ones are encountered less frequently but are useful to include from Year 5 onwards, as children will by then have the knowledge to attempt them usefully. The light red ones do not require much focus and should be left to test practice.

The bright red is purely a test strategy of last resort. While many teachers would ask questions and point out contextual clues that help to infer an unknown word as part of teaching a text, there is little to be gained from teaching this as a strategy or a skill. The pupils who pick up on it do so without much teaching, and those who don’t lack the knowledge (see blog 3).

Teaching Reading

We have removed the guessing component from word reading. It’s time to do this with reading comprehension too. The following diagram is an attempt to represent what the focus for each stage of teaching reading should be:

We need to check progress at each stage rather than assume it. Years 2 to 3 are crucial for many children: this is the point where they are expected to move from phonics to comprehension. One of the main reasons many struggle is a lack of reading fluency.

This model has implications for internal assessment too. 

  • What aspect of reading are we testing and when? 
  • Are we tracking progress using SAT style reading tests across year groups? 
  • Can teachers use test results to improve teaching or decide on suitable interventions for pupils?
  • When we know that background and domain knowledge matter, what is the purpose of using tests containing random extracts unrelated to the National Curriculum?

The last question is going to be the focus of the next blog – the use of unseen extracts with mostly unknown content in reading assessments.


References:

1. Assessment Focus 2 (understand, describe, select or retrieve information, events or ideas from texts and use quotation and reference to text) and Assessment Focus 3 (deduce, infer or interpret information, events or ideas from texts).

2. My analysis is qualitative and will suffer to some degree from subjectivity, but I am happy to share the data analysis of the questions. While ‘select’ was a category, in effect it broke down into retrieval and inference, so I assigned those categories instead.

3. Quigley, A. (2020). The Reading Gap. Oxon: Routledge.

4. Willingham, D.T. (2006/2007). The Usefulness of Brief Instruction in Reading Comprehension Strategies. https://www.aft.org/sites/default/files/periodicals/CogSci.pdf