Using objectives' assessment data to measure pupil progress

A common question we get asked here at Insight is: can assessment data derived from tracking learning objectives be used to measure pupil progress? This is a fair question. Having spent time inputting assessments against the various learning objectives in Insight, it's understandable that teachers would want to use the output to measure how far the pupil has come in their learning. Unfortunately, as is always the case with progress measures, the answer is not straightforward.

Summary data

As teachers input their assessments against the learning objectives - usually entering 0, 1, 2, or 3 to denote the pupil's depth of understanding - Insight can calculate figures that summarise the pupil's overall security in the subject.

If you don't have any calculations in your objective grids, please contact the support team.

Options include:

  • Average depth: the average of the scores assigned to each objective, ignoring those that have not been assessed.
  • % secured: the percentage of the total number of objectives that the pupil has secured (i.e. the denominator includes both assessed and non-assessed objectives). This figure starts at 0% and can rise to 100% by the end of the year or period of learning.
  • % secured to date: the percentage of assessed objectives that the pupil has secured so far (i.e. the denominator includes assessed objectives only). This figure can be 100% at any point in time if the pupil has secured everything taught so far and therefore has no gaps in learning.
  • Score: the sum of the scores assigned to each objective.
  • Fraction: the number of objectives secured out of the total number, e.g. 12/30.
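
To make the arithmetic concrete, here's a minimal sketch of how these figures could be derived from a pupil's objective scores. The scoring model is an assumption for illustration - depth scores of 0-3, unassessed objectives recorded as None, and a score of 2 or more counting as 'secured' - rather than Insight's actual implementation:

    # Illustrative only: an assumed scoring model, not Insight's own code.
    # scores maps each objective to a depth score (0-3), or None if unassessed.
    scores = {
        "obj_01": 3, "obj_02": 2, "obj_03": 1,
        "obj_04": 0, "obj_05": None, "obj_06": None,
    }

    SECURE_THRESHOLD = 2  # assumption: a depth score of 2+ means 'secured'

    assessed = [s for s in scores.values() if s is not None]
    secured = [s for s in assessed if s >= SECURE_THRESHOLD]

    average_depth = sum(assessed) / len(assessed)             # unassessed ignored
    pct_secured = 100 * len(secured) / len(scores)            # denominator: all objectives
    pct_secured_to_date = 100 * len(secured) / len(assessed)  # denominator: assessed only
    score = sum(assessed)                                     # sum of assigned scores
    fraction = f"{len(secured)}/{len(scores)}"                # secured out of total

    print(f"average depth: {average_depth:.2f}")              # 1.50
    print(f"% secured: {pct_secured:.0f}%")                   # 33%
    print(f"% secured to date: {pct_secured_to_date:.0f}%")   # 50%
    print(f"score: {score}, fraction: {fraction}")            # 6, 2/6

Note how the two percentage figures differ only in their denominator, which is exactly the distinction drawn in the list above.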

Commonly, schools want to use these figures to measure progress. But can it be done, and what are the issues?

Expected progress

Insight's various reports can certainly use the objectives' assessment summary data to generate numbers that, on the face of it, act as a measure of progress. Progress overviews, and the progress options in tables and headline reports, will calculate positive and negative figures that show whether the summary data has gone up or down between two assessment points, and this can be interpreted as good or poor progress.

The main issue is: what figure do we enter into the expected progress box on these reports?

Expected progress is the figure against which each pupil's change in score will be compared. In the old levels system, our expectation may have been for pupils to make three points of progress per year, so we would enter '3' into the expected progress box. With the now more popular PITA (point-in-time assessment, i.e. flat) system, where pupils commonly remain in the same band over time, we are more likely to enter '0' into the box, and this is the default setting in Insight. And with standardised scores or reading ages, we should consider entering a range of figures to allow for the 'noisiness' of test scores. All these options are explained in this help guide.

But when it comes to using summary data calculated from the assessments of objectives, what is our expected rate of progress? As stated above, there is sadly no easy answer.

Average depth

The best option is probably to stick with the default expected progress rate of '0'. This will then identify those pupils whose average depth scores have increased (positive scores colour coded blue), those whose scores have stayed the same (zero scores colour coded green), and those whose scores have decreased (negative scores colour coded red). This is, however, somewhat simplistic because in some cases - i.e. those pupils that are secure in their learning and have average depth scores of 2 - we do indeed want the scores to at least remain the same over time (expected progress = 0). But for those pupils that are less secure in their learning, whose average depth scores are less than 2, we would expect the scores to increase. There is no 'one-size-fits-all' expected rate of progress.
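
As a rough sketch of that triage logic, the comparison simply subtracts the earlier average depth from the later one and compares the difference with the expected rate. The pupils and figures below are invented for illustration, and the colour labels mirror the blue/green/red coding described above:

    # Illustrative triage: change in average depth vs an expected rate of 0.
    EXPECTED_PROGRESS = 0  # the default: no change expected

    pupils = {
        # name: (average depth at first point, average depth at second point)
        "Pupil A": (2.0, 2.0),  # secure and maintaining
        "Pupil B": (1.2, 1.8),  # less secure but improving
        "Pupil C": (1.5, 1.1),  # score has fallen
        "Pupil D": (1.0, 1.0),  # not secure and not improving
    }

    for name, (start, end) in pupils.items():
        progress = end - start
        if progress > EXPECTED_PROGRESS:
            colour = "blue"   # above expected
        elif progress < EXPECTED_PROGRESS:
            colour = "red"    # below expected
        else:
            colour = "green"  # at expected
        print(f"{name}: {progress:+.1f} ({colour})")

Pupils A and D both show zero progress and turn green, yet only Pupil A is where we want them to be - precisely the 'one-size-fits-all' problem described above.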

Further complicating matters is the fact that many learning objectives - especially in reading and writing - will not be secured until the end of the year, so average depth scores may remain low until the summer term. This makes it even harder to use the average depth score to measure progress and compare subjects.

Percentage of objectives secured

It is tempting to assume a certain percentage of objectives that each pupil should secure each term and treat that figure as our expected rate of progress. Indeed, this was common in systems in the immediate aftermath of the removal of levels, where pupils were deemed to be making expected progress if they had secured 33% of the objectives by Christmas and 67% by Easter. But the reality is that no curriculum is designed to be delivered in neat one-third blocks each term, and it's because of this that we at Insight resisted going down the route of automatically assigning pupils to assessment bands as teachers entered the data for each learning objective. Again, you can use this data to measure progress, but coming up with an expected rate of progress - an amount that each pupil's percentage should increase by - is not straightforward and is likely to differ from pupil to pupil. And, of course, when a pupil starts the next year - depending on set-up options - they go back to 0%, which makes it look like they have gone backwards.

Percentage of objectives secured to date

Because this figure only takes account of the objectives assessed so far and ignores those that have not yet been assessed, it theoretically avoids the problems of the previous option. Here, we expect a pupil to be secure in everything taught so far, so we can assume the expected rate of progress to be 0: if they are secure in everything at one point then we expect them to maintain that level of understanding at the next point; 100% should remain 100%. But what of those pupils that are not secure in everything, that perhaps are only secure in half the content that has been covered so far? In these cases, as with low average depth scores, we would expect the numbers to go up, not stay the same. Again, there is no common, expected rate that can be applied to all pupils; it depends on their start point.

Conclusion

Measuring progress is tricky and is best done using some form of standardised assessment that is designed for the purpose. As outlined in the examples above, attempting to measure progress using data generated from the assessment of learning objectives is particularly problematic because there is no strict rule on what constitutes the expected rate of progress. For one pupil it may be to maintain the same average depth score or percentage over time, whereas for others we may expect the score or percentage to increase by a smaller or larger amount. When using these types of data to measure progress, our best option is to assume an expected rate of 0 - no change - and that will at least separate the group into those whose scores have gone down, stayed the same, or increased. This then acts as a triage process beyond which teachers can unpick the details in their pupil progress meetings, aware of the fact that pupils with the same progress scores can be on different journeys.

Alternatively, simply present the percentages of objectives secured each term in columns in a table and use those figures to assess progress rather than focusing on the difference between two numbers. This will allow you to easily differentiate between those pupils that have the same progress scores but contrasting learning journeys, e.g. those that have consistently secured all the objectives from those that have maintained just half of them. In doing so we are better equipped to assess each pupil's progress on a case-by-case basis.
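
To illustrate, here is a small sketch that prints invented termly percentages for two hypothetical pupils whose progress scores are identical but whose journeys are not:

    # Two pupils with the same autumn-to-summer change (zero), but very
    # different learning journeys. All figures are invented for illustration.
    pupils = {
        "Pupil A": {"Autumn": 100, "Spring": 100, "Summer": 100},  # consistently secure
        "Pupil B": {"Autumn": 50, "Spring": 50, "Summer": 50},     # persistent gaps
    }

    print(f"{'':10}{'Autumn':>8}{'Spring':>8}{'Summer':>8}{'Change':>8}")
    for name, terms in pupils.items():
        values = list(terms.values())
        change = values[-1] - values[0]
        row = "".join(f"{v:>7}%" for v in values)
        print(f"{name:10}{row}{change:>8}")

Both pupils show a change of 0, but the columns make it obvious that only Pupil A's journey is the one we want.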

