How do we measure progress and progression?

Measuring what, with what, and with what impact on learning?

I was recently asked to feed into a review of support for musical progression in the UK, drawing on the work of the Musical Progressions Roundtable amongst other things. This is spread across three blog posts: one on sparking up engagement, one on supporting progression and this one on measuring progress.

First of all, I would separate providing support for progress from measuring progress, if for no other reason than that the measuring should be secondary, albeit integral, to the supporting. Some things are easier to measure than others (e.g. sight-reading accuracy versus creativity), but measurability should not have a disproportionate impact on support for progression.

The reality is, though, that it does. It is harder to measure creativity, certainly numerically, than sight-reading accuracy, which means that it is harder to build measures, tests and assessments around creativity than around sight-reading. This, in turn, tends to mean that tests, measures and assessments for sight-reading accuracy play a bigger role in assessment targets, in progression route goals and, in turn, in support for progression than do those for creativity. It also tends to mean that supporting adults often find it harder to understand what creativity is and how to support it. At the end of the day, sight-reading gets more support than creativity. (Measurability isn't the only reason for this, but it is a significant, if not always noticeable, one.)

But it's clearly nonsense to assert that if something is easy to measure then it must be very important for a learner's development, and that if it's hard to measure then it is not important – which is why measurement should be secondary to progress. Yet measurement isn't unimportant: it is generally integral to progress and progression. For example: learners measuring and assessing their own achievements, teachers measuring their own effectiveness, institutions measuring their impact, governments supporting employers to compare one person with another for a particular role, and so on.

So to measure progress, you need to come back to the 'getting better at what?' question: what is being measured, by whom, and why? Once these questions are answered, it's much easier to make sense of the many different ways of measuring different aspects of (musical) progress: graded music exams, sight-reading tests, learner perceptions, social impact scales, audience reviews of performances, social network sharing, record sales, gauges of self-satisfaction and self-expression, peer review etc. They all do different things, for different purposes, for different people, and with different journey destinations in mind.

The MPR's list of the many ingredients required in an environment for progression, and of the stakeholders required to build it, provides a good framework for reviewing progress measurements. For example, traditional music assessments are often very effective at measuring the development of appropriate skills and abilities, but less effective at measuring whether a learner has identified and fulfilled personal goals, or at measuring social development, self-assessment or independent motivation. But there are plenty of other measurements for these things that might be used alongside traditional assessments.

In summary:

  • Measuring and measurements are generally integral to support for progress but they should measure progress, not direct it
  • Some things are easier or more straightforward to measure than others but 'measurability' should not dictate the direction of a progression journey
  • It's important to ask what is being measured, by whom and why
  • And then to identify different measurements appropriate to the different ingredients in an environment for progression.

How do we / our young people measure outcomes and attainment?

Here are some thoughts from my perspective on a potential framework for mapping measurements, looking at what those measurements are trying to do. It's a two-axis graph, with colour representing a third axis. The dots represent different measurements (their positions in the picture are illustrative only).

In this framework:

Customisation is concerned with how appropriate the measurement is to the individual: customised to where they're going, to what they're trying to do, to where they are now, and to what supporting adults can and want/need to do. E.g. 1-1 mentoring and personalised learning plans.

Comparability is more concerned with uniformity and benchmarking: "how good am I?", "how good is my child?", "are they better than so-and-so?", "how do they stack up against the prevailing value set?", "which of these two people should I appoint?". E.g. national exams (to a large extent), competitions.

Motivation is where the measurement itself is attempting to motivate the learner. Measurements and assessments can be a driver for learning or a disincentive, but not all measurements recognise this. E.g. Children's University Passport.

Pragmatism is more concerned with what has to be done and what can feasibly be achieved given available time, teachers, money, interoperation with others etc. "How could we build a system for all of this?". This is often closely, although not always consciously, related to who is doing the assessment. 

Process is concerned with measuring what happens over time – skills, behaviours, actions, effort, learning etc. E.g. self-reflection, learning journals or shadowing.

Outcome is concerned with measuring what comes out of the process – outputs, creations, attainments, knowledge. E.g. record sales, A-Levels.

In an ideal world, all good measurements would try to take on something of all of these – hence the graph. I suspect that in reality many mainstream and formal measurements/assessments would end up in the bottom-left quadrant, whereas more informal ones would be in the top right. Certainly the top right is the region most closely allied with what the MPR has been working with (learner at the centre, importance of motivation, exceptional rather than best etc.).
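
By way of illustration, here is a minimal sketch (in Python, with matplotlib) of how such a map might be drawn. The pairing of the six dimensions into axes – comparability/customisation on x, pragmatism/motivation on y, process/outcome as colour – is my own reading of the framework, and the example measurements and their positions are invented, just as the dots in the original picture are:

    # A minimal, illustrative sketch of the two-axis (plus colour) framework.
    # The axis pairings and every position below are assumptions, not MPR data:
    #   x: comparability (0) <-> customisation (1)
    #   y: pragmatism (0) <-> motivation (1)
    #   colour: process (0) <-> outcome (1)
    import matplotlib.pyplot as plt

    # (name, customisation, motivation, outcome-ness) -- illustrative guesses only
    measurements = [
        ("Graded music exam",       0.20, 0.30, 0.90),
        ("GCSE / A-Level",          0.10, 0.20, 0.95),
        ("1-1 mentoring plan",      0.90, 0.80, 0.20),
        ("Learning journal",        0.85, 0.70, 0.10),
        ("Children's Uni Passport", 0.50, 0.90, 0.40),
        ("YouTube views",           0.30, 0.60, 0.80),
    ]

    names, xs, ys, cs = zip(*measurements)
    fig, ax = plt.subplots()
    dots = ax.scatter(xs, ys, c=cs, cmap="coolwarm", vmin=0, vmax=1)
    for name, x, y in zip(names, xs, ys):
        ax.annotate(name, (x, y), xytext=(4, 4),
                    textcoords="offset points", fontsize=8)
    ax.set_xlabel("comparability <-> customisation")
    ax.set_ylabel("pragmatism <-> motivation")
    fig.colorbar(dots, ax=ax, label="process <-> outcome")
    plt.show()

On this (invented) placement, the formal assessments do indeed cluster bottom left and the learner-centred ones top right, which is the pattern suggested above.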

The solution, in perhaps all cases, is to try to use a variety of measurements. Why? To take one example, a key component in any national assessment is comparability – enabling students to compare themselves to each other, enabling employers / teachers etc. to pick out the strongest or weakest, enabling parents to pick out a school. Because of this need for comparability, there inevitably tends to be an element of convergence: national assessments need to establish a single set of core criteria against which everyone is judged, or perhaps a variety of criteria which are held to be equivalent. This makes life difficult for things that rely on divergence and diversity, such as creativity and imagination or enterprise and initiative – all of which tend to revolve around making something new (even if it's only new to one person).

So in assessing for progression as a whole, it would be important to include other measurements alongside national assessments. For example, schools which focus only on GCSE and A-Level results (perhaps because of parental pressure, maintaining school performance measures, or lack of understanding of the alternatives) risk skewing their students' education towards the things that those assessments assess. An employer, by contrast, will typically look at a range of measurements in a potential employee: formal exam results, other qualifications (e.g. Duke of Edinburgh, Arts Award etc.), experience and work history, and other 'measurements' – e.g. a candidate who has managed to set up a local community initiative.

What’s being measured?

If the above is about the purpose of the measurement – the 'why?' – then there's a second set of questions about 'what' the measurement is measuring, which you could categorise thus:

  • Attainment: what can be done / what can you do / what do you know?
  • Achievement: what has been done?
  • Potential: what could be done / what should be done about it?
  • Progress: what positive change has been made / what distance has been travelled?
  • Effort: what work has been put in?
  • Ingenuity/enterprise: what solutions have been found to challenges given available resources?
  • Environment: what are the circumstances in which the learner is trying to progress?

Of course, most formal qualifications and measurements focus on attainment, and secondarily on achievement. Others, like Arts Award, also look at progress, effort and ingenuity, whilst things like ArtsMark and, perhaps, teacher qualifications look at environment. From the broader perspective that we've been looking at in the MPR, all seven are perhaps equally significant. It's interesting to wonder how the seven might interoperate as well, to give an overall measurement: for example, if the environment is poor but the attainment is nationally average, you might expect the progress, effort and ingenuity to be high, in which case the learner is probably well on the way to being exceptional, despite the poor environment. (That's certainly the calculation that, for example, universities try to make.)
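
As a toy illustration of that kind of contextual calculation, here's a short Python sketch. The scales, thresholds and weightings are entirely invented – it simply writes out the conditional reasoning above, and is not anyone's real admissions formula:

    # A toy sketch of the contextual reading described above: the same
    # attainment is read differently depending on the environment behind it.
    # All scores (0-1), thresholds and weights here are invented.

    def contextual_reading(attainment, environment, progress, effort, ingenuity):
        """Return a rough verdict; not a real admissions formula."""
        # Average attainment achieved despite a poor environment implies
        # high progress/effort/ingenuity -- possibly an exceptional learner.
        if environment < 0.3 and attainment >= 0.5:
            drive = (progress + effort + ingenuity) / 3
            return "likely exceptional" if drive > 0.6 else "look more closely"
        # Otherwise, discount attainment by the support that produced it.
        return "strong" if attainment - 0.3 * environment > 0.4 else "average"

    print(contextual_reading(attainment=0.5, environment=0.2,
                             progress=0.8, effort=0.9, ingenuity=0.7))
    # -> likely exceptional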

How’s it being measured?

The nature and medium of assessments are also very diverse. Some measurements are summative (measuring at the end) – others are formative (along the way); some are individual – others collective; some are examined – others observed or reflected upon; some are formal – others informal. The nature and medium of assessment are also very influential, because some things are easier to measure in particular ways than others. For example, national summative assessments such as exams are typically good at measuring attainment (what you can do or know) but not so good at measuring achievement or effort. Things like creativity, enterprise, ingenuity, collaboration or leadership are more processes than they are products – it's often the process that's more important than its outcome. So trying to measure these things with summative assessments and tests can be difficult, and possibly counterproductive.

Why’s all this important?

Measurements have an enormous impact on how we learn, develop and progress and how we help (or hinder) other people to learn, develop and progress. For example:

  • when a learner is trying to achieve something, they reflect often or constantly on how they’re doing (self-assessment)
  • when a teacher helps a student, they’re constantly looking out for marginal and significant, short-term and long-term, immediate and gradual, relative and absolute improvements
  • when parents and peers respond to learners’ achievements, their responses will influence how the learner values that achievement and what they do with it next
  • when teachers and students are working towards some qualification or level, the content and criteria of the qualification influence what the teacher teaches and the student learns

Sometimes this impact goes unnoticed. That's not necessarily a problem – everyone doesn't have to be constantly fixated on how everyone is measuring them (including how they measure themselves) and on how they're measuring others. But ultimately what's being measured, and how, will affect what a learner learns. So if the array of measurements in place collectively leaves significant gaps, or skews development in a direction that's not optimal for the learner, then their ability to fulfil their potential – to be exceptional – is impeded. In other words, looking at measurements is a key part of building environments for progression. (And measuring them!)

Some example measurements and assessments

Here is a list of some of the measurements that have been cited during the MPR's work:

  • CYP releasing recordings of their work or distributing them through platforms like NUMU and getting feedback / ratings / sales
  • Programmes that include a significant amount of mentoring support for continual assessment/feedback throughout, like the work of South West Music School and Teenage Rampage
  • Young musicians working as young music leaders, and being able to see immediately and over the long term the impact of the musical/social skills on other young people
  • Arts Award, Children's University Passport
  • ArtsMark, Earlyarts mark (in development), Ofsted profiles
  • Graded Music Exams, theory exams, GCSEs, A-Levels, BTECs etc.
  • Teacher / other supporting adult feedback and the importance of it being well constructed and delivered
  • Competitions - school-based, local, regional, national, international (MfY, Teenage Rampage, Herts Songwriter 2012 etc.)
  • Facebook likes, Soundcloud listens, YouTube views etc.
  • Youth Music's generic outcomes measurement frameworks
  • Institution and funder auditions (some of which are very much about Achievement, but others are more about Potential, Progress, Ingenuity etc.)
  • CYPs' own driving self-reflection and self-criticism, their peer review and appraisal, their 'measurement influences', such as tribes/fashions, peers, teachers, parents, etc.
