NAPLAN is driving our students backwards
- Date: May 15, 2013
- Peter Job
The ranking system does more harm to learning than good.
When NAPLAN results are finally released, teachers and schools know from experience what to expect. Schools will be compared with each other by local media, some lauded as successes and others derided as failures.
Competition between jurisdictions will also be evident, with state and territory results compared, discussed and ranked, and conjectures and theories put forward to explain different levels of achievement. Students will take home reports to allow parents, supposedly, to monitor their child’s progress in relation to their peers.
In light of this, it is interesting to compare these results with another prominent test of educational achievement, the Program for International Student Assessment (PISA) tests of reading, mathematics and science for 15-year-olds run every three years by the OECD. Comparative results for states and territories are markedly different.
Victoria, which ranked second after the ACT in NAPLAN Year 9 reading in 2009, ranked only fifth in PISA. Queensland, which ranked a lowly seventh in Year 9 NAPLAN, ranked a more impressive third in PISA that year.
Of the two tests, there is good reason to believe PISA is the more reliable. As a sample test rather than a full cohort test, it is not subject to distortions brought about by accountability and teaching to the test.
Yet, to a large extent, this is to miss the point. A key rationale of NAPLAN has always been so-called transparency, with parents encouraged to judge schools by their comparative NAPLAN results posted on the My School website and the test supposedly used to identify successful and “failing” schools. Yet even states and territories display markedly different results in different tests of the same measure of the same age group held in the same year.
Studies in the US and the UK, both of which have conducted full cohort accountability testing for many years longer than Australia, have also indicated limitations in the use of testing for school comparisons or improvement. A study by the University of California, for example, found that test score volatility made it very difficult to accurately compare schools and that this results in “some schools being recognised as outstanding and other schools as in need of improvement simply as the result of random fluctuations”.
In the UK, a 2010 parliamentary report noted that the Achievement and Attainment Tables of school test results, the UK equivalent of the My School website, had “inherent methodological and statistical problems”, which led parents to “interpret the data presented without taking into account their inherent flaws”. As a result, schools felt constrained to teach to the test, narrow the curriculum and push students towards “easier” qualifications in order to maximise performance data.
In Australia, Melbourne University academic Professor Margaret Wu has also noted the limitations of NAPLAN as a test of individual student achievement or progress. The magnitude of measurement error in a test conducted on a single day makes it a problematic measure of individual student achievement on its own. When that uncertainty is compounded across two tests, a fall or rise in relation to peer test performance may well reflect simple statistical uncertainty, or particular circumstances on test days, rather than an actual change in achievement.
Parents should be aware that a quality report by a professional teacher encompassing a range of measures over time, preferably accompanied by a face-to-face discussion, is a far better indicator of student capabilities than a NAPLAN report.
Evidence of the damage of test-based accountability regimes is clear in the US and the UK. Subjects not tested, such as history and art, are marginalised, and even those that are tested are narrowed to improve test results. There is also evidence that such regimes create incentives to exclude students whom some schools perceive as liabilities, further increasing educational segregation and inequity.
Here in Australia, NAPLAN is increasingly unpopular with teachers, creating as it does an incentive to value test results over the long-term educational wellbeing of our students.
High standards of literacy and numeracy are a fundamental responsibility of schools and teachers. However, there is little evidence that testing accountability regimes such as NAPLAN improve these areas.
On the contrary, countries that rank above us in PISA, such as Finland and Canada, take a very different approach, emphasising a broad creative curriculum, equity and a high degree of teacher trust rather than the test-based model prevalent in the US and the UK. Both the latter countries fall well below us in PISA, and it is ironic that they, rather than those nations that do better, have served as models for change here.
Supporters of NAPLAN laud such an approach as “evidence based”, providing “hard” data to monitor achievement and assist in the preparation of road maps for improvement. The evidence simply does not support these claims.
NAPLAN is driving us backwards, not forwards.
Peter Job is an English and humanities teacher at Dandenong High School. His master’s thesis was “National Benchmark Testing, League Tables and Media Reporting of Schools”.