Introduction

This paper reviews the use of tests of Australian school students’ abilities, concentrating on literacy and numeracy. Since 2008, nationwide annual tests held in May have required the vast majority of students in years 3, 5, 7 and 9 to participate, as summarised below.

This article is dated early October 2015. Towards the end of the same month, President Barack Obama wrote an open letter to American parents and teachers urging them to go easier on tests. The letter is essential reading, and listening, since it ends with an audio-visual presentation by the president. Please read and listen: Australia is in a similar predicament!

The idea of using high-stakes tests goes back at least to 1957 in America, when apprehension over US education standards escalated after the Soviet Union’s launch of Sputnik. This concern over a country’s competitiveness spread internationally, especially after 2000, when the triennial OECD Program for International Student Assessment (PISA) was launched to test 15-year-old school pupils’ scholastic performance in mathematics, science and reading.1

America in the new century has led other nations into “an era of strong support for public policies that use high-stakes tests to change the behaviour of teachers and students in desirable ways. But the use of high-stakes tests is not new, and their effects are not always desirable. ‘Stakes’, or the consequences associated with test results, have long been a part of the American scene,” and many states introduced schemes to develop minimum competency standards to reform schools and “ensure, in theory, that all students would learn at least the minimum needed to be a productive citizen.”2 During the same period, Australia became apprehensive that it might fall behind in the international competition from China, South Korea, and European nations such as Finland and Estonia.

The above quote on high-stakes testing indicates that these tests have controversial aspects, as will become obvious, even though some of the NAPLAN information is valuable and unique. The analysis that follows relies on several main sources:

  • Comments by academics
  • Comments by schools
  • Our own comments based on analysis of the cost-efficiency of the NAPLAN surveys and whether they tend to crowd out other approaches with the potential to improve Australian school education.

Summary of NAPLAN Approach

Since 2008, the Australian Curriculum, Assessment and Reporting Authority (ACARA), an independent statutory authority, has conducted annual tests in May of over one million students in years 3, 5, 7 and 9.3 The National Assessment Program – Literacy and Numeracy (NAPLAN) covers reading, writing, language conventions (spelling, grammar and punctuation), and numeracy. The 2014 NAPLAN report states that NAPLAN provides “important information about whether young Australians are reaching important educational goals.”4

NAPLAN reports that the tests are adjusted statistically so that the 2014 results can be compared with previous years, and across geographic, demographic and educational groups. All students at the same year-level are assessed on the same test items. The tests were developed collaboratively by ACARA, the state and territory governments, the non-government school sectors and the Australian Government.

NAPLAN also reports that tests are designed to broadly reflect aspects of literacy and numeracy within the curriculum in all school jurisdictions. The test questions and test formats are chosen so that they are familiar to students and teachers across Australia. National Protocols for Test Administration ensure consistency in the administration of NAPLAN tests by all test administration authorities and schools across Australia. Statistical analysis is used throughout, focusing in the national report on the standard deviation test to indicate the variability in student performances.5

The introduction to the 2014 report concludes (p iv): “NAPLAN tests are the only Australian assessments that provide nationally comparable data on the performance of students in the vital areas of literacy and numeracy. This gives NAPLAN a unique role in providing robust data to inform and support improvements to teaching and learning practices in Australian schools.”

This conclusion is disputed here because the statistical NAPLAN results are based on a very small part of the curriculum, one that bears little relation to the wide range of literacy and numeracy (and, by extension, other subjects including the arts) that gives school teaching its educational significance. Any similarity between statistical and educational significance may be accidental: statistical significance rests on mathematical formulae applied to an isolated test, whereas educational significance is a value judgment about the whole breadth of what schools teach, and is difficult or impossible to express in numbers.

This paper covers two main areas. It shows selected NAPLAN results, concentrating on the two main classes of literacy and numeracy, and it critically assesses their educational significance. Appendix 1 provides a set of statistical background tables on reading and numeracy offered to those who want further insights into the time series that have been built from 2008 to 2014.

Key Results from the National Assessment

NAPLAN results are reported using five national achievement scales, one for each of the NAPLAN assessment domains of reading, writing, spelling, grammar and punctuation, and numeracy. However, our focus is on trends, and the 2014 NAPLAN report only shows the annual time series from 2008 to 2014 for the subject areas of reading and numeracy.6 Each scale consists of ten bands, which represent the increasing complexity of the knowledge and skills assessed by NAPLAN from years 3 to 9. The stated intention of the NAPLAN reporting scales is that any given score represents the same level of achievement over time; for example, according to the NAPLAN report a score of 700 in reading represents the same level of achievement in every testing year.

The bands are related to national standards in Table 1. Six of the 10 bands are used at each year-level of students, and are then related to a national minimum standard. A keyword search for “minimum standard” found plenty of examples in relation to the detailed statistical results in the 365-page 2014 NAPLAN report but no further definition or discussion of the origin of the concept or its validity.

The exact quote on page v is (emphasis added): “The national minimum standard is the agreed minimum acceptable standard of knowledge and skills without which a student will have difficulty making sufficient progress at school. Students whose results are in the lowest band for the year level have not achieved the national minimum standard for that year. These students are likely to need focused intervention and additional support to help them achieve the skills they require to progress in schooling.” We guess that the main criterion was concern for quality education, but we could not find a description of how agreement was reached, including the relative roles of ACARA and the other stakeholders listed in the introductory section above.

The lowest of the six bands at each year-level is deemed to be below the minimum standard and the highest band is the highest possible for a given student year (band 6 for year 3, band 8 for year 5, band 9 for year 7, and band 10 for year 9). The minimum standard, as defined, is achieved in the second band (a small sketch after the list below illustrates the band structure). Table 2 shows four indicators for each year level for reading and numeracy, respectively:

  • The minimum standard is reached each year by the vast majority of students (92-95%, slightly higher for numeracy than for reading).
  • The remaining 5-8% of students failed to reach the minimum standard in 2014 (there are minor variations between year-levels, with the lowest failure rates happening in year 7 for both reading and numeracy).
  • The proportion of students just qualifying for the minimum standard rises for both subjects. For reading the rise is continuous from 8.6% in year 3 to 16.6% in year 9. The rise is a little more modest for numeracy but still continuous, from 10% in year 3 to 16.6% in year 9 (the latter proportion being the same for both subjects). In short, the proportion of students only just “making it” rises to one in six by year 9, from roughly one in 12 for reading and one in 10 for numeracy in year 3.
  • The fourth indicator in Table 2 was included to focus on excellence rather than the sufficient general achievement which dominates the NAPLAN analysis. Some of NAPLAN’s more interesting and valid results can be demonstrated by showing the proportion of students achieving the top score possible in their year-level (see next section). It shows clear differentiation between groups such as gender, Indigenous status, school location, and state and territory across the many tests conducted by NAPLAN. Overall, the proportion of students in the highest band declines rapidly for reading, from 24.5% in year 3, through 14.6% in year 5 and 10.4% in year 7, to only 5.8% in year 9. This sort of decline needs not just an agreed definition but an explanation.
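To make the band structure concrete, the following minimal sketch maps a band to the categories used in Table 2. The band ranges per year-level are taken from the NAPLAN description used later in this paper (bands 1 to 6 for year 3, 3 to 8 for year 5, 4 to 9 for year 7 and 5 to 10 for year 9); the category labels are our own shorthand, not ACARA terminology.

```python
# Band ranges per year-level as described in the NAPLAN report; the labels are
# our shorthand for the definitions used in Table 2.
BANDS = {3: range(1, 7), 5: range(3, 9), 7: range(4, 10), 9: range(5, 11)}

def classify(year_level: int, band: int) -> str:
    bands = list(BANDS[year_level])
    if band == bands[0]:
        return "below minimum standard"
    if band == bands[1]:
        return "at minimum standard"
    if band == bands[-1]:
        return "top band (the 'excellence' indicator used here)"
    return "intermediate band"

print(classify(3, 6))   # top band for year 3
print(classify(9, 5))   # below minimum standard for year 9
```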

In summary, the proportion of students with scores at or above the minimum standard is high, with only 5-8% failing to reach that standard. But there is a significant difference between year-levels in the proportion who are just at the minimum standard (rising) and the proportion who on these criteria are classified excellent (declining). Most students fall in the three intermediate bands between “just making it” and “excellent”, and these percentages develop a little differently from one year level to the next. For reading the proportion rises from 60.4% in year 3 through 67.1% in year 5 and 71.8% in year 7, and then declines to 69.7% in year 9. The equivalent residual for the three intermediate bands is more constant for numeracy, changing from 70.0% for year 3 to 71.2% for year 5, 69.1% for year 7 and 68.3% for year 9 (on average close to 70%, with no apparent trend through the years).

The tables that follow describe the differences in top scores among the various groups. Recognising the different patterns that emerge from Table 2, our focus is on excellence, both because of its intrinsic interest and because the differences revealed among the groups are among the most valid and useful to come out of the NAPLAN approach.7

Top Band as an Indicator of Excellence

The four tables in this section relate to reading and numeracy skills rated in the top band in the NAPLAN tests, for each of years 3, 5, 7 and 9. Each state and territory is represented, and consistent patterns emerge. Table 3 shows all students and the proportion of each gender in the top band, Table 4 the percentage of Indigenous and non-Indigenous students in the top band, and Tables 5 and 6 the proportions in the top band according to “geolocation”: metropolitan, provincial, remote and very remote school locations — all students in Table 5 and Indigenous students in Table 6.

In Table 3, the primary observation is a dramatic decline in the proportion of students in the top band as measured by NAPLAN’s reading test: from 24.5% of year 3 students through about 14% and 10% in years 5 and 7 to 5.8% in year 9. The pattern is different for numeracy, which falls from 14.6% in the top band in year 3 to 8.8% in year 5, but then increases to 11.8% in year 7 before settling at 9.2% in year 9. Using measures like the standard deviation liberally, ACARA evidently intends these statistics to rest on rigorous analysis, but the differences between these patterns are not explained in the commentary on the 2008-2014 time series on pages 300-302 of the NAPLAN report.

On the criteria set for the national assessment scale, agreed by ACARA and the other stakeholders in the NAPLAN survey, and with the focus on excellence adopted in this section, the rapid deterioration of reading excellence as children progress through school is not explained, nor is its mismatch with the numeracy pattern across the school years. Should there be a difference in excellence levels between literacy and numeracy? Wouldn’t that mean that year 9 school students are “better” at numeracy and maths than at reading and literacy? If so, is that carried over into years 10 and 12 and into tertiary education? Where is the evidence for that?

Excellence as such should be an important consideration, given the degree of competitiveness highlighted by the development of international comparisons (epitomised by the OECD Program for International Student Assessment, PISA), which has sent waves of apprehension through educational communities and appears to have been a factor in former Prime Minister Julia Gillard’s decision to introduce NAPLAN with the 2008 survey.8

Still concentrating on all students (gender is covered below), definite geographical patterns emerge. For reading, the Australian Capital Territory has the highest proportion in the top band in year 3 (33%), followed by Victoria and NSW, Tasmania, and then Queensland, South and Western Australia, tailed by the Northern Territory. This pattern largely persists through year 5, but the more senior years see a relative improvement in the ranking of NSW, with Victoria falling back. Western Australia’s ranking improves consistently from years 5 to 9, perhaps assisted by the exceptional economic conditions favouring that state through the mineral boom. The Northern Territory remains behind in all years.

The rankings are somewhat different for numeracy. NSW actually pips the ACT at the post, and the two are then followed by Victoria, WA, Queensland, Tasmania, SA and the NT. One consistent pattern is SA being last among the states. Queensland and Tasmania rank fifth and sixth respectively, WA comes across better in fourth position, Victoria is third, the ACT second, and NSW tops the eight states and territories on excellence in numeracy.

The above description should not be taken out of context. In general, the two subject areas show similar patterns. Combining them, NSW and the ACT vie for first position, followed by Victoria in third and WA in fourth position. Tasmania and Queensland are toss-ups for fifth and sixth, SA is seventh, and the Northern Territory is in last position.

These statistics give an approximate picture of the opportunities that exist for school students across Australia, but many things are left out, including the distribution of schools by jurisdiction (government, Catholic, independent), and the distances and relative importance of metropolitan, provincial and remote school locations. The position of Indigenous students, unhappily, is another significant factor. Finally, ranking itself is a normative or qualitative rather than exact measure. Nevertheless, scores on excellence, as measured by the highest national assessment band at each year-level, appear highly relevant given the Australian sense that, rightly or wrongly, we are losing out to other countries on educational quality. Excellence, as distinct from general quality, is apparently neglected in the NAPLAN report. It should be recognised as another highly significant indicator.

This geographically related detail is important in an overview, but other differences can be more briefly described in the rest of this section. More details can be gleaned, as needed, by anyone delving into the tables. Table 3, however, contains the results for each gender at each of the four year-levels. For reading, the percentage in the top band falls consistently for both genders; for numeracy less so.

In reading, girls do consistently better at the “excellent” level than boys, from year 3 through 5, 7 and 9. The opposite applies to numeracy, where a higher proportion of boys reach the highest level of student achievement. As already established, however, the percentage in the highest band for both genders is better maintained across the year-levels in numeracy than in reading.

Table 4 contrasts the proportion of Indigenous and other students reaching the highest band of NAPLAN’s national assessment scale. On literacy, more than one-quarter of non-Indigenous children in year 3 sitting the 2014 test were rated in the top band on this criterion of excellence, compared with only 5% of Indigenous students. By year 9, the excellence ratio had fallen to 0.6% for Indigenous students (about one in 170), compared with one in 16 non-Indigenous students. We have already queried the rapid decline in reading excellence computed by the national assessment scale.

The numeracy results are not much better, though they don’t show the same degree of deterioration as the year-level increases. Numeracy is consistently rated lower than literacy in the earlier school years but catches up in years 7 and 9. Still, only 0.8% of Indigenous students in year 9 were rated excellent on numeracy in 2014, equivalent to about one in 125 Indigenous students in the top band, compared with almost one in 10 non-indigenous students.

There are significant geographic differences here, some of which might be expected considering the different Indigenous population patterns. For reading, Tasmania tops the Indigenous scale in year 3 (10.9% in the top band), followed by Victoria, the ACT and NSW. WA and the Northern Territory are at the bottom, both areas with relatively large Indigenous populations.9 That alone does not explain why the percentages are lower for Indigenous students counted on their own; this is more likely to be a function of the relative importance of metropolitan versus remote school locations.

Whatever the reason, the underlying issue of discrimination should be addressed urgently, because the year 9 ratios sink to abysmal levels anywhere in Australia, though the same areas keep doing relatively better: Tasmania with 1.7% of Indigenous students followed by the ACT (1.2%), NSW and Victoria (0.7%). WA again trails the states (0.4%), and the NT clocks in at a minuscule 0.1% — equivalent to about one in 1,000 Indigenous students sitting the test.

The excellence patterns, while lower in the early years, are similar for numeracy. In year 3, Tasmania tops with 4.7%, followed by the ACT, Victoria and NSW. In year 9, the maximum ratio for Indigenous students was 1.3% (NSW), followed by Tasmania and Victoria. The Northern Territory supplied the only nil ratio found in Table 4, which in this case means less than 0.5%.

Summarising Table 4, non-indigenous students are about five times as likely as Indigenous students to reach the top band for reading in year 3, and 10 times as likely to do so in year 9 (within its lower overall ratios discussed above). For numeracy, non-indigenous students are some 10-12 times as likely to be in the top band in years 5, 7 and 9.

The impact of school location in 2014 is examined in Table 5 (all students) and Table 6 (Indigenous students). These tables cover years 3 and 9 only, but figures for the two intervening years are available as well.10

The evidence is clear. For reading in year 3, 26.8% reached the highest band in metropolitan schools compared with 19.1% in provincial schools, 13.3% in schools classified as remote, and 5.8% in “very remote” schools. The same pattern at much lower levels applies to year 9 students: 6.8% for metropolitan, 3.3% for provincial, 1.9% for remote and 0.7% for very remote schools. For numeracy, the year 3 ratios going from metropolitan to very remote declined from 16.4% through 10.3% and 6.6% to 3%. The year 9 numeracy ratios show the same steep gradient, from 10.9% in the top band in metropolitan schools through 4.6% in provincial and 2.3% in remote to 0.6% in very remote schools.

The year 3 findings on the top band for reading/literacy were highest in the ACT, followed by Tasmania, Victoria and NSW, with a similar pattern for provincial schools (not represented in the ACT). Victorian remote schools showed up most favourably, with as many as 20% in the top band compared with a national average of 13.3%. Victoria stands out as well in the numeracy part of Table 5. This may be a function of Victoria being a relatively compact area, coupled with a relatively advanced education system.

In summary, a metropolitan location is heavily advantaged (presumably even more so towards central metropolitan areas), while remote and very remote schools are heavily up against the odds. The state and territory governments are of course highly conscious of these inequalities, but fixing them is a major problem, not just for the teaching of Indigenous students but for all students.

Table 6 concludes this section. It is structured in the same way as Table 5 but covers Indigenous students only. Again, metropolitan schools lead year 3, with 7.5% of students in the top band (exceeded in Victoria, Tasmania, NSW and the ACT). The proportion of Indigenous students in the top band in provincial schools falls to 5.2%, with Tasmania leading. Tasmania also scores relatively highly in “remote” schools, way above the Australian average, presumably because there are few schools in this category in the island state, which also has proportionally more Indigenous people than any other state, and one or a very small number of such schools happen to cater well for Indigenous children.

The proportion in the top band for reading at year 9 declines drastically, with only two observations above 1% (Hobart, Tasmania and Canberra, ACT).

The observations for numeracy are generally lower in year 3, ranging from 3.7% in metropolitan schools to a mere 0.2% (one in 500) in very remote schools. By year 9, only metropolitan schools average more than 1% of Indigenous students in the top band (assisted by Hobart and Sydney, and to a lesser extent Melbourne and Perth); all provincial and remote categories scored less than 1%.

The subject matter in this section is based on the statistical work presented in Appendix 1. NAPLAN measures the achievement of all students tested in the survey on its literacy and numeracy national assessment scale, based on 10 “bands”, of which bands 1 to 6 are used in year 3, 3 to 8 in year 5, 4 to 9 in year 7, and 5 to 10 in year 9 (as described in the section on “Key Results from the National Assessment”). Appendix Table A1 shows that the annual variations between 2008 and 2014 were slight, as expressed by the standard deviation (SD), which is a tiny proportion of the average in any year, including years 3 and 5 in reading, which show the highest SDs. Another expression of the low variability between years is the median observation, varying between 99.89% and 100.40% of the annual average for reading, and between 99.85% and 100.40% for numeracy. This median is close to one (100%) if the trend is flat.11
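As a minimal sketch of how the variability measures quoted from Appendix Table A1 are derived, the snippet below computes the standard deviation as a proportion of the seven-year average and the median as a percentage of that average. The score series used here is hypothetical, not the published NAPLAN figures.

```python
from statistics import mean, median, pstdev

# Hypothetical seven-year series (2008-14) of average scores for one year-level;
# the published NAPLAN averages would be substituted here.
scores = [503.1, 505.4, 504.2, 503.8, 506.0, 504.9, 505.3]

avg = mean(scores)
sd_share = pstdev(scores) / avg        # SD as a proportion of the annual average
median_share = median(scores) / avg    # median as a percentage of the annual average

print(f"SD = {sd_share:.3%} of the average")
print(f"median = {median_share:.2%} of the average")
# A tiny SD and a median close to 100% of the average are what Table A1 reports,
# consistent with an essentially flat trend.
```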

The 2008-14 trends look flat for all combinations of year and subject matter (Charts 1 and 2). This suggests that subjecting more than a million students every year to these tests, which are largely unrelated to the main literacy and numeracy curricula and eat into the time available for teaching these and other subject groups, would be very difficult to justify in benefit-cost terms, not to mention the added stress on students, parents and teachers revealed by the many critical comments, which cannot be readily measured as financial costs. A triennial NAPLAN test would be sufficient to provide virtually the same information. Moreover, what is now essentially a census of students conducted by NAPLAN might be replaceable by a large, well-designed sample survey, using methodology already in place elsewhere, including at the Australian Bureau of Statistics.
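As a rough indication of why a sample survey could substitute for a census, the sketch below applies the textbook sample-size formula for estimating a proportion. It assumes simple random sampling, so a real design with stratification by state, sector and geolocation and with school-level clustering would need a larger sample; the 10% top-band share and half-percentage-point margin are purely illustrative.

```python
from math import ceil

def sample_size(p: float, margin: float, z: float = 1.96) -> int:
    """Students needed to estimate a proportion p to within +/- margin
    at roughly 95% confidence, assuming simple random sampling."""
    return ceil(z**2 * p * (1 - p) / margin**2)

# Estimating a top-band share of about 10% to within half a percentage point
# needs on the order of 14,000 students per estimate, against the more than
# one million tested annually.
print(sample_size(p=0.10, margin=0.005))
```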

Some minor apparent trends may be detected. Chart 1 shows overall reading achievement for each school year-level for each year from 2008 to 2014, and a dotted line showing the average across the seven years. For year 9 the dotted line is horizontal (no trend), and the apparent trends for the lower year-levels are much reduced or would disappear when 2008 is eliminated from the analysis. Arguably the first NAPLAN survey may not have been quite as perfectly executed in a technical sense as subsequent surveys, despite the NAPLAN report declaring all years fully compatible.

Trends are practically non-existent for numeracy (Chart 2), including year 5 despite a relatively low observation for 2008. There may be reasons for the apparent differences between the reading and numeracy scores that have been calculated, but any differences remain minor, and the idea that an expensive annual analysis is needed to capture them has to be challenged. If the trend is flat it is also quite predictable, and there is little point in confirming this annually.

Table 7 shows a statistic called “the nature of the difference”, which represents an attempt to go beyond standard statistical measures such as the standard deviation. According to NAPLAN 2014 (page iv), it was introduced in the 2014 comparison calculations to help interpret differences in results. However, NAPLAN’s adoption of this measure is based on a misconception.

The usual assumption in statistical analysis is that differences between two groups are due to chance, unless there is a rationale for why such differences might occur. This assumption is known as the null hypothesis (H0): sample observations are taken to result purely from chance rather than from some non-random cause, and testing then seeks to retain or reject that assumption.

“Non-random” implies that the observations come from different “populations”, a term referring to the group from which a sample is drawn. In simple terms, the alternative hypothesis (H1, also known as the research hypothesis) predicts a difference between the base and current populations associated with, say, a scientific experiment. If the difference attributable to the experiment shows up as a statistically significant change compared with the situation prior to the test, the alternative hypothesis is supported. If not, the null hypothesis wins out, at least until further experimental work suggests otherwise.

The NAPLAN report does not explain or justify why it emphasises the alternative hypothesis rather than the null hypothesis. It simply superimposes another measure, “effect size”, on the conventional statistical test based on the null hypothesis. Considering the modest slope of any trend, and the tendency for these trends to disappear across the seven years according to Charts 1 and 2, the added measure called “nature of the difference” is unlikely to be highly relevant; it also conflicts with scientific method by setting aside the null hypothesis without providing a rationale for doing so. The justification for dropping the null hypothesis is further eroded by the cost and inconvenience to schools of conducting annual full-size NAPLAN tests. See Appendix 1 for supplementary remarks.
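The distinction being drawn can be shown with a minimal sketch: a generic null-hypothesis test alongside an effect-size measure (Cohen’s d). This is not ACARA’s own calculation, and the two score samples are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores_2013 = rng.normal(loc=500, scale=70, size=50_000)  # hypothetical NAPLAN-like scores
scores_2014 = rng.normal(loc=502, scale=70, size=50_000)  # tiny true shift of 2 points

# Null-hypothesis test: is the difference distinguishable from chance?
t_stat, p_value = stats.ttest_ind(scores_2014, scores_2013)

# Effect size (Cohen's d): how large is the difference relative to the spread?
pooled_sd = np.sqrt((scores_2013.var(ddof=1) + scores_2014.var(ddof=1)) / 2)
cohens_d = (scores_2014.mean() - scores_2013.mean()) / pooled_sd

print(f"p-value = {p_value:.4f}, Cohen's d = {cohens_d:.3f}")
# With samples this large, even a trivial shift will typically register as
# "significant" (small p-value) while the effect size stays negligible
# (d well below 0.2), which is why an effect-size measure needs its own
# justification rather than being superimposed on the significance test.
```

With 50,000 scores per year and a two-point shift on a standard deviation of 70, the p-value will typically be tiny while d stays around 0.03, illustrating why statistical significance alone says little about educational significance.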

Cost of Annual Survey

The annual cost of running NAPLAN has been consistently quoted as more than $100m by numerous people criticising the tests but the actual cost analysis is hard to find. The most concrete evidence we have detected is in an article by Bethany Hiatt, education editor of The West Australian newspaper, ‘Tests lashed as $100m waste’ dated 15 May 2012, when the NAPLAN tests were launched for that year. It begins as follows:

“Nearly 100 Australian university academics have signed a letter criticising national literacy and numeracy tests that start today as having “little merit”.”

“As a group we are appalled at the way in which the Commonwealth Government has moved to a high-stakes testing regime … despite international evidence that such approaches do not improve children’s learning outcomes,” the letter says. “These tests have little merit given that they focus on assessment of learning rather than assessment for learning and they are being misused for … political agendas.”

The academics, who signed to support a “Say No to NAPLAN” campaign discussed in a subsequent section (“Other Comments”), said the tests cost $100 million a year, money that could help children with learning difficulties. This amount includes the effort schools put into preparing students for the survey, together with the disturbances allegedly caused by the diversion of resources away from the main curricula.

It is important to check whether there are any plausible cost calculations apart from the $100m estimate. The only alternative numbers we have are from a Senate inquiry in preparation for the 2013-14 Federal Budget estimates in a document responding to three questions on notice:12

  1. What is the total cost of developing, administering and reporting on NAPLAN to the Commonwealth Government? (Response: The Australian Government contributed $12.175 million to ACARA in the 2011-12 financial year to manage the national aspects of NAPLAN testing.)
  2. In relation to NAPLAN, what costs are borne by state and territory governments? (Response: ACARA is funded 50% by the Australian Government and 50% by the states and territories of Australia.)
  3. What is the cost of operating the My School website, in each year from 2010-11 to the current financial year; and estimated over the forward estimates? (Response: 2010-11 $1,577,907, 2011-12 $747,000, 2012-13 $705,000, and 2013-14 $726,000. The reduction in 2011-12 partly reflects savings achieved through bringing website development and maintenance in-house in 2011.)

These estimates add up to about $25m, a far cry from the $100m-plus estimated by the nearly 100 academics who signed the letter citing the NAPLAN tests as having little merit. Unless education authorities increased school budgets accordingly, it implies that roughly three-quarters of the total cost is carried by the schools themselves. This “cost” is difficult to establish in the absence of details on the survey that produced the $100m-plus estimate.

Bethany Hiatt of The West Australian highlighted the cost issue by quoting a respected private school principal who said that the NAPLAN literacy and numeracy tests cost $45 for each student and the money could be better spent on other educational needs. In a strongly worded note to parents, the former chair of the Association of Heads of Independent Schools of Australia WA branch and Tranby College principal Jo Bednall said the NAPLAN tests were a waste of money.13

She said Tranby spent $14,567 on the tests “last year”, “which is more than a little frustrating when the school doesn’t have a choice about whether to participate or not”.

There are plenty of comments from educationalists and others; in fact we have rarely seen so much criticism of a public-sector Australian project, which is saying a lot. We conclude that little useful knowledge would be lost by cutting the NAPLAN surveys back to a triennial basis, but it would also be beneficial to review the basic assumptions behind the survey and harmonise it with the broader principles of educating Australian school students.

As noted above, hard evidence of the cost to schools to account for the $100m or more is difficult to find, but the academic evidence is hardly made out of thin air. A senior lecturer in education at Murdoch University, Perth, WA, Greg Thompson, was awarded a $375,000 three-year research project in February 2012 to investigate the effects of NAPLAN on school communities. A paper co-written with Allen G. Harbaugh reports on a preliminary survey of teachers in WA and SA in 2012 generally expressing concern about the impact on pedagogy and curriculum. The paper still did not quantify the associated costs but concentrated on gathering evidence on teacher concerns.

In conclusion, the $100m cost estimate looks credible, but it would be good to have it verified, and we would appreciate hearing from anyone who has evidence to do so.

Our Own Critique to This Point

  1. Taking the most positive feature first, NAPLAN provides some unique results of which the identification of achievement scales is the most important: differences classified by gender, state and territory, Indigenous status, geolocation (distribution of schools within each state and territory), and some other variables such as language background other than English and parental education and occupation, which were left out of this analysis. All these features are strongly interrelated, capable of analytic treatment, and very important.
  2. The key issue is the NAPLAN survey itself, with its concentration on a set of isolated multiple-choice questions rather than the basic educational programs in schools, at an exorbitant cost. The current survey design and execution is at odds with the goals schools and education authorities set themselves for fostering excellence as well as developing the human resources needed to fill Australia’s future employment requirements.
  3. Putting NAPLAN on a triennial basis would provide major savings, which could be put to use elsewhere in the education sector or in other culturally related areas, with few if any disadvantages. The format would be retained unless it proves acceptable to replace NAPLAN with a major, well-designed sample survey, using Australian Bureau of Statistics guidelines and seeking its advice. It should be a precondition that the time series built since 2008 is preserved.
  4. The achievement scale developed for NAPLAN is a useful tool, but it needs to be associated with something less formidable than the current high-stakes tests which adversely affect the attitudes of students, teachers and parents. Stepping back, philosophically, from a preoccupation with “Australia losing ground internationally” may be a good start, especially if it can be replaced, or at least supplemented, with a stronger belief that Australia has something unique to offer the world. This would necessitate a change in government priorities to incorporate positive cultural planning and policy-making. What better way to launch this than by exposing Australia’s schools to this change?
  5. It is implied in the previous point that the NAPLAN surveys concentrate unduly on “sufficient general achievement” rather than promoting the “excellence” they measure through the top bands of the achievement scales. Both are needed for Australia to retain its competitiveness, just as the tertiary education system must be geared to produce eminent scientists, artists and other professionals as well as meeting general employment requirements.
  6. Flaws in the NAPLAN surveys revealed by the analysis of excellence (as measured by the top band of the achievement scales which provides vital unique data) include inconsistencies like apparent differences in excellence between literacy and numeracy in the senior school years, and missing opportunities associated with fundamental issues of discrimination, centred on Indigenous students but applicable to other issues such as the location of schools within a state or territory. These flaws need fixing, or at least explaining.
  7. Focusing on literacy and numeracy leaves essential cultural factors outside the scope of what constitutes a rounded education allegedly essential for future economic prospects. Preserving Australia’s cultural capital and helping it prosper nationally and internationally is a major requirement and governments at all levels should give it much higher priority.

Other Comments

Appendix 2 reviews two academic views, by Dr Justin Coulson and Professor John Polesel (the latter heading a literature review of what they call NAPLAN’s “annual bureaucratic extravagance”). Coulson discusses the NAPLAN tests that were launched in schools in May 2015 (results not yet published). Both concentrate on the annual $100m cost of the survey; both are highly critical and no further comments are needed here.

Appendix 3 (“And One from the Coalface”) is part of a Senate submission from a primary school teacher in Canberra, Remana Dearden. It concerns the difficulties schools have in explaining and administering the NAPLAN survey, and its irrelevance to teachers. Please turn to it both for her comments and for the cartoon she included, which seems to tell all that is wrong with the NAPLAN approach. Rightly or wrongly attributing the advice to Einstein, it depicts “our education system” demanding that we all learn to climb a tree, whether we are a monkey, fish, dog, bird or elephant, or else live our lives believing that we are stupid.

Appendix 4 summarises comments to a Life Matters program on ABC Radio National, dated 19 May 2015. It is background only but illustrates many aspects which bear on the subject of this article but largely defy formal analysis. It is another set of signals from the coalface.

We continue with a selection of views intended to be representative; there are many more on the internet. Like the Life Matters program, the first two comment on the 2015 tests.

Christopher Bantick14 teaches English literature at an Anglican grammar school for boys in Melbourne. His comments relate to ACARA’s reported decision to replace the paper-and-pencil approach with computerised tests from 2016. He notes that students’ fear of the tests leads school principals to recommend tutoring for NAPLAN to limit that fear. “My own students have been prepared for the NAPLAN tests this week; I have given them practice NAPLAN questions and assessed them. I expect the class to not be intimidated by the NAPLAN experience.”

“Where NAPLAN testing fails is that it is a Neanderthal blunt club of a tool to determine progress. … The essential point of NAPLAN is to identify schools, owing to their results over time, which may need special assistance and support with the establishment of skills. … As a secondary-school English teacher, my concern with NAPLAN testing is that it does not measure development. What about the boy who sits in the back row of my class who is struggling with his writing and reading? His development on a NAPLAN test will seem negligible but I know he has made significant progress. He, and many like him nationally, are understandably nervous about NAPLAN. Their individual academic growth will not be measured. Their confidence will not be measured and they will be clustered as a mere unit of a statistical number-crunched graph.”

“Let there be no mistake, computer marking would lessen my workload instantly. It would also give me printouts of neat data. But it would not allow me to see the germination of insight and understanding, or the surprise of something beautiful written from a tender heart that has taken courage to pen. That’s what NAPLAN can’t assess. Creativity can’t be number crunched, and computers don’t get it.”

Sydney education reporter Eryk Bagshaw noted: “Psychologists across the country are still witnessing high levels of stress in young children in the lead up to the NAPLAN exams despite repeated warnings that precautions need to be taken with students in high-pressure environments. As more than a million school children prepare to sit the exams on Tuesday, students as young as eight are getting so anxious they are vomiting.”15

“NAPLAN chiefs are urging teachers and parents to help calm down children across the country while psychologists continue to report increased levels of illness, sleeplessness and school avoidance. The warnings have followed years of principals voicing their concern that the burden of NAPLAN is harder on their youngest students, with those in year three confronting an entirely new format of testing at such a young age.”

NAPLAN’s chief administrator, Dr Stanley Rabinowitz, urged parents and teachers to “control the stress”. “Treat it as a normal day and move on,” he said.

The Bagshaw article includes 10 multiple-choice questions from NAPLAN, which can be viewed by following the link and scrolling to the end of the article.

The “Say No to NAPLAN” campaign was formed by a group of Australian academics teaching in universities. “As a group we are appalled at the way in which the Commonwealth government has moved to a high stakes testing regime in the form of NAPLAN, despite international evidence that such approaches do not improve children’s learning outcomes” (David Hornsby, 16 December 2013). By the end of September 2015 the list of academic signatories to the campaign’s letter of support had grown to 129 (counted at the website).

The 20 two-page papers supporting the campaign (mostly dated 2012) are listed here. They deal with a range of aspects and interested parties should take a personal look. We refer briefly to a few of the papers dealing with NAPLAN issues generally, plus two papers in the next section which are among those advocating a greater role for music and other arts in school education.

The first paper in the collection is ‘Inappropriate Uses of NAPLAN Results’ by Margaret Wu and David Hornsby. It reinforces the general criticism which our own analysis generates, and is representative of the general thrust of the “say no” campaign:

  • With around 40 test questions per test, NAPLAN only measures fragments of student achievement.
  • Bureaucrats may refer to the achievement gap between students (and between schools), but what they mean is a test-score gap. Since the test assesses very limited aspects of learning, the results cannot be used to make claims about overall achievement.
  • Student achievement should not be narrowly confined to achievement in numeracy and literacy only. Creativity, critical thinking and resilience are skills that NAPLAN doesn’t assess. In contrast, teachers do know about students’ wider abilities beyond numeracy and literacy.
  • Short tests cannot be used to rank student achievement because a student’s results in short tests can vary widely.
  • For assessments to be relevant to teaching and learning, what is being assessed should match what is being taught. But some states have not yet adopted the new Australian curriculum. It will be a long time before NAPLAN truly matches what is taught.
  • The NAPLAN tests are not diagnostic tests but standardised tests designed to assess and compare the overall achievements of very large groups, not individual students or schools.
  • Test scores cannot tell us whether a teacher or school is good or bad because many other factors influence the scores, such as poverty, parental support, aspirations, motivations and peer pressure.

The second paper, again by Hornsby and Wu, is “Misleading everyone with statistics”. All scores have a margin of error, which the NAPLAN reports themselves show to be large. The 95% confidence interval may surround a score of, say, 488 with a margin of ±54, so the “true” score may be as low as 434 or as high as 542.

The tests are, as the first paper showed, not diagnostic and don’t provide the kind of information required to inform teaching programs. Furthermore, the margin of error in those tests is so high that students may appear erroneously to be performing worse in successive tests, that is, being in different achievement bands solely due to statistical uncertainty.
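A minimal sketch of this point follows, using the figures quoted above (a reported score of 488 with a half-width of ±54 on the 95% interval); the band cut-offs in the example are purely hypothetical, chosen only to show how measurement error can straddle band boundaries.

```python
def confidence_interval(score: float, margin: float) -> tuple[float, float]:
    """Range of 'true' scores consistent with a reported score."""
    return score - margin, score + margin

low, high = confidence_interval(488, 54)
print(low, high)  # 434.0 542.0

# Hypothetical band boundaries roughly 52 points apart: the same student could
# plausibly sit one band lower or one band higher purely through measurement
# uncertainty, which is the successive-test problem described above.
hypothetical_bands = {5: (426, 478), 6: (478, 530), 7: (530, 582)}
for band, (lo, hi) in hypothetical_bands.items():
    overlaps = low < hi and high > lo
    print(f"band {band}: consistent with the interval -> {overlaps}")
```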

Role of Music and Other Arts

ABC Radio National’s morning program Life Matters on 5 September 2015 broadcast “Music education, more important than you think”. It was presented by Natasha Mitchell with Isabelle Summerson as producer. The initial paragraph on the website sets the stage, asking for listeners to call the program with their experiences:

“When you think about what classes are most important to children’s formal education, what comes to mind? Is it maths? English? Maybe science or history? Chances are you probably didn’t think of music, but maybe we need to rethink the value we place on music class. Maybe it’s not just a bludge lesson but critically important to children’s development of analytical skills and other areas of learning. Internationally renowned conductor and music educator, Richard Gill, believes music should be at the top of the food chain in children’s schooling (our emphasis). Agree? What are your memories of music class at school? Did you get a musical education in or out of school? How do you feel it influenced your relationship with music today?”

The three guests on the program covered a comprehensive range of backgrounds. Richard Gill, as already noted, is conductor and music director of the Victorian Opera Company and advocates the view that the school curriculum should be book-ended by music at one end and physical activities at the other. Margie Moore is known to this Knowledge Base in connection with the outback NSW festival Moorambilla Voices and is listed in the program as arts and music consultant with Music Australia’s Music: Count Us In program. The trio of guests was completed by Randy Glazer, senior music facilitator at Street University, Mt Druitt in Sydney’s west, who works with young people and others outside and after school.

Music and the arts appear to have had something of a renaissance among commentators in Australia, which may be partly associated with the negative reputation acquired by NAPLAN. We certainly did not expect two of the 20 two-page papers in the Say No to NAPLAN campaign to be specifically advocating music and the arts. Much more is needed, of course, for the arts and cultural policy to be properly recognised.

Robyn Ewing is Professor of Teacher Education and the Arts at the University of Sydney. Her contribution to the “Say No to NAPLAN” campaign is ‘The risks of NAPLAN for the Arts in Education.’

“The Arts are as old as human civilization and they enrich our lives in a myriad of ways. Quality arts experiences can and should have a profound effect on children’s lives and life chances and therefore should be an important part of the school curriculum.”

“Over the last fifteen years a succession of international research reports have clearly demonstrated that children who engage in quality arts processes, activities and experiences achieve better academically; … “

“Yet with increasing emphasis on high stakes testing such as NAPLAN in Australian schools, the Arts will continue to be relegated to the margins of the mandated curriculum. Those subjects that … can be measured by multiple choice testing will be given increasing priority. Arts, poetry, creative writing, music-making, aesthetic appreciation and dramatic performance cannot easily be graded after a thirty to forty minute test.”

“The kind of engagement with ideas and processes inherent in all Arts disciplines (reading, dance, drama, literature, media arts, music and visual arts) helps develop children’s already rich imagination and creativity.”

“It is the arts processes and the making or creating rather than the final outcome … that is the most important learning because that making process will inform the next one and provide opportunities to extend and amplify understandings.”

Conductor and music director of the Victorian Opera Company, Richard Gill is hard-hitting in ‘Wake up Australia, or we’ll have a nation of unimaginative robots’ (2011).

“I want to make my stance very clear from the outset: NAPLAN and My School have NOTHING absolutely NOTHING to do with the education of a child. This abhorrent and insidious method of assessing children, teachers and their schools needs to stop now.”

“[They] will be subjected to a style of teaching which is directed exclusively to producing satisfactory results in the NAPLAN tests and consequently scoring high ratings with My School.”

“Screaming the words literacy and numeracy from Canberra does not constitute having an educational policy. In fact the race to become the most literate and numerate schools with the best rankings nationally is exacting a terrible price.”

“Evidence is now available that schools all over the country are cutting back on arts education to devote more time to subjects which will make children literate.”

“Activities used in teaching NAPLAN tests destroy individuality, stifle creativity, stultify thought… . The very things that promote literacy and numeracy are the arts, beginning with serious arts education in the early years. If we want a creative nation, an imaginative nation, a thinking nation and a nation of individuals, then we must increase the time for arts education especially music education. If we want a nation of non-imaginative robots who can do NAPLAN tests then we are well on the way to achieving that condition.”

“Music … requires the student to have a capacity to work in the abstract, an ability to work across several skill areas simultaneously and the ability to rationalise this verbally. Children’s involvement in musical activity has a profound effect on the development of a child’s general learning.”

“Wake up Australia before it’s too late.”

Music Australia’s senior writer and editor of its Music Journal Graham Strahle wrote on 14 August 2014:

“One of the continuing concerns about NAPLAN is that it takes teachers’ attention away from subjects that it does not test. That includes music, along with other arts subjects, foreign languages and history. The consequence is not just that children may be under-achieving in these subjects but that their whole development may be negatively impacted.”

Like the present paper, Strahle cites Greg Thompson, Allen G. Harbaugh, Nicky Dulfer, Justin Coulson and Richard Gill in promoting arts education as against NAPLAN’s negative impact on music and creativity. Music Australia chief Chris Bowen on 30 September 2015 welcomed the incoming communications and arts minister in the Turnbull cabinet, Mitch Fifield, agreeing with Joanna Mendelssohn, who welcomed the shift away from “a very patrician approach to the arts as items to be consumed by the professions in their leisure hours” during Senator George Brandis’s reign in the portfolio. Mendelssohn continued (20 September):

“In his communications ministry Fifield will have to deal with the NBN and the digital revolution, and this fits in very well with some of the concerns of different areas of the arts. … I only hope Brandis’ Program for Excellence in the Arts is quietly abandoned and due process is restored.”

The omens point to a better future for lobbying against NAPLAN, in favour of a greater role for arts education and a stronger cultural policy in Australia.

Hopes for an Australian Cultural Policy

This Knowledge Base has for several years pointed to the potential that culture holds for an economy. Recognition of our cultural assets and the need to protect and enrich them was part of a lecture we gave as early as 2007. It demonstrated a close parallel with another vulnerable type of asset, ecological capital, because both contain such large non-renewable elements. But Australia’s cultural and environmental assets are also rich and, given proper high-priority support, robust.

The talk in 2007 has passed the test of time in one essential respect, as shown by this definition: “Cultural capital is the sum total of a country’s tangible and intangible cultural assets not already counted as other forms of capital. Culture is here defined widely not just to include museums and concert halls, music and the arts, but also the ambience which makes up the cohesive power of a society, its traditions and norms.”16

Our current set of music scenarios for the next two decades outlines the big difference in prospects between nurturing our rich cultural capital and government policy largely ignoring it. Professor Julianne Schultz is a leading observer of cultural policy in Australia. Her 2015 address Comparative Advantage: Culture, Citizenship and Soft Power testifies to that and is strongly recommended reading. Her views are very similar to our own, despite having resulted from different development paths.

Julianne Schultz notes that politicians often equate cultural policy with the arts. “A more sophisticated way of framing this builds the links between the creation of art of intrinsic value, and the commercialisation of related products and services – rather than considering it as a binary option.”

While she was advising the Australian government during its attempt to replace the 1994 Creative Nation cultural plan in 2013, “considerable effort went into ways of defining the activity that could derive from investment in artists. The best analogy was to equate this to the investment in pure scientific research. It may have a commercial and instrumental value, but the research itself is of singular importance. In Australia this debate has been stymied by equating culture with arts defined quite narrowly as the non-commercial sector.”

We totally agree with Julianne Schultz that there are more efficient ways of “meeting the competition” from other countries than those pursued in general trade policy, currently in the doldrums partly because of the collapse of the mineral boom. They include assessing Australia’s strengths in the cultural area and working out how to benefit from them. Schultz points out that the cultural sector keeps growing strongly globally, in contrast to commodity trade. Australia is well placed by its geographic location and its own long cultural traditions in a variety of forms. But culture is treated as a political stepchild, whereas a unified portfolio would aim at bringing the various components together to reinforce each other:

“In 2009 UNESCO devised and adopted a statistical framework that was designed to capture the scale of activities and by providing an agreed international definition, make comparisons, and assessments of success more robust. The framework takes the major areas of cultural activity and divides them into six broad cognate groups and two related domains: heritage (which includes archaeological, physical, environmental, structural and intangible dimensions), performance (theatre, music, festivals), visual arts (from fine art to photography), audio-visual (film, tv, video), publishing (books, newspapers, magazines, libraries), design (fashion, architecture, graphic design, advertising), tourism, sport.”17

Relative to NAPLAN in the current context, culture, including arts education, is crowded out: the $100m annual cost leaves little space for other possibilities, though it will have to if the widespread critique of the scheme is to have any effect. Again, policy-makers may have to look more widely, taking in science and technology, which do not explicitly enter the Schultz framework in the previous paragraph.

The legitimate question is, how good or bad are we really when it comes to science and technology? We have a world-class organisation called the CSIRO18 and fine universities everywhere. The issues confronting us are very big, all with cultural overtones: some associated with the prevailing trends towards inequality, others with a lack of political understanding that economics depends on natural and cultural ecology and not just on the “classical” inputs of machines and humans, to take but two. Our largest ally, the United States, has similar apparent weaknesses but is also the world’s most advanced community when it comes to tertiary research.

Australia and America certainly cannot afford to be complacent; both need to think laterally and not just be hung up on matters like the international PISA tests, which seem to have scared our educators and their governmental masters so badly. Contributions like Julianne Schultz’s would help immensely if those on the hill would only listen. Upgrading the abysmally low priority given to cultural policy would be an essential first step.

From an arts-related perspective, cutting back on the annual $100m cost of conducting NAPLAN would leave more than ample resources to devote to the stepchild of Australian statistical research: music and the other arts, which receive only a fraction of NAPLAN’s funding. It would enable Australia to rid itself of the overtone of fear that it is lagging behind other countries in education quality, and to start concentrating on its strengths by conducting an internationally oriented cultural policy incorporating all its aspects and, as Schultz recommends, unifying the portfolio in a common promotional thrust.

The main hope is that the bad days characterised by the Commonwealth budgets of 2015 and 2016 are coming to an end. They saw our main funding organisation, the Australia Council, drastically cut back (especially in the most creative activities showing new ways for the arts to develop), though the new Turnbull government has still to be tested on this.

Conclusion

This paper goes to greater lengths than most other topics covered in the Knowledge Base because of the complexities and deep consequences associated with the issues. To ease reading while retaining the case for change, much of the material has been relegated to footnotes, appendices and background commentary. The key narrative in the body of the report is all considered necessary for understanding how and why the topic has become so dominant, and why it is particularly important for arts-related school education including music.

The following points represent the synthesis. See “Our Own Critique to This Point” for a more elaborate version.

  1. High-stakes tests like NAPLAN put undue pressure on schools and teachers, which is transmitted to students and parents
  2. Key issue: the plausible annual cost of NAPLAN is $100m, yet the tests cover only a very small part of the curriculum and divert schools from their true educational mission
  3. Results are similar from year to year, suggesting that the annual tests could be replaced by triennial tests or a well-designed sample survey
  4. Such a survey could incorporate the most valuable aspect of NAPLAN, the identification of an excellence measure that provides important data on gender, Indigenous status, state/territory, and school location
  5. Music and the other arts are particularly affected, when they should be spearheading a massive change in government policy towards international cultural trade.
  6. Suggestions on how to use the money saved by cutting down on NAPLAN are bound to be numerous. To secure a stake in the process, music, and the cultural sector generally, should start preparing for the change now.

Appendix 1: NAPLAN 2014 Statistics

These statistics were compiled in support of the analysis presented in the body of the report. The basic structure is determined by the achievement data for each year-level, for reading and numeracy respectively. We designed eight worksheets based on the NAPLAN 2014 data: “R Year 9” (and 7, 5 and 3) and “N Year 9” (and 7, 5 and 3).

The structure of each worksheet is identical. The basic top left-half section shows annual statistics 2008-14 for all students, and by gender, Indigenous status and state/territory. These are averaged in the last column of this section, printed in red. The analysis of seven years of data covers each school year in the annual NAPLAN surveys. Please click the pdf icon to show four tables covering reading and four covering numeracy, each on a separate page. Commercial software such as Adobe Acrobat can convert the file back to Excel if required. Alternatively, we can email the Excel file (NAPLAN analysis.xlsx) to interested readers.

The gender difference is based on the arithmetic averages of male and female achievements, because we are interested in the absolute differences in the distributions and therefore treat each gender as a separate population. In the worksheets, the gender analysis is expressed as male achievements as a ratio of total achievements.

The Indigenous achievements are shown as the ratio to non-Indigenous achievements, again based on arithmetic averages for the two main groups.19
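
To illustrate, the following is a minimal sketch of the ratio calculation, using hypothetical scale scores rather than the actual worksheet values; the gender comparison is analogous.

```python
# Minimal sketch of the ratio calculation described above. The scores are
# hypothetical placeholders, not the actual NAPLAN 2014 worksheet values.
from statistics import mean

indigenous_scores = [330.5, 334.2, 338.8]        # hypothetical annual averages
non_indigenous_scores = [420.1, 423.6, 426.2]

# Indigenous achievement expressed as a ratio of non-Indigenous achievement,
# based on the arithmetic averages of the two groups.
ratio = mean(indigenous_scores) / mean(non_indigenous_scores)
print(f"Indigenous/non-Indigenous ratio: {ratio:.3f}")
```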

The state and territory findings are related to the arithmetic average for all eight states and territories in the bottom part of the worksheet. The columns “relative to average” (printed in green in the right-hand part of each worksheet) provide the basis for deriving conventional standard deviations and medians for each statistical series (printed in red at the extreme right). The analysis is summarised in the following four small tables.
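
For readers without the Excel file to hand, the sketch below (again our own, with hypothetical annual values) indicates how a “relative to average” column and the summary statistics printed in red might be reproduced for one series.

```python
# Minimal sketch of the "relative to average" column and the summary
# statistics for one series (e.g. all-students reading at one year-level,
# 2008-2014). The annual values are hypothetical, not the worksheet data.
from statistics import mean, median, pstdev

annual = [400.5, 414.3, 414.0, 415.7, 419.5, 419.3, 418.3]   # 2008 .. 2014

series_average = mean(annual)
relative = [value / series_average for value in annual]      # "relative to average"

print("SD of the ratios:    ", round(pstdev(relative), 3))   # cf. the small SDs quoted later
print("median of the ratios:", f"{median(relative):.2%}")    # cf. medians close to 100%
```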

Table A1 shows average achievement for reading and numeracy separately, for all students, for each of the seven survey years and each year-level. Averages, standard deviations and medians are derived from this. The standard deviation measure of statistical variability (SD) is generally tiny, but it does indicate what might be interpreted as trends, as shown by the larger SDs for years 3 and 5 in reading and year 3 in numeracy. NAPLAN’s use of the “nature of the difference” in Table 7 of the main report attempts to take this further, but the concept is not justified and conflicts with scientific method. The median observations in Table A1 also remain close to each average, ranging from 99.85% for year 9 numeracy to 100.40% for both reading and numeracy in year 3.20

The “nature of the difference” concept adopted by NAPLAN (incorrectly, because the initial and current years are part of the same group (“population”) under the generally accepted null hypothesis) distinguishes between five categories of change in average achievement between a base and a current year (2008 and 2014 here). The current average achievement can be “substantially above” or moderately “above” the initial finding, it can be close to it (not statistically significant), or it can be “below” or “substantially below”. In Table 7, no results are below or substantially below the base year, nor are any “substantially above”. Of the 52 criteria measured in Table 7, 33 showed no statistically significant changes for reading, while 19 moved upwards to a moderate extent. For numeracy, as many as 46 changes were not statistically significant, leaving only six with moderate increases.
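
To make the mechanics concrete, the sketch below is our own illustration, not ACARA’s procedure, of how such a classification combines a significance test with the effect-size thresholds quoted in footnote 20. The choice of a two-sample z-test, the function name and every number in the example are assumptions made for illustration only.

```python
# Illustrative sketch only (not ACARA's code): combining a significance test
# with the effect-size thresholds quoted in footnote 20 to reproduce a
# "nature of the difference" style classification. All numbers are hypothetical.
from statistics import NormalDist

def nature_of_difference(mean_base, mean_curr, spread, n_base, n_curr, alpha=0.05):
    """Classify the change between a base year and a current year."""
    # Step 1: two-sample z-test of the null hypothesis of "no change".
    se = spread * (1 / n_base + 1 / n_curr) ** 0.5
    p = 2 * (1 - NormalDist().cdf(abs(mean_curr - mean_base) / se))
    if p >= alpha:
        return "close to the base year (not statistically significant)"
    # Step 2: effect size = difference between means relative to the spread.
    effect = (mean_curr - mean_base) / spread
    if effect >= 0.5:
        return "substantially above"
    if effect >= 0.2:
        return "above"
    if effect <= -0.5:
        return "substantially below"
    if effect <= -0.2:
        return "below"
    return "statistically significant, but effect size too small to matter"

# Hypothetical example with NAPLAN-like scale scores and cohort sizes:
print(nature_of_difference(mean_base=419.0, mean_curr=435.0,
                           spread=70.0, n_base=250000, n_curr=255000))
```

Note that the sketch only reaches the effect-size step once the null hypothesis of no change has been rejected, which is the sequencing argued for later in this appendix.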

Any trend, judging from this, is upward. Most of the findings judged statistically significant by the “nature of the difference” tests relate to the geographical distributions, concentrating on Queensland, WA, Tasmania and the ACT for both literacy and numeracy. Significance was also attributed to reading in years 3 and 5 for the total sample, but not extending to years 7 and 9, and applied to each gender and Indigenous status. It might be argued that these categories are plausibly leading some modest growth in average achievement, to some extent representing a catch-up with the other states and territories and with non-Indigenous students, although there is still a long way to go. The argument fails if the “nature of the difference” test is statistically invalid.

The largest conventional SD findings for reading at year 3 applied to Queensland and the NT, and to Indigenous students. Probably more significantly, all SDs were substantially reduced for reading as year-levels progressed, and more modestly for numeracy. Crude average SDs in the Appendix 1 worksheets for reading were .019 in year 3, .017 in year 5, .008 in year 7, and .006 in year 9. The comparable findings for numeracy were .011 in both years 3 and 5, and .007 in years 7 and 9.

The “null” hypothesis discussed in the earlier section headed “Flat Annual Trends” is generally assumed to be the appropriate one until evidence indicates otherwise. The usual null hypothesis is that sample observations result purely from chance. The “alternative” hypothesis is that sample observations are influenced by some non-random cause, which must be explicitly justified.

The null hypothesis is the more straightforward: the only statistical probabilities to be calculated are those of chance effects. The null hypothesis must be tested, and rejected, before alternative hypotheses are introduced, including how, in effect, the NAPLAN population in the current year could come to differ from the population in the base year. We can find no evidence that the “nature of the difference” analysis was preceded by such a test and rejection of the null hypothesis that the differences between years are due to chance.

The main conclusion is that superimposing additional constructs such as the “nature of the difference” analysis on a statistical analysis based on conventional standard deviations and significance testing is not legitimate without prior testing of the null hypothesis. There are some relatively minor changes, including improvements in the assessment of students in Queensland, WA, Tasmania and perhaps the two territories, and among Indigenous students, which are bringing performance standards in these groups closer to the other states and to non-Indigenous students. There is still a long way to go, and adopting a method which in effect adds a new element without proper justification is not the way to proceed.

Some differences are real, of course, and show up year after year – in fact, the annual NAPLAN numbers show remarkably consistent patterns which vouch for their statistical compatibility.21 In Table A2, girls consistently exceed the average reading achievements of boys by 1-2%, but the average boy is marginally (1%) better at numeracy.

Table A3 shows that Indigenous students consistently lag considerably behind non-Indigenous students, but the difference is gradually reduced between year 3 and year 9. This is possibly associated with the dropout of the most disadvantaged students, who either do not take the test (allegedly having occasionally been encouraged not to sit it in the interest of better overall scores for the school) or have left school altogether by year 9.

There are also real differences in the performance of students in different states and territories (Table A4). The ACT consistently leads, with Victoria and then NSW following, ahead of Queensland, SA, WA and Tasmania. The Northern Territory is consistently at the bottom.

All this is of course valuable, as acknowledged in the body of this report, but it doesn’t warrant a constant check through an expensive annual survey.

Appendix 2: Summaries of Two Academic Views on NAPLAN

Dr Justin Coulson, Honorary Research Fellow at the Australian Institute of Business Wellbeing of the University of Wollongong, is a psychology researcher, author and speaker. His opinion piece in The Daily Telegraph of 20 May 2015 is headed ‘Just admit it: Naplan is a complete failure’. Full article here.

“NAPLAN 2015 begins on May 12 in schools around Australia. This annual bureaucratic extravagance in the name of quality education and enhanced transparency will, by some conservative estimates, cost Australian taxpayers $100 million.”

“What do we get for our $100 million? Improved teaching standards? Greater insight into school performance? Increased student interest in learning? Enhanced student resources or boosts to teacher and student wellbeing? Better resources?”

“Sadly, Naplan delivers little, if any, educational value. In a report for the Whitlam Institute, Professor John Polesel described how the test is shown to possess poor reliability. The tests provide poor quality data. There is widespread anecdotal evidence of cheating and other breaches of testing protocol (such as schools asking poor-performing students to remain at home so as not to lower the school’s score on the My School website).” End of Coulson quote. Comments from the Polesel review follow.

In 2012, Professor Polesel headed a team with colleagues Nicky Dulfer and Dr Malcolm Turnbull from the University of Melbourne which completed a literature review commissioned by the Whitlam Institute of the University of Western Sydney.

“Considerable evidence may be found in the international literature regarding the negative impact of high stakes testing on students’ well-being, including its potential to impact on students’ self-esteem and lower teachers’ expectations of children. There is also evidence of negative effects on service delivery and professional-parent relationships and stress, anxiety, pressure and fear experienced by students.”

“Detailed findings such as these are not available in the Australian context, although similar concerns regarding NAPLAN have emerged from various sources, including a recent Australian survey of principals and teachers in independent schools, the recent Senate hearing into NAPLAN testing and reporting and a recent Queensland Studies Authority report, which expressed concern at the capacity of full cohort testing to lower the self-esteem, self-image and long-term confidence of under-performing students, thus widening the gap between them and higher achieving peers.”

We add that the adverse evidence on NAPLAN has accumulated since the Polesel review was completed in 2012.

Appendix 3: And One from the Coalface

“I teach Year 5/6 at a public primary school in the A.C.T. This submission is based on my experience with NAPLAN testing (I administered NAPLAN to the Year 5 cohort in my school this year [2013] and have taught Year 5 students each year of my teaching career so far) and discussions with teaching colleagues. The main purpose of this submission is to make the point that NAPLAN is not a useful diagnostic assessment tool for teachers and it uses valuable funding which could be better spent elsewhere.”

“The results of NAPLAN do not tell me, or any of the other teachers in my school, anything we do not already know about the students we teach.”

“This quote and cartoon best sums up how effective a standardised test is in assessing the true abilities of a child (there is some contention surrounding whether or not Einstein actually said it but it is still applicable):”

“Last year, as a staff, we analysed our NAPLAN results to determine which areas we needed to ‘target’ for future teaching. The test was administered in May and we were doing our analysis more than 3 months later. Spending almost an hour trying to work out why so many of the year 3 students chose the same wrong answer for a multiple choice question about the use of an ellipsis was, in my opinion, a complete waste of time which would have been better spent on planning lessons or preparing resources.”

“The cost of implementing NAPLAN each year has been estimated to be around $100 million. My school can no longer afford to have a teacher librarian on staff. $100 million would certainly go a long way towards ensuring every school had a teacher librarian who fully utilised the limitless potential of the school library to foster a love of literacy and learning in every student.”

“There is A LOT more to the success and effectiveness of a school than the results of four tests administered once a year that narrowly assess a miniscule part of what children should be able to do and what they should be learning. NAPLAN provides interesting and useful population data but it certainly does not give a true indication of the knowledge and ability of a child. Education is about the whole student, and the stakeholders of our education system have a duty to provide our students with a world class one.”

Remana Dearden, 7 June 2013: From ‘The effectiveness of the National Assessment Program – Literacy and Numeracy: Submission 68’ (to NAPLAN Senate Inquiry).

Appendix 4: Comments on an ABC Program

The ABC morning radio program Life Matters featured NAPLAN 2015 on 19 May, shortly after the conclusion of the tests. It was presented by Natasha Mitchell and produced by Linda Raine. It prompted many comments in the week after the tests, raising a number of points that couldn’t be treated in the body of this paper. The excerpts below do not pretend to be analytic. The ABC website contains the full comments and the audio of the program.22

  • There is so much we (I’m a teacher) could do if the money spent on NAPLAN was spent on resources for helping kids who need it. Such a waste of resources.
  • I am a teacher and a parent. … There are already ‘snapshots’ available of how Australian students are going — think the OECD PISA tests, for one. But MOST importantly, think of the people who are qualified and involved — the teachers. The teachers, above anyone else, have the skills to assess a student’s progress. Given the huge amount of money Naplan costs every year, would it not be better to spend this money on teacher training, teachers aids to assist in the classroom, relief teachers for classroom teacher lesson preparation time and better salaries to make a career in teaching desirable to our top graduates, as is done in the countries that rate highest in the PISA tests. Thus ‘value adding’ to our education system and our society as a whole.
  • PISA tests are a controversial way of comparing some Australian students to some students in other countries. But doesn’t the Australian government need a standardised test mechanism to measure across-the board student progress and achievement in our own country? What other controlled and standardised measure could it use to evaluate policy and funding allocation?
  • My son has ASD (autism spectrum disorder) and last year was asked to stay home from NAPLAN year 9. So I wonder what is the data collected for the kids with disabilities.
  • Interestingly, my son is also autistic and is encouraged to participate in NAPLAN. We live in a rural area and were told that the schools outside the metro areas have to strive to do as badly as possible in order to gain funding.
  • Seems like private schools use NAPLAN as a tool used to decide whether to offer kids a place, why else would they ask to see them? Three schools we were interested in asked for NAPLAN results. Surely this isn’t what the tests are designed for! My daughter missed out on Year 5 test as she was ill, I certainly wasn’t going to send her in unwell to do the tests. Luckily she has a place at a school, although they did ask to see her Year 3 Naplan results.
  • If Naplan tests are not used to assess the child as an individual why have all private high schools I have enrolled my daughter for in Grade 7 asked for her Year 5 Naplan results.
  • I speak as a social researcher myself here — number crunchers will always welcome more numbers. Data is their professional lifeblood. And all data is, in its way, interesting. But not all data is ethical or socially constructive to collect and/or to publicly disseminate. Informed consent and ‘first, do no harm’ should be basic principles for all social scientists. Clearly, both dimensions are at issue here in the practice (if not the theory) of NAPLAN testing. Finally, Natasha suggested ‘all data speaks’. Well, yes, but some data mumbles; some data vacillates and misleads; there is data that ‘speaks’ first and foremost to itself … . NAPLAN is deeply flawed, frequently abused, and quite clearly from this program, socially divisive. What I would say to those responsible for its administration and continued use (some of whom, no doubt, mean well): back to the drawing board!
  • NAPLAN is like all assessments: it requires careful design, on-going monitoring, evaluation and fine-tuning. I think that this does happen, and that ACARA is capable of responding appropriately to teachers’ feedback. … In my state, Victoria, children have been given system-wide standardised tests for benchmarking since 2003. In 2008 Victorian children moved to the federal NAPLAN tests. No drama, no controversy, no anxiety, no hysteria, no Life Matters segments, no gaming of the system until 2010 when the My School website was launched and some NAPLAN data was tailored to be publicly available on the site for comparison purposes. I believe that this continues to be a very bad idea. Politicians have spurious justifications for publishing NAPLAN data, but My School has not led to better programs and funding to improve educational standards.
  • Thank you RN Life Matters for inviting me to briefly discuss why we have consistently withdrawn our children from NAPLAN. At the conclusion of our very brief (and nerve wracking!) chat I suggested that everyone should read the Whitlam Institute’s report “The Experience of Education: The impacts of high-stakes testing on school students and their families”. These are the concluding lines from the report: “Although NAPLAN testing is designed to improve the quality of education children and young people receive in Australia, its implementation, uses and misuses mean that it undermines quality education, and it does harm that is not in the best interests of Australian children.” I urge all parents to think critically about NAPLAN and make an informed decision.
  • NAPLAN costs millions of dollars to administer … that could be so much more effectively used in redressing need and inequality of outcomes. It is true to say however that giant monopolising corporations such as education publishers, computer software and hardware manufactures, and computer-driven marking of tests (Murdoch, Gates, Pearsons) are pushing the standardised testing agenda. They are set to make billions from this global regime. In the USA, where standardised testing is extremely malicious and widespread, it is fuelling a reaction but not from the top. Parents who have witnessed the destruction of their schools and children are beginning to organise with teachers to oppose what is a corporate and politically-driven control agenda.
  • Those of us who are knowledgeable in these things know only too well that if we do not resist, the NAPLAN definition of education will expand to all our detriment. Teaching and learning is very complex and more so than ever. I cannot think of many other jobs where workers are so persistently judged and assessed, day in and day out. Parents who say ‘harden up’ and ‘just do it’, ‘that it is life’, use the argument that justified the cane for giving a wrong answer. Today we do not accept the use of the cane just because that is the way it was.
  • Data is useful, but why test every child? As well as NAP-LAN, ACARA also runs the NAP-Science, NAP-ICT and NAP-Civics & Citizenship tests. Most people have never heard of them because they’re sample tests. It’s a big sample — about 12,000 students across the country — plenty big enough to give reliable data. Because they’re not publishing data about every school, they’re not high-stakes. Hence, they have the benefits of NAPLAN without all the negative unintended consequences of high-stakes testing.
  • The teacher who withdraws her children can see the test is indirectly a test of her efforts. When she says Naplan ignores content she implies the skills that should be developed with that content are unimportant.
  • The skills learnt in sitting the test are far more important than the results actually achieved. And for that reason cost in implementing the test should be minimal and more money spent on providing our children a wider education in a more diverse way than NAPLAN. Although some argue that the results are used to directly improve education I would argue they are used to help children get better results in NAPLAN tests the next time: “teaching to the test”. Maybe this is a skill our children need to learn to get through some aspects in life.
  • My child was also streamed according to her NAPLAN test yet she is preforming way above her NAPLAN result in the classroom tests. So is it really measuring what it is meant to measure?
  • What I think the children need is a more diverse education than what NAPLAN provides and the amount of time and energy focused on Naplan by schools and children is for one reason only – the “My School” website.

Author

Hans Hoegh-Guldberg, on Knowledge Base 2 October 2015.


References

  1. High-stakes tests have direct consequences for passing or failing — something is “at stake”. The stakes may have a direct impact on the person for passing or failing a high-school diploma or a driving test. But a test may be “high-stakes” based on consequences for others beyond the individual test-taker, as is the case with NAPLAN and similar tests. For example, if enough students at the same school fail a test, then the school’s reputation and accreditation may be in jeopardy. Testing under the US No Child Left Behind Act has no direct negative consequences for failing students, but potentially serious consequences for their schools, including loss of accreditation, funding, teacher pay, teacher employment, or changes to the school’s management. The stakes are therefore high for the school, but low for the individual test-takers (Wikipedia: High-stakes testing). The impact on teachers and students in such cases is associated largely with the anxiety of educational institutions, which doesn’t reduce the pressure.↩︎
  2. Audrey L. Amrein and David C. Berliner (28 March 2002), ‘High-stakes testing, uncertainty, and student learning’. Education Policy Analysis Archives, 10(18), p 3.↩︎
  3. In 2014, 1,039,412 students were tested for reading, including 50,094 classified as Indigenous (4.8%). The totals for numeracy were similar. The Indigenous ratio for reading declined by year-level from 5.1% in year 3, 4.9% in year 5 and 4.8% in year 7 to 4.3% in year 9. This would be due to two factors: a higher dropout rate for Indigenous students, and higher birth rates adding to the numbers in younger year-levels. The proportions found in NAPLAN 2014 compare with 3.0% nominated as Indigenous in the 2011 Population Census. The difference may also be partly due to Indigenous people choosing whether or not to nominate as such in the Census, whereas the schools presumably make the distinction in the NAPLAN tests.↩︎
  4. Australian Curriculum, Assessment and Reporting Authority 2014, NAPLAN Achievement in Reading, Persuasive Writing, Language Conventions and Numeracy: National Report for 2014, ACARA, Sydney, p 4. There is also a huge 656-page technical report which contains nothing but statistical tables and graphs. “The main aim of th[e] report is to describe and document the methodology used to produce the NAPLAN 2014 assessment scales and the methodology used to report on the performance of the 2014 student cohort.” (p 1) The critique in this paper is based solely on the National Report. The two reports together create a feeling of statistical overkill, not a common charge to be directed at official reports. Our own view, and that of many other commentators, is that the NAPLAN tests have distanced themselves from the true mission of school education in Australia. Rightly or wrongly, this view is conveyed to all readers of this paper.↩︎
  5. In 2011, a new persuasive writing scale replaced the previous format of narrative writing. As the two scales differ, there is a break in the time series data. The persuasive writing results should not be directly compared with the narrative writing results; writing is in any case outside the scope of this paper, which concentrates on the time series for reading and numeracy.↩︎
  6. The annual statistics also cover participation rates, which are not analysed in detail in this already wide-ranging paper despite their special importance in monitoring Indigenous students geographically and by year-level.↩︎
  7. There are more groups, including “LBOTE status” (language background other than English), parental education and parental occupation. “Geolocation”, which indicates the relative remoteness of schools ranging from metropolitan to very remote, is shown separately for Indigenous and non-indigenous students. Generally, these results back up the main findings in this paper: geolocation favours students in metropolitan schools, with the disadvantage increasing the more remote the school is; a non-English-language background on average is associated with some disadvantage; parental education and parental occupation are plus factors for school students. Most disturbingly, however, the almost 5% of students identified as Indigenous are by far the most handicapped of all groups, especially in remote locations. NAPLAN is commended for making this abundantly clear as a major educational policy issue.↩︎
  8. During her time as Australian Minister for Education she became a strong advocate for an “education revolution”, before becoming Prime Minister in 2010.↩︎
  9. The 2011 Census shows NT to be way out in front with 29.8% of the population being Indigenous. Tasmania was second (4.7%) ahead of Queensland (4.2%) and WA (3.8%). They were followed by NSW (2.9%), SA (2.3%), ACT (1.7%), and Victoria (0.9%). The average for Australia was 3.0% nominating themselves as Indigenous people.↩︎
  10. The MCEECDYA Schools Geographic Location Classification System is based on the locality of individual schools and is used to disaggregate data according to Metropolitan, Provincial, Remote and Very Remote (NAPLAN 2014). The acronym stands for Ministerial Council for Education, Early Childhood Development and Youth Affairs. Roger Jones submitted a technical report to the Performance Measurement and Reporting Taskforce in November 2004, but did not present the relevant school geolocation statistics. Anecdotal evidence and observation identify the NT, WA and Queensland as the areas with relatively the most remote and very remote schools. For example, 82 schools are classified as such in the Alice Springs, Arnhem, Barkly, Katherine and Palmerston/Rural regions of the Northern Territory. “Remote” is defined in Queensland as locations more than three hours’ drive from a regional/provincial or larger town. Half of Queensland’s schools are located in rural and remote areas.↩︎
  11. Or if it increases at a regular rate around the middle year which isn’t the case here.↩︎
  12. DEEWR Question No. EW0076_14, EW0076_14.pdf. DEEWR was the former Australian Government Department of Education, Employment and Workplace Relations. Education is currently (2015) part of the Department of Education and Training, with Senator Simon Birmingham in charge in the Turnbull cabinet.↩︎
  13. This Tranby is a Uniting Church co-ed school in Perth catering for all years K-12. There is another Tranby school in Sydney, a college for Indigenous people in the inner suburb of Glebe.↩︎
  14. ‘Robot grading of NAPLAN tests will further reduce their effectiveness’, The Age, Melbourne, 12 May 2015↩︎
  15. ‘NAPLAN: Parents and teachers urged to calm students down’, Sydney Morning Herald, 11 May 2015↩︎
  16. Hans Hoegh-Guldberg, ‘Four Kinds of Economic Capital’. Notes for an address to the Industrial Innovation, Entrepreneurship and Management Post-graduate Programme, University of Trinidad and Tobago, 8 February 2007.↩︎
  17. UNESCO Institute of Statistics, 2009 UNESCO Framework for Cultural Statistics (http://www.uis.unesco.org/culture/Documents/framework-cultural-statistics-culture-2009-en.pdf). The additions to the framework compared with the previous edition in 1986 are revealing, “[taking] into account new concepts that have emerged since 1986 in the field of culture, including those related to new technologies — which have so drastically transformed culture and the ways it is accessed — intangible heritage, and evolving cultural practices and policies.” (Foreword, p iii).↩︎
  18. The Commonwealth Scientific and Industrial Research Organisation.↩︎
  19. It makes no analytic difference, but we have expressed genders as percentages of total students and Indigenous achievement as a ratio of non-Indigenous numbers.↩︎
  20. The 2014 NAPLAN report explains that the “nature of the difference” combines the results of statistical significance-testing between two groups (for example 2008 and 2014) and the results of “effect size” based on the magnitude of the difference. The “nature of the difference” refers to whether: 1) the difference is statistically significant at the 5% level and 2) the effect size for the difference is of sufficient size to be worth further consideration. An effect size considers the difference between means in relation to the spread of scores for the groups to which those means refer. A difference with an effect size greater than 0.5 (i.e. more than half the spread) is considered ‘substantial’, and an effect size between 0.2 and 0.5 (i.e. more than one fifth of the spread) is considered ‘moderate’. (NAPLAN 2014, p 300.)↩︎
  21. Consistency in statistical treatment, however, is not sufficient reason for substituting a purely statistical analysis of a test largely unrelated to the main curricula for the educational value of running integrated curricula.↩︎
  22. This program was largely critical of NAPLAN (roughly 15-20% of the comments in favour, 75-80% critical), but another program in the ABC Radio National series Counterpoint takes a more positive view. It was broadcast on 9 September 2015 but hadn’t attracted comments by the end of the month when this paper was concluded.↩︎

Hans founded his own consulting firm, Economic Strategies Pty Ltd, in 1984, following 25 years with larger organisations. He specialised from the outset in applied cultural economics — one of his first major projects was The Australian Music Industry for the Music Board of the Australia Council (published in 1987), which also marks his first connection with Richard Letts who was the Director of the Music Board in the mid-1980s. Hans first assisted the Music Council of Australia in 2000 and between 2006 and 2008 proposed and developed the Knowledge Base, returning in an active capacity as its editor in 2011. In November 2013 the Knowledge Base was transferred to The Music Trust, with MCA's full cooperation.

Between 2000 and 2010 Hans also authored or co-authored several major domestic and international climate change projects, using scenario planning techniques to develop alternative long-term futures. He has for several years been exploring the similarities between the economics of cultural and ecological change, and their continued lack of political clout which is to a large extent due to conventional GDP data being unable to measure the true value of our cultural and environmental capital. This was announced as a major scenario-planning project for The Music Trust in March 2014 (articles of particular relevance to the project are marked *, below).
