Positive Partnerships evaluation

Positive Partnerships is a Federal Government initiative supporting school-aged children with autism and those who care for them. Since its inception in 2007, the Federal Government has invested more than $63 million into the program.

dandolo was engaged by the Australian Department of Education and Training in the first half of 2019 to evaluate Phase 3 of the Positive Partnerships program, which was delivered by Autism Spectrum Australia (‘Aspect’).

Laura Williams interviewed Dr Michaella Richards to learn more.

LW: Positive Partnerships had been operating for 12 years before we were contracted to conduct an evaluation. Can you give me some context for the project and an overview of our role in it?

MR: In Australia, about 1 in every 70 people has autism. The needs of individuals on the autism spectrum are highly complex and individualised, and students with autism are four times more likely than their peers to require additional learning and social support services.

Positive Partnerships grew out of the ‘Helping Children with Autism’ funding package and is focused on strengthening the relationship between schools and families to improve the educational outcomes of school-aged students on the autism spectrum. This is achieved through a suite of face-to-face and online programs for parents, carers and school staff.

The program is well established and well resourced, and it is a rare Federal Government intervention into the primary and secondary education sectors. Given that autism affects all communities equally, it’s a universally popular initiative.

There have been three distinct phases during the program’s 12 years of operation:

  • Phase 1, 2008-12, was weighted more heavily towards providing professional development opportunities for school staff.

  • Phase 2, 2012-15, involved more programming targeted at parents and carers.

  • Phase 3, 2015-19, saw a reduction in professional development expectations for teachers and the introduction of training that combined the two cohorts.

Phase 1 and Phase 2 had each been evaluated by separate parties.

We came in at the end of Phase 3 to:

  • Ensure that funding had been used efficiently, effectively and economically; and

  • Inform decisions about the scope and development of a potential fourth phase of the Positive Partnerships program.


LW: How did stepping in to evaluate a program that had already been reviewed twice impact your approach?

MR: It had quite an impact on our methodology. We didn’t want to reinvent the wheel or over-consult by repeating work that had already been done, and we were conscious that a lot of existing information was spread across the previous evaluations, our client and the project delivery partner. We focused much more on synthesising information and analysing gaps.

We split our fieldwork into two stages that broadly aligned with the elements of our brief. The first focused on evaluating the program’s performance to date; the second focused on the future directions the program should take.

In the first stage, we pulled together the disparate pieces of information and engaged with experts to develop a best practice framework against which we could evaluate Positive Partnerships. We were able to show that:

  • There was ongoing unmet demand for an intervention to support educational outcomes for students with autism;

  • Focusing on home-school partnerships was an effective way to do this;

  • The program resources were respected and useful;

  • The program was broadly delivered in line with best-practice principles.

In the second stage, we considered how Positive Partnerships should evolve in response to major changes to the operating environment for autism support. This involved using a ‘first principles’ approach to revisit fundamental questions around the program’s value, target market, solutions and delivery.

Using our gap analysis from Stage 1, we were also able to engage subsets of the population that hadn’t been included in previous rounds of consultation. This was important, as it meant we were able to make observations based on a more representative sample of people involved across the program’s entire life cycle.

LW: What were some of the key takeaways from this process?

MR: The main value was in identifying the ‘secret sauce’ that made Positive Partnerships an effective program and making recommendations to tighten the program’s focus on the areas in which it could make the most difference.

Positive Partnerships had been conceived at a time when there weren’t many other service providers working on improving educational outcomes for children with autism. Over the course of a decade, however, this landscape had become more crowded, and we were starting to see some duplication of effort.

The program’s strong reputation and long tenure also made it difficult to pinpoint the elements that contributed to its success. Were all the program’s elements critical, or could its efficacy be attributed to a fraction of its operations?

We provided a robust set of recommendations to the Department that clearly spelt out:

  • drivers of Positive Partnerships’ impact;

  • areas it should focus on to ensure effective service delivery;

  • more specific definition of the target market;

  • refinements to the suite of products offered;

  • ways to leverage existing government infrastructure;

  • potential to apply the program to other disability areas.

LW: Why was this project iconic for dandolo?

MR: I think that comes down to the scale, visibility and force of emotion around Positive Partnerships.

This was a very popular, very well-funded program. Stakeholders were passionate in their support of it, and there was a strong anecdotal belief that the program was performing well.

In a public program of this size, however, it’s really important to be able to justify the use of resources. We approached the evaluation from the perspective of our client and the taxpayer to ensure that public funds were used to best effect.

We were able to use a robust evaluation to cut through that anecdote and assess whether the program was still having its intended impact ten years on: what was providing real value, what should we keep doing, and what should we stop doing?