1 Introduction

In 1989, the UN Convention on the Rights of the Child (UNCRC) enshrined for the first time in international law the right of the child to have their views heard and given due weight in all matters affecting them. In the intervening period this right, protected under Article 12 of the Convention, has gained increasing prominence and is core to how we presently engage and work with children. As a right of the child, Governments hold primary responsibility for its implementation. Progress in the implementation of a child’s right to be heard is aided by improved conceptualisation of Article 12 and a greater understanding of the measures needed to give effect to this right. While much is written on conceptualising Article 12 and the measures required to implement a child’s right to be heard, little has been written on how we measure or evaluate the implementation of this right and how this evaluation process can work in practice: what works well and what are the challenges and pitfalls. The use of human rights indicators, in this case child rights indicators, has long been advocated by the United Nations, the European Union and leading scholars in the field (see, for example, United Nations Special Rapporteur on Health, 2003; Fundamental Rights Agency, 2010; Ennew & Miljeteig, 1996). They are a powerful tool to hold Governments to account (UN Development Programme, 2000). In the context of a child’s right to be heard, Lansdown (2010, 2018) and Lansdown and O'Kane (2014) have pioneered work in this area. Lansdown has consistently advocated that, in addition to creating an environment that is conducive to respect for children’s participation, it is necessary to develop indicators and benchmarks against which to measure the implementation of a child’s right to participate and progress in this regard.

This article documents the development and implementation of a summative evaluation framework used to measure the extent to which a child’s right to be heard was embedded in the culture and operations of Ireland’s National Child and Family Agency, known as Tusla. In documenting the development of the Framework, the article introduces the evaluation study leading to its development. It provides an overview of the Council of Europe Child Participation Assessment Tool and the Lundy Model of Participation, which informed the framework, and outlines the process of selecting the 12 structural, process and outcome indicators to measure the extent to which a child’s right to be heard was embedded in Tusla, the Child and Family Agency. Following this, the article provides an overview of the application of the framework and the quantitative and qualitative data sources used to produce evidence of whether the indicators were achieved. A brief focus on the outcome of the study precedes the concluding discussion on the successes and challenges in the implementation of the framework.

2 Conceptualising and Implementing a Child’s Right to be Heard

Since the adoption of the UNCRC, a widespread practice has emerged of colloquially describing the right of a child to be heard as ‘participation’ (UN Committee on the Rights of the Child, 2009). While the term ‘participation’ can mean all things to all people, various models have been developed to unpack it, Hart’s (1992) ladder of participation arguably being the most widely known. Building on this and previous models, Shier (2001) developed a model capturing different levels of participation, ranging from adults listening to children to children sharing power and responsibility for decision-making. The value of Shier’s model is that it explicitly identifies the minimum level of participation required of government and government actors to be compliant with Article 12 of the UNCRC, namely that children’s views are taken into account. While Shier (2001) and Hart (1992) contextualise a child’s right to be heard within a wider spectrum of practices referred to as ‘participation’, Lundy (2007), and later Bouma et al. (2018), focused solely on conceptualising children’s participation from a rights perspective. Lundy (2007) developed a model (hereinafter the Lundy Model) to clarify the scope of a practitioner’s obligations when implementing the child’s right to be heard as set out in Article 12 of the UNCRC. Lundy identified four core concepts relevant to the realisation of this right: space, voice, audience and influence. These four concepts are relevant to the implementation of a child’s right to be heard in both personal and public decision-making, the former referring to the right of the individual child to have their views heard on matters directly affecting their lives and the latter referring to the right of children collectively to have their views heard in public decision-making, such as service planning and review.

As well as improved conceptualisation of Article 12, the implementation of a child’s right to be heard is further aided by improved understanding of the measures to be taken to give effect to this right. The UNCRC obliges State parties to the Convention to undertake all appropriate legislative and administrative measures for the implementation of children’s rights (Article 4). The guidance issued by the UN Committee on the Rights of the Child elaborates further. General Comment 12 explains that for State parties to meet their obligations under Article 12 they should adopt strategies such as providing training on Article 12 and its application in practice for all professionals, ensuring appropriate conditions for supporting children to express their views and ensuring these views are given due weight, and combating negative attitudes through public campaigns (UN Committee on the Rights of the Child, 2009, para 49). Beyond the documentation of the UN Committee on the Rights of the Child, there is a wealth of literature on how to create the appropriate conditions to support the implementation of a child’s right to be heard (see, for example, Save the Children, 2005; Bell, 2011; Kennan et al., 2019). Moreover, models have emerged detailing the measures required to embed what is termed a ‘culture of participation’ within an organisation (Kirby et al., 2003; Wright et al., 2006) or the ‘sustained realisation’ of children’s participation rights (Lansdown, 2010, p. 14).

3 Background to the Evaluation Study

Ireland’s National Child and Family Agency, Tusla, is the dedicated State agency responsible for improving wellbeing and outcomes for children in Ireland. With a staff of approximately 4300, it delivers a range of universal and targeted services, including: child protection and welfare services; alternative care; educational welfare services; psychological services; family and locally-based community supports; early years services; and domestic, sexual and gender-based violence services. With the support of a philanthropic organisation, The Atlantic Philanthropies, Tusla initiated in 2015 a programme to develop and mainstream prevention, partnership and family support practices within the Agency. This Prevention, Partnership and Family Support Programme (PPFS Programme) was intended to achieve a range of positive organisational, parent and child focused outcomes. One of the primary intended outcomes was that the participation of children would be embedded in Tusla’s culture and operations. The Lundy Model (2007) underpinned Tusla’s conceptualisation of the term participation. To achieve this intended outcome of the PPFS Programme, Tusla implemented a Child and Youth Participation Programme of Work. This ambitious programme included many of the known measures required to embed a culture of participation. It comprised the development of a national participation strategy; participation structures to support children to have their views heard in service planning and review; a child and youth participation training programme for all staff; child-friendly complaints and feedback mechanisms; initiatives to raise awareness among children of their right to participate; and a children’s participation quality assurance process.

In 2016, the authors’ research centre was funded by The Atlantic Philanthropies to undertake a three-year programme of research to evaluate the extent to which the intended outcome, that the participation of children would be embedded in Tusla culture and operations, was achieved. To assess the extent to which the child’s right to be heard was embedded within the Agency and to track progress over the period of implementation of the Tusla Child and Youth Participation Programme of Work, an evaluation framework was developed. As detailed below, this framework comprised seven structural and process indicators adapted from the Council of Europe Child Participation Assessment Tool and five outcome indicators informed by the Lundy Model. The overarching evaluation design was a mixed-methods baseline (2014–2015) and follow-up (2016–2018) study (Kennan et al., 2017; Tierney et al., 2018).

4 Council of Europe Child Participation Assessment Tool and The Lundy Model of Participation

In 2012, the Council of Europe Committee of Ministers adopted Recommendation CM/Rec(2012)2 on the participation of children under the age of 18. This Recommendation reiterated the necessity of effectively implementing States’ binding international commitments, specifically recalling Article 12 of the UNCRC. Four years later, the Council of Europe published its Child Participation Assessment Tool to support member States in assessing their compliance with Recommendation CM/Rec(2012)2 by providing a set of indicators against which States can measure progress in implementing a child’s right to be heard (Council of Europe, 2016). Coordinated by the Council of Europe and informed by the work of Lansdown (2009), the tool was developed with partners in international organisations, civil society, academia, and youth and parent associations.

The Child Participation Assessment Tool identifies 10 indicators clustered into three groups reflecting the broad measures States are required to take to comply with Recommendation CM/Rec(2012)2: measures to protect the right to participate; measures to promote the right; and measures to create spaces for participation. It was envisaged that the tool would enable States to undertake a baseline assessment of current implementation, identify actions needed to achieve further compliance and measure progress over time (Council of Europe, 2016, p. 5). The indicators are categorised as structural and process indicators. According to the Council of Europe, structural indicators in this context indicate a commitment to take action, generally referring to the presence of institutions and policies intended to support the realisation of children’s right to participate. Process indicators refer to actions taken, generally focusing on “specific activities, resources or initiatives to ensure children’s participation rights” (Council of Europe, 2016, p. 6).

The Lundy Model was developed to clarify the scope of a practitioner’s obligations when implementing the child’s right to be heard as protected under Article 12 of the UNCRC. It unpacks four interrelated elements of this right, which must be achieved to realise the child’s right to be heard in practice. First, ‘space’: children must be provided with the opportunity to express a view in a space that is safe and inclusive. Second, ‘voice’: children must be facilitated to express their view. Third, ‘audience’: the view must be listened to. Fourth, ‘influence’: the view must be acted upon, as appropriate (p. 933). Providing children with the information they require to form a view is another important step in the fulfilment of this right (Department of Children & Youth Affairs, 2015; Lansdown, 2009; Lundy, 2007, 2018). While not incorporated within her original model, Lundy’s later work also makes it explicit that compliance with Article 12 of the UNCRC requires children to be provided with feedback explaining the reasons for decisions taken (Department of Children & Youth Affairs, 2015; Lundy, 2018; Department of Children, Equality, Disability, Integration and Youth, 2021).

5 The Evaluation Framework

As the Council of Europe Child Participation Assessment Tool was designed to monitor governments’ implementation of a child’s right to be heard, it provided the research team with an established set of indicators to assess Tusla’s implementation of this right. However, not all of the Council of Europe indicators, nor all components of the indicators, were directly relevant to a government agency with Tusla’s remit. The indicators were therefore adapted and refined as follows. Two of the indicators were not included in the study framework: “an independent children’s rights institution is in place and protected by law” and “children are supported to participate in the monitoring of the UNCRC (including in CRC shadow reporting) and relevant Council of Europe instruments and conventions”. Responsibility for meeting these obligations falls within the remit of the parent Government Department of Children, Equality, Disability, Integration and Youth. The indicators from the Council of Europe Tool focusing on whether there are child-friendly complaints and feedback mechanisms in place were amalgamated, as there is one mechanism in Tusla comprising both complaints and feedback procedures. The fourth Council of Europe indicator focuses on the existence of mechanisms to enable children to exercise their right to participate safely in judicial and administrative proceedings. This indicator was refined to focus solely on procedures to enable children to exercise their right to participate safely in administrative proceedings. While fundamental decisions concerning a child’s care and protection are taken in the course of judicial proceedings, a focus on such proceedings fell outside the scope of this study (see Footnote 1). The set of seven indicators drawn from the Council of Europe Child Participation Assessment Tool, with the adaptations explained above, was clustered into structural and process indicators as set out in Fig. 1.
These structural and process indicators were designed to measure the presence or absence of an enabling environment supporting the participation of children.

Fig. 1 The development of the evaluation framework

Outcome indicators for measuring the scope, quality and impact of child participation are also required (Lansdown, 2009). The Lundy Model provided a useful tool to develop a set of indicators to measure the scope and quality of children’s participation within Tusla, specifically to assess whether participatory practices within Tusla were compliant with Article 12 of the UNCRC. Drawing on the Lundy Model, and further informed by the emphasis Lundy places on the importance of children being provided with the information they require to form a view, as well as being provided with feedback explaining the reasons for decisions taken, the research team developed a set of five outcome-focused indicators. The 12 indicators comprising the evaluation framework enabled the research team to measure the extent to which the participation of children was embedded in Tusla culture and operations and to track progress over the period of implementation of the Tusla Child and Youth Participation Programme of Work.

6 Application of the Framework

The ensuing research involved implementing a mixed-methods baseline and follow-up study, using the indicator set to measure whether a child’s right to be heard was embedded in the culture and operations of Tusla. At both baseline and follow-up, documentary analysis was the methodological approach used to generate the evidence required to determine the extent to which the structural and process indicators were being achieved. Quantitative and qualitative research was undertaken to generate a comprehensive knowledge base to determine whether the outcome indicators were being met, from the perspective of Tusla professionals and children using Tusla services. The baseline study commenced in the last quarter of 2015. With aspects of the PPFS Child and Youth Participation Programme of Work due to be implemented in mid-2016, baseline data was collected prior to the implementation of the Programme. Follow-up data collection followed the baseline study and ceased in the first quarter of 2018.

6.1 Generating Evidence for the Structural and Process Indicator Set

Structural indicators are framed to invoke a verifiable yes or no answer as to whether key mechanisms and structures are in place (Downes, 2018; UN Special Rapporteur on Health, 2003). Similarly, process indicators provide information on the efforts made and actions taken in the realisation of children’s rights, thereby focusing on the presence or absence of activities, resources and initiatives (Vaghri et al., 2011). To assess whether the structural and process indicators were being met, the study reviewed legal and policy instruments, Tusla standard operating procedures and publicly available official Tusla documentation (see Table 1). In the analysis of this documentation, the indicator set enabled an objective assessment of the presence or absence of structures or processes in place to facilitate a child’s right to be heard. Where gaps in the information available in public documentation emerged, follow-up information checks were conducted with senior personnel in Tusla. For example, at baseline there was no information publicly available on professional training programmes designed to provide Tusla staff with competencies in child participation (relevant to indicator three). A request for this information from Tusla’s National Workforce Learning and Development Office confirmed that there was no such training programme in place at that time. In the write-up of the findings, as well as noting the presence or absence of a structure or process, a short descriptor was provided of any action taken. For the purpose of this evaluation study, the documents were not interpreted and no outcomes data was collected to determine the quality of a structure or process in terms of its individual contribution and/or scale of contribution to embedding a child’s right to be heard in Tusla’s culture and operations. These structural and process indicators served solely to determine whether the system had structures and processes in place to build an enabling environment to facilitate a child’s right to be heard (see Footnote 2).
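The yes/no logic of this assessment can be made concrete as a simple checklist data structure. This is an illustrative sketch only: the indicator wordings, statuses and the `Indicator` and `summarise` names below are hypothetical, not part of the study’s actual framework.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """One structural or process indicator with a verifiable status."""
    kind: str          # "structural" or "process"
    description: str   # indicator wording (placeholder text here)
    met: bool          # verifiable yes/no answer
    note: str = ""     # short descriptor of any action taken

# Hypothetical entries for illustration; not the study's actual indicators.
framework = [
    Indicator("structural", "Participation is protected in law and policy", True,
              "Strong legislative and policy framework in place"),
    Indicator("process", "National staff training programme on child participation", False,
              "No national training programme at baseline"),
]

def summarise(indicators):
    """Count (met, total) indicators of each kind."""
    summary = {}
    for ind in indicators:
        met, total = summary.get(ind.kind, (0, 0))
        summary[ind.kind] = (met + int(ind.met), total + 1)
    return summary

print(summarise(framework))  # {'structural': (1, 1), 'process': (0, 1)}
```

The `note` field mirrors the study’s practice of recording a short descriptor of any action taken alongside the presence/absence judgement.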

Table 1 Mapping the research evidence onto the evaluation framework to assess the implementation of a child’s right to be heard

6.2 Generating Evidence for the Outcomes Indicator Set

To establish if the outcome indicators were achieved or progress made towards their achievement, a comprehensive quantitative and qualitative research study was undertaken. The indicators were framed in such a way that measuring whether practice was compliant with the Lundy Model could be reported both quantitatively and qualitatively. As this indicator set is more subjective in nature, quantitatively reporting how many Tusla professionals self-identified as engaging in practice to achieve the indicators was considered inadequate without supplementing it with descriptive information illustrating the practices they were engaged in, thereby allowing for a more thorough assessment of whether their practices were rights-based and aligned with the Lundy Model. In evaluating whether the indicators were achieved, it was also important to triangulate the perspectives of the professionals with those of children who were Tusla service users. For these reasons, collating the relevant data required a range of quantitative and qualitative methodological approaches drawing on different data sources. The data collected came from three sources:

1. Inspection reports published by Ireland’s Social Care Inspectorate, the Health Information and Quality Authority (HIQA);

2. Tusla professionals; and

3. Children who were Tusla service users.

Ethical approval was sought for each component of the research study and granted by the National University of Ireland Galway Ethics Committee and the Tusla Ethics Review Group.

The inspection reports published by Ireland’s Social Care Inspectorate, HIQA, document qualitative information on compliance with children’s participation rights standards in child protection and welfare, foster care, residential care, and special care services. Directly informed by the views of children and Tusla staff, the reports provide a rich source of timely qualitative information on professionals’ and children’s lived experience of participation in personal and public decision-making within Tusla child protection, welfare and alternative care services. Importantly, as reports are published for each of Tusla’s 17 local service areas, cross-analysis of the reports enabled a national picture to emerge. At baseline, the evaluation conducted secondary analysis of all HIQA child protection and welfare, foster care, residential care, and special care inspection reports published in the period 2013–2015 (n = 53). At follow-up, secondary analysis was conducted on all reports published during the period 2016–2017 (n = 65). Data relevant to the outcome indicator set was extracted from the reports for analysis.

Quantitative data was generated by surveying Tusla professionals nationally at baseline and follow-up. A measure aligned to the outcome indicators was developed. Respondents were asked to rate their compliance with six statements: that they provide information to the child; actively seek their views; support the child to express their views; listen to the views of the child; take their views seriously; and provide feedback on the outcome of the decision-making process. Respondents rated compliance using a five-point Likert-type scale (definitely true, mostly true, unsure, mostly not true, and definitely not true). The same measure was used for respondents to rate their compliance in the context of children’s participation in decisions concerning their personal welfare, protection and care, and children’s public participation in service planning and review. Although the questionnaire was primarily quantitative, open-ended questions were included to allow staff to elaborate on their practice and provide examples of when they actively sought and were influenced by the views of children in personal and public decision-making. Surveying Tusla staff, inclusive of those offering a range of child welfare services, some of which fall outside the remit of HIQA inspections (see Footnote 3), addressed this gap in the inspection data and supplemented the qualitative data generated from the secondary analysis of the HIQA inspection reports. The questionnaire was administered to all Tusla staff via their communications department. At baseline, the response rate was 10.4% of the target population (n = 370 Tusla professionals) and at follow-up it was 7% (n = 255 Tusla professionals). The sample at both baseline and follow-up included representation from each of Tusla’s 17 local service areas and was largely proportionate to the breakdown of Tusla staff in terms of job role. The majority of respondents were social workers and social care workers, which comprise the largest category of staff employed by Tusla.
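Descriptive analysis of Likert-type ratings of this kind can be sketched as follows. The scale points match those described above, but the response data and the `describe` function are invented for illustration and do not reflect the study’s findings:

```python
from collections import Counter

# Five-point scale used in the questionnaire.
SCALE = ["definitely true", "mostly true", "unsure",
         "mostly not true", "definitely not true"]

# Hypothetical responses to one statement, e.g. "I provide information to the child".
responses = ["definitely true", "mostly true", "mostly true",
             "unsure", "definitely true", "mostly not true"]

def describe(responses):
    """Return the percentage of respondents choosing each scale point."""
    counts = Counter(responses)
    n = len(responses)
    return {point: round(100 * counts[point] / n, 1) for point in SCALE}

print(describe(responses)["mostly true"])  # 33.3
```

In the study itself this kind of descriptive and comparative analysis was conducted in SPSS; the sketch only illustrates the shape of the computation.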

At follow-up, a small-scale qualitative study was conducted with children and young people using Tusla services, to supplement the findings from the secondary analysis of the HIQA inspection reports and further explore children’s experience of participation. Data collection involved one-to-one interviews with 19 children and young people (14 female, five male), aged 9–21 years. With the support of Tusla professionals, children and young people were purposively sampled to ensure representation from Tusla’s four regional service areas and from Tusla’s welfare, child protection, foster care, alternative care and domestic violence services. With a sampling frame of approximately 30,000 children and young people in receipt of Tusla services, the intention was not to seek the views of a representative sample of children; the purpose of this small-scale qualitative study was to illuminate the individual participation experiences of a small number of young Tusla service users. This component of the evaluation was guided by a youth advisory group, who provided feedback on the design and implementation of the qualitative study.

For the purpose of data management, the qualitative data gathered from the different sources (the HIQA inspection reports, the Tusla professionals and the children) each formed an independent research component. While the data from each of these research components were analysed separately, a similar approach was used to facilitate the three-way triangulation of the full data set at baseline and follow-up. The data were uploaded to NVivo 10 software for coding. Thematic analysis (Braun & Clarke, 2012) guided the analytical process, underpinned by Lundy’s conceptual model of participation. Deductive analysis was guided by themes directly aligned to the indicators. The quantitative data from the national baseline and follow-up questionnaire was imported into SPSS, where descriptive and comparative analysis was conducted. The qualitative and quantitative study findings were integrated by mapping the research evidence onto the outcome indicators to assess whether participatory practices within Tusla were compliant with the Lundy Model of participation and thereby embedding a child’s right to be heard in practice. Table 1 above sets out the data sources and the methodological approaches used to generate the evidence required to determine whether the outcome indicators were met or met in part. It also details the sample size of each of the different methodological components of the evaluation.
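The deductive step, in which qualitative excerpts are tagged against themes aligned to the Lundy elements, can be illustrated in outline. In the study this coding was performed by researchers in NVivo; the keyword lists, excerpt and `code_excerpt` function below are simplified assumptions for illustration, not the study’s actual coding frame:

```python
# Illustrative theme keywords aligned to the Lundy elements (assumed, simplified).
THEMES = {
    "space": ["safe", "inclusive", "opportunity"],
    "voice": ["express", "facilitated", "speak"],
    "audience": ["listened", "heard by"],
    "influence": ["acted upon", "taken seriously", "feedback"],
}

def code_excerpt(text):
    """Tag an excerpt with every theme whose keywords it mentions."""
    lower = text.lower()
    return [theme for theme, words in THEMES.items()
            if any(word in lower for word in words)]

# Hypothetical excerpt for illustration.
excerpt = "The child was listened to and their views were taken seriously."
print(code_excerpt(excerpt))  # ['audience', 'influence']
```

Real thematic coding is an interpretive judgement rather than keyword matching; the sketch only shows how excerpts map onto indicator-aligned themes for later triangulation.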

7 Study Outcome

Utilising the set of structural, process and outcome indicators, and employing the methodological approaches detailed above to gather the required data, resulted in an evidence-based assessment of the extent to which a child’s right to be heard was embedded within Tusla. The evaluation found strong evidence of children’s participation being embedded in Tusla’s culture and operations. This finding was grounded in an assessment of whether each of the 12 indicators comprising the evaluation framework was being met or met in part. At the structural, process and outcome levels, the study tracked progress over its duration, evidencing that this progress in embedding a child’s right to be heard at an agency-wide level was facilitated by the implementation of the PPFS Child and Youth Participation Programme of Work. While it is outside the remit of this article to detail the evaluation findings, which are reported elsewhere (Tierney et al., 2018), the overarching findings are set out below to illustrate the validity of the evaluation framework in generating this research evidence.

The structural and process indicators were of value to determine if ‘the building blocks’, as termed by the Council of Europe, were in place to support the implementation of a child’s right to be heard. At the structural level, the baseline study found that Tusla had a strong legislative and policy framework mandating staff to take the views of children into account. Other structures and processes, viewed by the Council of Europe as the necessary building blocks to progress the implementation of children’s participation rights, were found not to be in place at baseline. There was no competency-based training programme delivered to Tusla staff nationally and there were few designated structures to bring children together to have their views heard at a local, regional or national governance level to inform service planning and review. At follow-up, the study tracked significant progress with all structural indicators being met or met in part. Child and Youth Participation Training for staff was being implemented nationally and forums for children in foster care were operating to support children in care to feed their views into policy development and service provision.

At the process level, the baseline study found that Tusla had limited procedures in place to support children to exercise their right to participate safely in administrative proceedings. The documentary analysis of Tusla’s standard operating procedures revealed that children’s participation was often encouraged rather than required. The documentary analysis also found limited activity to promote a child’s right to be heard, and there was no child-friendly complaints and feedback mechanism in place for children using Tusla services, with the exception of children in foster care. At follow-up, the study again tracked significant progress, with all process indicators being met or met in part. The documentary analysis at follow-up tracked the implementation of two new national practice approaches, Meitheal and Signs of Safety, which placed children and families at the centre of assessment and decision-making in child welfare and child protection proceedings. At follow-up, a child-friendly complaints and feedback mechanism was found to be in operation nationally, and mechanisms to promote children’s participation in decision-making and provide them with information about their right to be heard were also in place. These included the development of a National Children’s Charter and a National Young People’s Charter, the implementation of a Seed Funding initiative to support and raise awareness of innovative participatory practices, annual conferences co-organised with young people to disseminate the learning and outcomes of participatory initiatives, and the implementation of the Investing in Children Membership Award™ scheme to recognise success.

While the structural and process indicators enabled the researchers to assess the presence or absence of the required infrastructure to support the implementation of a child’s right to be heard, the outcome indicators were crucial to determining if this enabling environment was translating into rights-based participatory practices being implemented on the ground. They determined how the Agency was faring in terms of the quality and scope of participatory practice, from the perspective of children and Tusla professionals at baseline and follow-up. Utilising the Lundy Model to inform these outcome indicators facilitated an assessment of whether practice was compliant with Article 12 of the UNCRC and of the quality of compliance with each of its elements as conceptualised by Lundy: space, voice, audience and influence. At baseline, the quantitative and qualitative data revealed much good practice supporting children’s participation in decisions concerning their personal welfare, protection and care, and pockets of good practice supporting children’s participation in service planning and review. The follow-up study generated evidence of participation being further mainstreamed, particularly children’s participation in service planning and review. However, weaknesses in the system emerged: there was limited evidence of children being provided with appropriate feedback, and limited opportunities were being created for the views of children to be heard by the relevant decision-makers. The evaluation also uncovered that the indicators were not being met for all children; it found a lack of resources to support children with additional needs to have their views heard.

8 Critique of the Evaluation Framework

Utilising structural, process and outcome indicators provided a framework for a comprehensive evaluation of the extent to which the child’s right to be heard was implemented and embedded in the culture and operations of Ireland’s national Child and Family Agency, Tusla. While the outcome indicators informed by the Lundy Model were very useful for assessing whether participatory practices within Tusla were compliant with the Lundy Model of participation, and thereby embedding a child’s right to be heard in practice, they did not come without challenges, given the breadth of data required to assess compliance with each of the indicators. The size of Tusla as an organisation and the range of services it delivers contributed to this complexity, as did the dual interpretation of this right of the child. Understanding the implementation of the child’s right to be heard requires a focus on the right of the individual child to have their views heard on personal matters directly affecting their lives, as well as the right of children collectively to have their views heard in public decision-making. It was imperative that data were collected across the outcome indicator set to assess the implementation of the child’s right to be heard at both the personal and public levels, which added to the breadth of data required. It can be assumed that these challenges would be further compounded if the implementation of a child’s right to be heard were evaluated at a cross-sectoral level.

Like any indicator set, the evaluation framework did not provide a complete picture, but it enabled progress to be tracked and gaps in the implementation of a child’s right to be heard within Tusla to be identified. The measures identified in the documentation of the UN Committee on the Rights of the Child and in the wider literature as necessary to secure the implementation of a child’s right to be heard formed, for the most part, the backbone of the indicators developed. It emerged during the course of the evaluation, however, that there were grounds to include an additional indicator. Individuals who championed the implementation of the child and youth participation programme of work were crucial facilitators of participatory practice being embedded within the agency (Tierney et al., 2018). Kirby et al. (2003) previously identified the importance of organisational ‘champions’ in assisting the process of embedding a new way of working, while Scheirer (2005) found they were crucial in terms of sustainability. Champions can be adults and children who are enthusiastic about participation and are influential in terms of their capacity to inspire others to action and to model desired behaviour (Kouzes & Posner, 1995; UNICEF, 2005). This evidence of their crucial role in supporting the implementation of a child’s right to be heard arguably warrants the inclusion of an additional structural indicator. Such an indicator could potentially be framed as the identification of champions at the national, regional and local levels who will promote children’s right to participate in decision-making.

There were also some gaps in the data collected. Cognisant of the human rights principles of universality and non-discrimination, the disaggregation of data is widely regarded as key to understanding which groups of children are having their rights met and which are not (Ennew & Miljeteig, 1996; UN Committee on the Rights of the Child, 2003). Disaggregating the data can illustrate inequalities, bringing to light both direct and indirect forms of discrimination. Disaggregation of data is required according to the social factors affecting a child’s life (Ennew & Miljeteig, 1996), including gender, geographical region, ethnicity and disability. The data generated and analysed for the evaluation study shed some light on geographical differences in the implementation of a child’s right to be heard, and some differences emerged related to care placements. For example, the study found that children in residential care were more likely to receive information to support the implementation of their right to be heard than children in foster care or children accessing child welfare services (Tierney et al., 2018). The data collected and analysed in the HIQA reports, the survey and the interviews with children and young people were easily disaggregated by geographical region and care placement, as participants were asked to identify their region and the services they delivered or were accessing. Without collecting and analysing data specifically focused on gender, ethnicity and disability, it was not possible to disaggregate the data in these respects; this remains an important consideration for future studies.
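The disaggregation described above can be sketched in a few lines of code. The following is a minimal illustration, not the study's analysis: the records, field names (`placement`, `received_info`) and values are entirely hypothetical, standing in for survey responses grouped by care placement to compare the share of children who reported receiving information.

```python
from collections import Counter

# Hypothetical survey records (illustrative only, not study data):
# each notes the child's care placement and whether they reported
# receiving information supporting their right to be heard.
responses = [
    {"placement": "residential", "received_info": True},
    {"placement": "residential", "received_info": True},
    {"placement": "foster", "received_info": True},
    {"placement": "foster", "received_info": False},
    {"placement": "welfare", "received_info": True},
    {"placement": "welfare", "received_info": False},
]

def disaggregate(records, group_key, outcome_key):
    """Share of positive outcomes within each group defined by group_key."""
    totals, positives = Counter(), Counter()
    for record in records:
        totals[record[group_key]] += 1
        positives[record[group_key]] += bool(record[outcome_key])
    return {group: positives[group] / totals[group] for group in totals}

rates = disaggregate(responses, "placement", "received_info")
# rates == {"residential": 1.0, "foster": 0.5, "welfare": 0.5}
```

The same function would apply unchanged to other social factors (gender, ethnicity, disability) provided the corresponding fields were collected, which is precisely the gap the paragraph above identifies.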

9 Conclusion

More than thirty years after the ratification of the UNCRC, significant attention has been given to conceptualising Article 12 and identifying the measures required to implement the child’s right to be heard. The emphasis now needs to shift to monitoring and evaluating progress in implementation. The development of workable indicators is integral to this process. The evaluation study discussed in this article set out to assess the extent to which a child’s right to be heard was embedded in Tusla’s culture and operations, following the implementation of the PPFS Child and Youth Programme of Work. As set out in this article, a baseline and follow-up methodological approach was employed to do this, tracking progress using a set of structural, process and outcome indicators. These indicators enabled the research team to generate strong evidence of children’s participation being embedded across Tusla’s structures, processes and practices. Engaging in this evaluation process generated important learning for future studies. The purpose of the evaluation framework developed was to assess outcomes at the agency or organisational level; that is, whether children’s participation was embedded at a cultural and operational level within the agency. The agency in focus in this research was Ireland’s national Child and Family Agency, Tusla. However, the indicators are not specific to an agency of this nature and are equally applicable to studies focused on the extent to which participation is embedded in organisations working with the general population or with different cohorts of children.

The evaluation framework was designed to measure outcomes at an organisational level, not personal outcomes for children, parents or staff. Lansdown (2010, p. 20) notes that it is not only important to “identify key indicators or benchmarks against which to evaluate evidence of a cultural climate in which the right of children to be heard and taken seriously is firmly established”, but that it is also necessary to measure the “impact of the actual participation in which children are engaged”. In this context, Lansdown is referring to impact in terms of personal outcomes for the children, such as its contribution to positive youth development. Elsewhere, Lansdown and O’Kane (2014) have noted that impact can also be measured in terms of outcomes for parents, outcomes for staff attitudes and behaviours, and outcomes for the local community. One methodological possibility for measuring outcomes in this regard would be a social return on investment (SROI) approach. SROI is an internationally recognised and accredited framework for measuring and accounting for the perceived social value, as expressed by participants, of activities provided for them by an organisation (Jones et al., 2020). Most public, private and third sector organisations can track the number of users, contacts or customers, and many can provide some evidence that their activities lead to some sort of change, but very few can explain clearly why all this matters. SROI allows the real value of these activities to be uncovered. The framework not only holds organisations accountable for the work they do but also ensures that resources are invested for the benefit of the participants, based on what they identify as the most valued social outcomes (Rodriguez et al., 2020). This approach is closely aligned with the Irish “…government’s commitment to the delivery of evidence-based services to children, families and their communities” (Forkan, 2012, p. 190).
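At its core, an SROI analysis reduces to a ratio of the financial proxies assigned to valued social outcomes against the investment made. The sketch below is purely illustrative, under stated assumptions: the outcome names and monetary figures are hypothetical, and a full SROI study would additionally discount future value and adjust for deadweight and attribution, which this toy calculation omits.

```python
def sroi_ratio(social_value, investment):
    """Ratio of perceived social value created per unit invested."""
    return social_value / investment

# Hypothetical financial proxies for outcomes participants say they
# value most (illustrative figures only, not drawn from any study).
outcome_proxies = {
    "improved child confidence": 40_000.0,
    "stronger family relationships": 55_000.0,
    "staff skills in participatory practice": 25_000.0,
}

total_value = sum(outcome_proxies.values())      # 120,000
ratio = sroi_ratio(total_value, investment=60_000.0)
# ratio == 2.0, i.e. €2 of perceived social value per €1 invested
```

The point of the exercise is less the arithmetic than the process that precedes it: participants themselves identify and value the outcomes, which is what makes the approach suited to measuring the personal impact of participation discussed above.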
There is a dearth of literature in this area, underscoring the importance of future research and indicator sets that also measure such outcomes, beyond assessing the extent to which a child’s right to be heard is embedded at an organisational level. Importantly, as well as reporting outcomes, documenting the process of using indicator sets such as these, including the benefits, challenges and pitfalls encountered along the way, is much needed to refine the process.