
An exploration of assessment approaches in a vocational and education training courses in Australia

Abstract

Background

There is a compelling case for strengthening the strategies for assessment of competencies adopted in vocational education and training programs in Australia. A review of assessment in nationally recognised Vocational Education and Training (VET) courses identified a number of critical issues associated with competency assessment. For example, trainers were identified as having difficulties with the interpretation, implementation and assessment of the competencies. Trainers are often viewed as capable of training diverse groups of learners, despite the fact that they may not possess in-depth knowledge of the competencies.

Methods

These issues gave rise to trialling three assessment strategies: diagnostic, scenario and simulated assessment in a project that investigated the teaching and learning of numeracy in Vocational Education and Training (VET) courses with high Indigenous student enrolments. The study adopted a mixed methods design combining participatory collaborative action research and community research to develop a series of case studies. Teachers/trainers/teacher aides (N = 39) and students (N = 231) participated in the project. Nine courses and seven sites were the focus of the study.

Results

The results highlighted the outcomes of using three different assessment approaches to draw inferences about students’ competencies in VET courses. Whilst the CDAT diagnostic assessment did indicate the difficulties that students had in specific areas of maths, it also brought to light some students’ difficulties with reading and comprehension and with understanding what the questions were asking. The scenario-based assessment was trialled because trainers indicated that scenarios were an effective way of assessing students; however, this approach proved to be a challenge for students. Students who experienced challenges with reading and comprehension found the text scenarios difficult to read and understand, limiting their opportunities to demonstrate their competencies. The simulated assessment allowed students to demonstrate their understanding of mathematics, for example, surface area, applying what they had learned to a context such as a model.

Conclusions

The trial of the assessment approaches and their relationship to mathematics learning and training courses proved to be effective for the teachers, trainers and students. In doing so, the trial supports the literature about adopting a holistic approach to assessing competency as it provides a more comprehensive view of students’ capabilities.

There is a compelling case for strengthening the strategies for assessment of competencies adopted in vocational education and training programs in Australia (Halliday-Wynes and Misko 2012; Hodge 2014). A review of assessment in nationally recognised vocational education and training (VET) courses identified a number of critical issues associated with competency assessment (Hodge 2014). For example, trainers were identified as having difficulties with the interpretation, implementation and assessment of the competencies. Trainers are often viewed as capable of training diverse groups of learners, despite the fact that they may not possess in-depth knowledge of the competencies. These issues, among others, gave rise to investigating three strategies for assessing competencies: diagnostic, scenario and simulated assessment.

Background

The project, Effective numeracy learning for employment by regional and remote Indigenous students in vocational education and training (VET) courses, aimed to develop knowledge about regional and remote Indigenous VET students’ numeracy learning that can significantly increase VET course completions and Indigenous employment. Indigenous Australians in regional and remote communities have high unemployment despite shortages of skilled workers in local industries (Clayton et al. 2003; Crawford and Biddle 2017; Griffin 2014). Due to previous low achievement in numeracy, Indigenous VET students often fail to graduate from courses that would allow access to these industries and benefit their community. The project aimed to study Indigenous students’ learning of numeracy to develop practical knowledge regarding instructional and assessment approaches to enhance student employment. In addressing these issues, the project was shaped by the following research question:

  • What practices are used by trainers to support Indigenous students’ learning in courses?

It was further informed by two sub-questions:

  • What assessment approaches are used to assess students’ competencies in mathematics and numeracy which are embedded in courses?

  • How can trainers better prepare for meeting the diversity of learners using a range of assessment approaches?

For the purposes of this paper, the sub-questions are the focus.

Literature review

Assessment of competencies

Since the mid-1980s there has been growing attention to the assessment of competencies in vocational education and training and the role that it plays in meeting the needs of employers and learners (Clayton et al. 2003). This attention has been heightened further in more recent years with the emerging priority of a national approach to continuing education and training because of changing work requirements, an ageing workforce and lengthening working lives (Billett et al. 2015). The current focus on existing models of assessment of competencies is motivated by a concern that they may not be suitable for achieving the aims of vocational education and training as identified in the Annual National Report of the Australian Vocational Education and Training System 2012 (Australian Government 2012). Nor might they be appropriately used to assess and certify learning for workplace contexts. This concern is shared by international agencies and governments (OECD 2012), where it extends to considerations of how a quality assurance framework can be used to increase attention to the quality of quantitative and qualitative assessments. In the national report referred to here, Karmel (2012) notes that

a lack of systemic validation and moderation processes within and between providers and training systems is reducing the level of confidence in the comparability and accuracy of assessments. The tendency on the part of assessors to develop and implement their own assessment tools and materials, as well as system imperatives for assessors to customise assessments to local contexts, may be factors contributing to a reduction in the comparability and accuracy of assessments. The regular use of independent assessors can help to minimise this risk.

Included in the OECD’s (2012) recommendations is that standards throughout Australia should be achieved through common assessment procedures to determine whether necessary skills have been acquired.

It might be argued that, in the context of vocational education, concepts such as formative and summative assessment, and simulated assessment might be pertinent (see for example Halliday-Wynes and Misko 2012). Here, it is helpful to keep in mind Sadler’s (2007) observations about learning and assessment:

learners can be said to have learned something when three conditions are satisfied. They must be able to do, on demand, something they could not do before. They have to be able to do it independently of particular others, those others being primarily the teacher and members of a learning group (if any). And they must be able to do it well. Assessment of learning should be directed towards gathering evidence for drawing inferences about capability under these conditions, not the scaffolded conditions.

Studies have shown (Craddock and Mathias 2009; Halliday-Wynes and Misko 2012; Taras 2002) that appropriate assessment strategies are a key part of competency development. Further, students reported that they could access counselling regarding assessment tasks because they found assessment daunting (Halliday-Wynes and Misko 2012). A variety of assessment approaches is seen as good practice as it takes a more holistic approach instead of seeing assessment as discrete parts. A discussion of three approaches to assessing mathematics follows. Although drawing on contexts such as schools, this discussion is relevant to how such approaches might be used in VET contexts for assessing competencies.

Diagnostic assessment of mathematics

A diagnostic assessment is well suited to VET learning contexts, as it assesses students’ skills and identifies their strengths and weaknesses. That is to say, a diagnostic assessment provides trainers with information about what students know and do not know. With this information, the type of instructional materials and activities to support students’ mathematics learning can then be designed.

Cognitive diagnostic assessment tasks (CDAT) are informed by scientific theoretical frameworks on the cognitions that underpin students’ mathematics learning, which have been identified as missing from traditional testing procedures (Battista 1999; Goldin 2000; Lesh and Kelly 2000; Ball and RAND Mathematics Study Panel 2003). Cognition is at the core of understanding and sense-making in mathematics (Baturo 2008).

Cognitive diagnostic assessment tasks are designed to be used in formative and summative assessment, in particular, to identify what mathematical concepts and processes students understand before, during, or at the conclusion of teaching. They provide a vehicle for deepening teachers’ understanding of core ideas in elementary mathematics and, consequently, for modifying or extending their instruction. CDAT provides a springboard for intervention or prevention.

A study by Siemon et al. (2004), Supporting Indigenous students’ achievement in numeracy, explored the impact of authentic (rich) task assessment on middle year Indigenous students’ mathematics achievement in remote schools. The findings indicated that rich tasks were hard for students to access even though the literacy demands were low. The study found that a more diagnostic problem task that required less English literacy and used concrete materials appeared to be more effective for students. In Betts et al.’s (2011) study of mathematics teaching and learning in middle school in California, a diagnostic assessment was found to have positive effects on students’ outcomes, particularly when it led to specific interventions. Halliday-Wynes and Misko (2012) point out that where the assessment of competencies is used it can provide a diagnostic tool for quality assurance purposes.

Authentic assessment: scenario and simulated assessment in VET courses

Gulikers et al. (2008) argue that authentic or work-placement assessment is suitable for vocational education and training contexts. They state that the aim of authentic assessment is to link

learning and working by creating a correspondence between what is assessed in the school and what students need to do in the workplace during an internship or after finishing their education … Authentic assessments are expected to (a) stimulate students to learn more deeply …; (b) stimulate students to develop professionally relevant skills and thinking processes used by professionals …; and (c) motivate students to learn by showing the immediate relevance of that what is learnt for professional practice (pp. 172–173).

There are, however, two points to consider. One is that perception of what is authentic differs between people. The second is that this perception varies for students engaging in authentic assessment tasks. A student’s experience may shape what constitutes authentic assessment.

Although authentic assessment in vocational education and training might seem appropriate (Rush et al. 2010) trainers need to be aware of the biases in them (Bennett 2011). In his discussion of formative assessment, Bennett (2011) asserts that “formative inferences are not only subject to uncertainty, they are also subject to systematic, irrelevant influences that may be associated with gender, race, ethnicity, disability, English language proficiency, or other student characteristics. Put simply, a teacher’s formative actions may be unintentionally biased” (p. 17–18). To reduce these biases, trainers need to recognise their biases and consider “evidence from multiple sources, occasions, and contexts” (p. 18). Such understandings of bias and disadvantage might be applicable when considering assessment in a vocational education and training context.

Methods

The project adopted a mixed methods design aimed at benefitting research participants and included participatory collaborative action research (Anderson 2017; Kemmis et al. 2013; Lozenski 2014) and community research (Smith 2012). Participatory collaborative action research is a “collective, self-reflective enquiry undertaken by participants in social situations in order to improve the rationality and justice of their own social and educational practices” (Kemmis et al. 2013, p. 5). Simply communicating information alone does not seem to have a significant impact on changes in training practice and assessment; therefore, there is a need for action research to identify the effectiveness of the project. Community research is described as an approach that “conveys a much more intimate, human and self-defined space” (Smith 2012, p. 127). Community research relies on and validates the community’s own definitions. As the project is informed by the social at a community level, it is described as “community action research or emancipatory research” (p. 127). A series of collaborative action research case studies to improve numeracy teaching and assessment of Indigenous VET students was developed. The cases focused on the assessment approaches that trainers used in their training of VET students.

Participants

There were 102 students who attempted the cognitive diagnostic assessment task, 35 students who attempted the scenario-based assessment and five who attempted the simulated task. The students were enrolled in a range of courses at the time of the administering of the assessment, including civil construction, metallurgy and retail. Pseudonyms have been used to protect the identity of participants and specific sites.

Research sites

The research sites where the project was conducted included four regional TAFE Institutes and three regional/remote schools in Queensland, Australia. These sites nominated to be involved in the project because of the high numbers of Indigenous students enrolled in their courses.

Data collection techniques

The data reported in this paper draw on diagnostic test results, scenario-based assessment results and video analysis from the simulated assessment.

Cognitive diagnostic assessment tasks

The cognitive diagnostic assessment tasks (CDAT) were designed to elicit students’ understanding of the important mathematical concepts and processes that are required for processing whole numbers, fractions and probability effectively (Baturo 2008, p. 4). The tasks are designed for use by teachers in formative and summative classroom assessment, namely, to find out what mathematical concepts and processes students understand before, during, or at the conclusion of teaching sequences. Figure 1 below provides examples of fraction items in the assessment.

Fig. 1 CDAT fractions example

In this example, item 23 asks students whether they can construct the whole from the unit part given. The purpose was for students to identify that there must be four equal parts; it does not matter how the students place the four parts, for example, on top of each other. Item 24 is about partitioning and asks students to partition the whole into equal parts in different ways (flexible thinking) (see Baturo 2008).
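As a worked illustration of the reasoning item 23 calls for (a sketch only; the actual ribbon dimensions are not reproduced here), reconstructing the whole from a unit part of one quarter rests on recognising that four equal parts make one whole:

```latex
% Item 23 (illustrative): four copies of the quarter-part reconstruct the whole
\tfrac{1}{4} + \tfrac{1}{4} + \tfrac{1}{4} + \tfrac{1}{4} = 4 \times \tfrac{1}{4} = 1
```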

Scenario-based assessment

A scenario-based assessment of student mathematics knowledge was designed and written by the research team and trialled in four sites (see Fig. 2 for an example of one item). The team took into consideration the students’ results and trainer feedback from the cognitive diagnostic assessment administered in 2010. The assessment was designed in consultation with site trainers/teachers who provided feedback on ideas, scenarios and assessment questions. The assessment items were derived from several sources including Baturo (2008) and Department of Education and Training (2009). The items focused on: (1) whole number, (2) fractions, decimals, percentages and ratio, (3) measurement: time, temperature, mass, volume and capacity, (4) statistics, (5) trigonometry and (6) reading signs.

Fig. 2 Scenario task—surface area (Source: Department of Education and Training (2009), RIICCM201A carry out measurements and calculations)

In this example item, students were asked to pretend they were house painters who had to work out how much paint to purchase to paint the external walls in the house plan shown. Its purpose was to identify whether the students could calculate, convert measures, work out the whole area from the measurement parts provided and name the surface area size.
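A minimal sketch of the arithmetic this item targets is given below. The wall dimensions, door opening and paint coverage are hypothetical values invented for illustration, not the measurements on the house plan in Fig. 2.

```python
# Illustrative sketch only: the wall dimensions, door opening and paint coverage
# below are hypothetical, not the values on the house plan in Fig. 2.

wall_dims_mm = [           # (length, height) of each external wall in millimetres
    (9600, 2400),
    (9600, 2400),
    (7200, 2400),
    (7200, 2400),
]
door_mm = (820, 2040)      # a standard door opening to subtract (hypothetical)

# Convert millimetres to metres, then work out each wall's area.
walls_m2 = [(length / 1000) * (height / 1000) for length, height in wall_dims_mm]
door_m2 = (door_mm[0] / 1000) * (door_mm[1] / 1000)

paintable_area_m2 = sum(walls_m2) - door_m2

coverage_m2_per_litre = 16          # hypothetical coverage for one coat
litres_needed = paintable_area_m2 / coverage_m2_per_litre

print(f"Paintable area: {paintable_area_m2:.2f} m^2")
print(f"Paint required: {litres_needed:.1f} L (one coat)")
```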

Simulated assessment

The simulated assessment was derived from the Civil Construction trainer’s plan which focused on perimeter and area. Students were involved in a series of lessons where the focus was on identifying the perimeter and area of a range of items, for example, the walls in the classroom. From that experience, students were taken to the course construction workshop and were asked to demonstrate their understandings of perimeter, area, and the use of appropriate tools using a model of a house (see Fig. 3).

Fig. 3 Model of a house

This item asks students to demonstrate their knowledge and understandings of perimeter and area to work out the floor space of each of the rooms in the model. Its purpose was to identify whether the students could measure, record and work out the area of each of the rooms and name the area size. The next section provides the analysis and discussion pertaining to the assessment strategies adopted in the project.

Analysis and discussion

Trainers involved in the project indicated their interest in determining the students’ understandings of whole number-place value, fractions and money. The purpose was to identify those students who needed numeracy learning support. Students were asked to complete the CDAT number placement tasks which focused on the previously mentioned topics. The students’ responses to each question were totalled within each of the categories, and the mean was calculated by dividing the total by the number of students (n = 102); a brief sketch of this calculation is given below. Table 1 details the mean results of whole number, fractions and money.
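The per-category calculation can be sketched as follows; the scores shown are invented for illustration and are not the project data.

```python
# Illustrative only: invented percentage scores, not the project data.
# Each category's mean is the total of students' scores divided by the number
# of students who attempted the items in that category.

scores = {
    "whole number": [65, 40, 55, 70],
    "fractions":    [30, 25, 45, 20],
    "money":        [80, 60, 75, 55],
}

for category, results in scores.items():
    mean = sum(results) / len(results)   # total divided by number of students
    print(f"{category}: mean = {mean:.1f}%")
```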

Table 1 Mean of whole numbers, fractions and money from all sites

When comparing the results across the sites, the students at Site 6 fared reasonably well. One reason for this may have been that the students (n = 23) enrolled in the RATEP Program had completed the compulsory years of schooling and were aiming to become primary school teachers. More work is needed to bring these students to a level that will support them in their future endeavours when they move to university to complete their teaching degree. The whole number results for individual sites were varied. Site 1 results were low, with students (n = 4) achieving a mean result of 34% overall. Further work is also needed for Site 4 (n = 3) and Site 7 (n = 3).

The present data suggest that a number of students have difficulties with place value, which has most likely influenced the development and learning of other number competencies. If place value concepts are not mastered in the early years of schooling, it is likely that students will continue to experience difficulties in later education contexts such as VET (Moeller et al. 2011) as well as in post-educational experiences (Parsons and Bynner 2005). However, we would like to make explicit that although the current results indicate that more work is needed to support students, the influences of culture and language need to be taken into consideration (see for example, Jordan and Levine 2009). This aspect will be addressed later in this analysis.

Of the responses for fractions, the results indicated that they were a source of difficulty, particularly for Site 1 (n = 4), Site 4 (n = 3), and Site 7 (n = 3). Figure 4 provides three examples of students’ responses to questions 23 and 24 from the CDAT test.

Fig. 4 Students’ responses to common fractions items

Question 23 provides one part of a ribbon, with students expected to draw the whole ribbon. They are expected to draw the fractional parts. The ability to draw equal parts is seen as critical to understanding the logical development of part–part, part–whole and whole–part relationships and notions of equality and inequality (Lamon 1996, 2012). This ability may also influence students’ understandings of mathematical topics such as measurement and geometry, two areas that are a major focus in VET courses, e.g., civil construction. It could be argued that question 23 in the first example has not been answered correctly because the “whole” ribbon has not been drawn. However, what has been drawn, three parts, together with the part given, does represent the whole ribbon. In this case, the student has identified that there must be four parts, with one already shown and the three remaining to be drawn. In the second example, however, it is not clear that the student understood entirely what the question asked. Although they have drawn a part, it is not representative of the part shown in the question. A plausible alternative interpretation is that the student has understood the question and has shown the parts differently, one drawn as a rectangle and the remaining two as bows. The third example shows that the question has been responded to, that is, “draw the whole ribbon”. In this case, four parts have been drawn representing the whole.

Question 24 asks the students to partition the shapes into quarters in three different ways. Partitioning is a process that generates quantity and builds understandings of rational numbers (Lamon 1996; Pothier and Sawada 1983). In the first example, the student has not partitioned the shapes in three different ways; rather, they have partitioned the parts using shading in three different ways. Hence, they have shown that they can partition the whole into equal parts in different ways. If the assumption was that partitioning referred to “drawing” the equal parts in each shape differently, which the question does not ask for, the student’s response would be incorrect. In this example, the student opted to shade the parts three different ways, thus showing that they have understood the idea of partitioning. In the second example the student has shown quarters in two of the three shapes. Of interest is that they have opted to draw only one quarter and then used shading to represent that quarter. In the third shape the student has divided the shape into quarters across the shape and on the diagonal; however, if we assume that shading is the process used to represent quarters, which appears to be the case, the response is shown to be incorrect because more than a quarter of the shape has been shaded. The third example response is straightforward, showing the shapes partitioned into quarters three different ways; however, it is interesting that the student did not opt to shade, unlike responses one and two.

A plausible argument for the different responses to the above questions could be that the language used in the test may have been a source of problems for students who experience difficulties with reading and also with understanding what the words in the questions were actually asking (Jordan and Levine 2009). This aspect was identified by trainers and teachers who provided feedback on the students’ results and thus reinforces Jordan and Levine’s argument. They argued that some of the variations in students’ knowledge of number words and symbols appeared to be associated with “differential exposure to the language and symbol systems of mathematics” (p. 64).

The CDAT was not administered a second time after a sequence of teaching and/or training to identify any positive effects from the pedagogical approach adopted in the study. There were several reasons that contributed to this outcome: (1) students were enrolled for a limited number of weeks; (2) student attendance varied at sites; (3) teachers and trainers did not see value in repeating the test after gaining the pre-test results; (4) teaching and training schedules meant that post-testing was a difficulty; (5) forcing teachers and trainers to do the post-test was not deemed appropriate by the research team; and (6) re-administering the test after a brief time was not likely to show an increase in students’ knowledge and understandings (see for example, Betts et al. 2011).

The teachers and trainers, however, suggested that providing contextual scenarios with images and diagrams for the students might assist them with understanding what the questions were asking. This aspect has been identified in previous studies. The provision of mathematics assessment items that link with real life situations is likely to make more sense to students. The research team proceeded to write a scenario-based assessment in consultation with teachers and trainers and conducted a limited trial to identify whether it was more appropriate and fair for students.

Scenario-based assessment

A limited trial of a scenario-based assessment (pre-assessment) was conducted across four sites. The results of that trial are shown in Table 2. When comparing the results across the four sites, they indicate that students in the RATEP program, Site 6 (n = 13), and Site 3 (n = 3) fared reasonably well (RATEP = 76.02% and Site 3 = 76.47%). There are several reasons to consider here: (1) the student cohort was different to the cohort who did the CDAT, (2) RATEP had a higher number of enrolled students who had completed the formal years of schooling, (3) Site 3 had three enrolled students, and (4) scaffolded support was provided to Site 3 students by way of reading questions where required. As mentioned previously, more work was needed to bring RATEP students to a level that will support them if they choose to progress to an undergraduate teaching degree in education. Similarly, more work is needed for students at Site 3 to support them in their chosen course.

Table 2 Scenario-based student assessment results

The results for fractions were not strong overall and indicated that this area continued to be a challenge for students, particularly for Site 7 (15.64%) and Site 4 (21.89%). Measurement, statistics, trigonometry and reading signs also proved to be a challenge for students who attempted the assessment.

There were several factors that may have influenced the results: (1) students were required to complete the assessment in 1 h, which may not have been enough time for some students, (2) students did not have the necessary knowledge and understandings to solve the questions asked, and (3) the language of the questions may have been ambiguous or biased.

Whilst the trialled scenario-based assessment took into consideration the ideas and feedback from trainers and teachers, the research team observed student behaviours and results of students from the 2010 assessment and identified that more work was needed to refine the assessment. For example, the provision of text scenarios to lead students into the tasks was thought to assist them; however, for students who experienced difficulties with reading, the provision of written scenarios proved a challenge. This issue reinforces Bennett’s (2011) discussion of formative assessment and bias. That is, English language proficiency, ethnicity and gender may influence how students performed on the assessment items. Some of the tasks had multiple directions and did not provide lead-in tasks that prepared students for the application of the multi-directional tasks. Figure 5 is an example. An important caveat is necessary here. For students enrolled in construction courses, the use of centimetres is considered unnecessary, as most measurement in the construction industry uses metres and millimetres; hence, responses were given in millimetres and metres.

Fig. 5 Example 1—application scenario

In the first example, application scenario five (Fig. 5), the student has shown the height, length and width in millimetres by adding each of the given measurements.

In this example, the student demonstrated their capacity to achieve the multi-stepped task using the measurements provided. They have also been able to keep focused on the purpose of the original question, that is, “work out the surface area that is to be painted using the measurements on the plan”. The student has demonstrated that they can convert measures, work out the whole area and provide the surface area size. The student has not taken the size of the doorway into account when finding the total area, an oversight perhaps, and although they do not come to a “right answer”, they have demonstrated their capacity to work through such a multi-stepped problem using addition, multiplication and subtraction.

In the second example (Fig. 6), the student’s response is interesting because of the range of strategies used to work through the problem. As a beginning point, the student has demonstrated knowledge of: area as being related to a surface (the matrices); that windows and doorways are spaces, but spaces that are not included in the overall calculation of the area; that all walls have to be included in the total; and additive tasks (calculations of the length and width of the walls). What the student has not demonstrated is a strong understanding of area and how to use the given measurements to calculate the area, the external area of the walls, using multiplicative strategies. Further, if we use the student’s method for calculating the area, they have not included both smaller walls, only one (8 m² is included), and they have not shown an understanding of adding decimals, that is, 8 m² + 3.4 m² = 42 m².
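For clarity, and taking the two wall areas exactly as reported above (8 m² and 3.4 m²), the decimal addition works out as follows; the 42 m² the student recorded is inconsistent with it:

```latex
% Adding the two wall areas with the decimal place respected
8\,\mathrm{m}^2 + 3.4\,\mathrm{m}^2 = 11.4\,\mathrm{m}^2 \neq 42\,\mathrm{m}^2
```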

Fig. 6 Example 2—application scenario

From the two examples provided, it is evident that the scenario task is a complex one with multiple steps. Such steps may prove very difficult for students who struggle with applying appropriate strategies to solve tasks, whereas other students may do very well with such tasks. What is critical is that when trainers and teachers administer such tasks to students they be cognisant of the students’ capacity to work through the tasks without causing substantial anxiety, thus reinforcing what students may already believe about themselves. Although teachers and trainers may have good intentions with encouraging students, unintentional bias may disadvantage students; for students who experience difficulties with reading text and applying the signs and symbols to represent their responses, such a task may be just too difficult. There is the risk of students disengaging, giving up and walking away from something that they may enjoy but struggle with because of the mathematical concepts, language and signs and symbols associated with the tasks.

Through discussions with one TAFE Institute Director and one trainer about their students’ results on the scenario-based assessment, the opportunity for trialling simulated assessment was presented. The next section elaborates this strategy. It provides an analysis of video data where a simulated task was provided for students to work through.

Simulated assessment

Simulated assessment enabled the students (N = 5) to explore, rehearse, and consolidate areas such as construction, as shown in Fig. 2, as a way of creating meaningful practical knowledge in the context in which it was acquired—the construction workshop at the TAFE Institute. Simulated experiences allowed for the development of core competencies, here, understandings of length, perimeter and area (surface area) and volume, as well as a strategy for practice-based learning that attempts to mirror and anticipate real world situations through guided experiences in an interactive way (Rush et al. 2010). This process provided the trainer and students with the opportunity to manipulate, repeat or refine a construction task. The simulated task from the Certificate II in Civil Construction course provided at one regional TAFE required students to work out the length, perimeter and area in the simulated house model shown in Fig. 3. The scenario was that the area of the floor needed to be covered. The task was a pre-assessment item (Fig. 7).

Fig. 7 Simulated assessment example

Adopting a collaborative approach meant that the students worked as a team to make decisions about measuring as part of construction, with the trainer providing feedback to the students. Further, the trainer asked students to provide evidence for their decisions about construction and measuring and asked questions to enable students to account for their decisions. The students remained focused and completed the task as instructed.

Initially, the trainer adopted an explicit instructional approach as he needed to explain to the students what was to occur in the lesson. Following this, the instructional approach became student-centred whilst using a kinaesthetic approach. The students were responsible for how they performed during the task; however, the trainer was within close proximity and scaffolded and prompted the students when and where necessary.

The impact on the students’ mathematical understanding of length, perimeter and area was deemed by the trainer as satisfactory. The video-based evidence indicated that the students were able to understand the practicality of the task, choose the appropriate measuring instrument, apply a technique for measuring, measure the length of the rooms and record measurements. However, when required to do the mathematical calculations, they were not able to do so; that is, they could not multiply large numbers to find the area using pencil and paper. Calculators were then introduced to support the students, with the trainer modelling how to find the area using a calculator. After several attempts at using the calculators the students’ confidence increased; however, the decimal point then became an issue, with students not fully understanding the magnitude of the number produced when multiplying in millimetres and that the result was in square millimetres.
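The decimal-point difficulty the trainer observed can be made concrete with a worked conversion; the room dimensions below are hypothetical, not the actual measurements of the model.

```latex
% Hypothetical room measured in millimetres (not the model's actual dimensions)
3600\,\mathrm{mm} \times 2400\,\mathrm{mm} = 8\,640\,000\,\mathrm{mm}^2
% Since 1 m = 1000 mm, one square metre contains 1000 \times 1000 = 1\,000\,000 mm^2
8\,640\,000\,\mathrm{mm}^2 \div 1\,000\,000 = 8.64\,\mathrm{m}^2
```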

Discussion and conclusion

The project from which this paper draws has been positioned to strengthen the evidence pertaining to the mathematics assessment approaches used by trainers to assess Indigenous students enrolled in VET courses. For the past 30–40 years, the assessment of competencies has received increased attention because of national priorities for continuing education and training. The changing nature of workplace requirements, an ageing workforce and lengthening working lives have given rise to this attention. Studies have demonstrated that some assessment approaches may not be reflecting this change, with trainers drawing on their own assessment strategies, reducing the comparability and accuracy of assessments.

Sadler (2007) argues that assessment approaches should be directed towards gathering evidence from which inferences about capability can be made, whilst Halliday-Wynes and Misko (2012) emphasise that a holistic approach to assessing competency would provide a more comprehensive view of students’ capabilities. As part of this approach, three assessment approaches were identified: cognitive diagnostic assessment tasks, scenario-based assessment and simulated assessment. Highlighted in what follows are several significant issues that related to the assessment approaches identified and used in the study.

Starkly apparent in the results presented in Table 1 were the difficulties that students had with place value. A number of students may have found this question challenging because of the way that the place and value of numbers was shown. For example, “write the missing numbers: 54 = 4 tens _____ ones; 38 = ____ tens 18 ones”. It was reported by trainers and research team members who administered the CDAT that students had indicated that there were errors with how the place and value of numbers were written. This could be the outcome of a range of contexts, including the way that place value may have been previously taught and learned. The question requires students to be flexible in their thinking about the place and value of a number (a worked reading of these items is sketched after this paragraph). If this flexibility is limited, then it could explain why students asked about errors they believed were written into the questions. In the VET courses that were the focus of the study, fractions were frequently used, for example, in civil construction. Fractions were also identified as a source of problems for students in the CDAT. However, whilst this may be the case, the language of the questions and associated interpretation of the image may be problematic. For students who may have difficulties with reading and comprehension, understanding what the questions are asking may present challenges. Trainers reinforced this, stating that the variations in results may be associated with the amount of exposure to the language and symbols of mathematics. Perhaps one way to address this issue is to provide students with scenarios or materials whereby they can represent their thinking in a range of ways initially, rather than through language and symbols, which do increase in complexity in mathematics.
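A worked reading of the two quoted items, using only the numbers given in the questions, shows the flexible renaming they require; students accustomed to the canonical partition (for example, 54 as 5 tens 4 ones) may read these forms as errors:

```latex
% Flexible renaming of place value in the two quoted CDAT items
54 = 4\ \text{tens} + 14\ \text{ones} = (4 \times 10) + 14
38 = 2\ \text{tens} + 18\ \text{ones} = (2 \times 10) + 18
```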

Although the scenario-based assessment results were limited and there was no intention in the project to compare the results across assessment approaches, the results showed that there were significant challenges for the students using this approach. Language proved to be a critical factor here. In responding to the feedback from trainers and teachers, it was thought that providing text to support the questions would assist students with understanding what the questions were asking. This had an unintentional consequence and highlighted the bias in the texts (see Bennett 2011 for elaboration of bias). Students who experienced challenges with reading and comprehension found the text scenarios challenging to read and understand, thus limiting their opportunities to demonstrate their competencies. This was exacerbated as the tasks increased in complexity with a series of directions to work through to come to a final response. The surface area task in Fig. 5 demonstrates the series of directions. As a consequence of providing feedback to trainers on this approach to assessment, it was agreed that a simulated task might be more suitable.

The simulated task asked that students find the surface area of the scaled model of a three-bedroom home. Working as a team and with all members participating in the process, students had to individually calculate the rooms (walls for painting and floors for carpet). The trainer provided support where necessary and when students asked, but generally the students were identified as working through the task themselves. Sadler (2007) raised the issue that assessment of learning should be directed towards gathering evidence for drawing inferences about students’ capabilities. He argued that three conditions should be met: students must be able to do, on demand, something they could not do before, and to do so under unscaffolded conditions. In the case of this task, it was formative and identified as a scaffolded task because students had been learning about how to calculate surface area. The task allowed them to trial what they had learned.

This paper has highlighted three different assessment approaches used in VET courses to assess students’ competencies, and like most research there were limitations. For example, the pre- and post-CDAT assessment could not be conducted because the courses students were enrolled in were limited to intensive blocks of 1 or 2 weeks. Administering the CDAT after a brief time was not likely to show improved knowledge and understandings (see for example, Betts et al. 2011). Whilst the scenario-based assessment does have potential for learners who are strong readers, writers of this approach need to be cognisant of learners’ reading and comprehension capabilities and ensure that they too are able to access what is being asked of them. The simulated assessment also showed potential, with students more fully demonstrating what was being asked; however, it was limiting because students were working over one another to access the different sections of the model for measuring. The use of a life-size building, for example, the classroom, would have been more useful.

In summary, the CDAT showed some advantages for assessing students as it fostered awareness and conversations with teachers and trainers about students’ mathematical knowledge and how they might address the gaps in their learning. The CDAT also engaged the teachers and trainers in conversations about fair assessment and how to find out about students’ understandings. The analysis of the assessment approaches and their relationship to mathematical learning and training courses proved to be effective for the teachers and trainers as well as the students. This was evident during the site visits and feedback meetings, which for some teachers and trainers were repeated over the 3 years of the trials. But the best evidence of the effectiveness of the assessment trials appeared in the 4 years that several of the teachers and trainers sustained their commitment to improving student outcomes and their future opportunities for employment in rural and remote Queensland. For the trainer who engaged and participated in all three assessment approaches, the significance of using a range of strategies to address bias and culturally fair assessment was evident. His focus on understanding where students came from and the point at which they arrived to learn, and on building strong trainer–student relationships, was highly applicable in the VET context.

References

  • Anderson GL (2017) Can participatory action research (PAR) democratize research, knowledge, and schooling? Experiences from the global South and North. Int J Qual Stud Educ 30(5):427–431


  • Australian Government (2012) Annual National Report of the Vocational Education and Training System. http://research.acer.edu.au/cgi/viewcontent.cgi?article=1016&context=transitions_misc

  • Ball DL, RAND Mathematics Study Panel (2003) Mathematical proficiency for all students: toward a strategic research and development program in mathematics education. RAND, Santa Monica, CA

  • Baturo A (2008) Developing mathematics understanding through cognitive diagnostic assessment tasks. Department of Education, Employment and Workplace Relations, Canberra


  • Battista MT (1999) Fifth Graders’ enumeration of cubes in 3D arrays: conceptual progress in an inquiry-based classroom. J Res Math Educ 30(4):417–448

  • Bennett RE (2011) Formative assessment: a critical review. Assess Educ Princ Policy Pract 18(1):5–25. doi:10.1080/0969594x.2010.513678


  • Betts JR, Hahn Y, Zau AC (2011) Does diagnostic math testing improve student learning?. http://gateway.library.qut.edu.au/login? http://search.ebscohost.com/login.aspx?direct=true&db=eric&AN=ED525099&site=ehost-live

  • Billett S, Choy S, Dymock D, Smith R, Henderson A, Tyler M, Kelly A (2015) Towards more effective continuing education and training for Australian workers. National Centre for Vocational Education Research (NCVER), Adelaide


  • Clayton B, Blom K, Meyers D, Bateman A (2003) Assessing and certifying generic skills: what is happening in vocational education and training?. National Centre for Vocational Education Research, Adelaide

  • Craddock D, Mathias H (2009) Assessment options in higher education. Assess Eval High Educ 34(2):127–140. doi:10.1080/02602930801956026


  • Crawford H, Biddle N (2017) Vocational education participation and attainment among aboriginal and torres strait Islander Australians: trends 2002–2015 and employment outcomes. http://caepr.anu.edu.au/Publications/WP/2017WP114.php

  • Department of Education and Training (2009) RIICCM201A carry out measurements and calculations. Retrieved at https://training.gov.au/Training/Details/RIICCM201A. Accessed 20 Dec 2012

  • Goldin GA (2000) A scientific perspective on structured, task-based interviews in mathematics education research. In: Kelly AE, Lesh R (eds) Handbook of research design in mathematics and science education. Lawrence Erlbaum, Mahwah, NJ, pp 517–546

  • Griffin T (2014) Disadvantaged learners and VET to higher education transitions. Occasional paper. National Centre for Vocational Education Research (NCVER), Adelaide


  • Gulikers JTM, Kester L, Kirschner PA, Bastiaens ThJ (2008) The effect of practical experience on perceptions of assessment authenticity, study approach, and learning outcome. Learn Instr 18:172–186


  • Halliday-Wynes S, Misko J (2012) Assessment issues in VET: minimising the level of risk. National Centre for Vocational Education Research, Adelaide


  • Hodge S (2014) Interpreting competencies in Australian vocational education and training: practices and issues. National Centre for Vocational Education Research, Adelaide


  • Jordan NC, Levine SC (2009) Socioeconomic variation, number competence, and mathematics learning difficulties in young children. Dev Disabil Res Rev 15(1):60–68. doi:10.1002/ddrr.46


  • Karmel T (2012) Assessment issues in VET: minimising the level of risk. https://www.ncver.edu.au/__data/assets/file/0024/8682/assessment-issues-in-vet-2620.pdf

  • Kemmis S, McTaggart R, Nixon R (2013) The action research planner: doing critical participatory action research. Springer Science & Business Media, Berlin


  • Kennedy V, Cram F (2010) Ethics of researching with whānau collectives. MAI Rev 3:1–8

  • Lamon S (1996) The development of unitizing: its role in children’s partitioning strategies. J Res Math Educ 27(2):170–193


  • Lamon SJ (2012) Teaching fractions and ratios for understanding: essential content knowledge and instructional strategies for teachers. Taylor and Francis, Hoboken


  • Lesh R, Kelly AE (2000) Multitiered teaching experiments. In: Kelly AE, Lesh R (eds) Handbook of research design in mathematics and science education. Lawrence Erlbaum, Mahwah, NJ, pp 197–230

  • Lozenski BD (2014) Developing a critical eye (i), chasing a critical we: intersections of participatory action research, crisis, and the education of black youth. University of Minnesota

  • Moeller K, Pixner S, Zuber J, Kaufmann L, Nuerk HC (2011) Early place-value understanding as a precursor for later arithmetic performance—a longitudinal study on numerical development. Res Dev Disabil 32(5):1837–1851. doi:10.1016/j.ridd.2011.03.012


  • OECD (2012) OECD reviews of vocational education and training. Learning for jobs: pointers for policy development. https://www.oecd.org/edu/skills-beyond-school/LearningForJobsPointersfor%20PolicyDevelopment.pdf

  • Parsons S, Bynner J (2005) Does numeracy matter any more?. National Research and Development Centre for Adult Literacy and Numeracy, London


  • Pothier Y, Sawada D (1983) Partitioning: the emergence of rational number ideas in young children. J Res Math Educ 14(5):307–317


  • Rush S, Acton L, Tolley K, Marks-Maran D, Burke L (2010) Using simulation in a vocational programme: does the method support the theory? J Vocat Educ Train 62(4):467–479


  • Sadler DR (2007) Perils in the meticulous specification of goals and assessment criteria. Assess Educ 14(3):387–392. doi:10.1080/09695940701592097


  • Siemon D, Enilane F, McCarthy J (2004) Supporting indigenous students’ achievement in numeracy. Aust Prim Math Classr 9(4):50–53. http://www.aamt.edu.au/Professional-learning/Journals/Journals-Index/Australian-Primary-Mathematics-Classroom2/APMC-9-4-50

  • Smith LT (2012) Decolonizing methodologies: research and indigenous peoples, 2nd edn. Zed Books, London


  • Taras M (2002) Using assessment for learning and learning from assessment. Assess Eval High Educ 27(6):501–510. doi:10.1080/0260293022000020273



Acknowledgements

The author acknowledges research staff and assistants, participants, schools, TAFEs and communities where this project was conducted.

Competing interests

The author declares that she has no competing interests.

Availability of data and materials

Data and material are stored electronically on a password-protected QUT Rstore Network Drive and are not currently publicly available.

Consent for publication

All participants were involved in discussions about the project before consenting to participate in the project using the approved NEAF consent forms. They were informed that there would be dissemination to wider audiences and reporting back to the communities, organisations and schools involved. This was indicated on the approved consent form, as was detail about confidentiality and protecting their identities.

Ethics approval and consent to participate

Ethics approval to conduct this study was granted by the Queensland University of Technology Human Research Ethics Committee (Approval Number 0900000345) and the NHMRC Registered Committee (Number EC00171). A comprehensive National Ethics Application Form (NEAF) (now referred to as the Human Research Ethics Application) was submitted and approved for this project and the above approval number allocated. The writing of the ethics application and the conduct of the research were informed by important sources pertaining to research with Aboriginal and Torres Strait Islander People, communities and schools, including the Australian Institute of Aboriginal and Torres Strait Islander Studies Guidelines for Ethical Research in Australian Indigenous Studies (AIATSIS 2012), which identified in the six overarching principles that “at every stage, research with and about Indigenous peoples must be founded on a process of meaningful engagement and reciprocity between the researcher and Indigenous peoples” (p. 3). Further, the guiding principles of Kennedy and Cram (2010), which focus on self-determination, clear benefits, acknowledgement and awareness, cultural integrity and capacity building, were drawn on together with the critical work of Smith (2012) based on reciprocity and empowerment of Indigenous Peoples.

Funding

Funding for this project was from the Australian Research Council Linkage Grants Scheme (LP0989663), titled Skilling Indigenous Australia: Effective learning of numeracy for employment by regional and remote Indigenous students in vocational education and training courses. The team consisted of Professor Tom Cooper, Dr. Bronwyn Ewing and Dr. Christopher Matthews.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information


Corresponding author

Correspondence to Bronwyn Ewing.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Ewing, B. An exploration of assessment approaches in a vocational and education training courses in Australia. Empirical Res Voc Ed Train 9, 14 (2017). https://doi.org/10.1186/s40461-017-0058-z


