About

Accellier is the provider of choice for thousands of people and hundreds of organisations in Australia and around the world. Under our former name SAVE Training, we built a solid foundation on which Accellier now stands, embodying almost 10 years of service to Australia’s Tertiary and Vocational Education Sector. As a testament to this, since our inception in 2010 we have spent only a few thousand dollars on advertising. Our clients are almost entirely referred by our happy graduates and business customers.

Accellier is the trading name of SAVE Training Pty Ltd, a Registered Training Organisation (RTO 32395) offering a range of nationally recognised education and business courses Australia-wide, both online and face to face.

Our mission is to enhance people’s value through excellence in service and learning outcomes.

Following a recent professional development session we ran, I was asked a couple of very thought-provoking questions. I thought it might be helpful to share my thoughts, because as a sector of educators in Vocational Education and Training (VET), we are often grappling together with the interpretation of terminology.

How specific do observation criteria need to be?

The first question was “where do I draw the line between creating observable criteria and not giving out the answer to students? How specific do the observation criteria need to be?”

It’s a great question.

Firstly, I think it is contextual – i.e. the proverbial “it depends.”

Factors such as risk, AQF level, the nature of the unit of competency itself, the tasks being performed and the industry will all have an impact on how the assessment is designed and administered.

This does, however, highlight the tensions sometimes caused by the interplay between the principles of assessment and the rules of evidence. Sometimes our pursuit of reliability comes at the expense of some validity; too much reliability can cost us flexibility, and so on.

In my opinion, observable criteria that specify a standard of performance, no matter how specific, could not be considered “giving the answer to the student.”

After all, the student is the one who must actually perform the task. The old swimming example illustrates this well: you could have a very specific list of criteria describing the qualities and characteristics of an effective swimmer, and giving that list to the student prior to a swimming assessment will not compromise the integrity or reliability of the assessment when they dive in and swim a few laps.

I think it is important, however, that the assessor establishes the assessment in a way that provides a reliable and valid assessment of skills, in an environment that either replicates or is the actual workplace.

If you imagine the student performing the task as a competent graduate on the job, you can ensure the assessment environment is established to reflect that. Then you can consider the instructions you set up for the assessor and student to follow.

Typically, assessments are conducted so that the candidate performs unaided.

I would think that a student performing the assessment task while being repeatedly prompted by the assessor, or while permitted to check the criteria during the assessment, would not give an accurate representation of how the student can perform that task in the real world. However, there would in most cases be no issue with the student inspecting the criteria in the lead-up to the assessment event.

It’s important to distinguish, too, between criteria used to judge performance or product quality, and marking guides for knowledge questions.

Obviously, giving students the answers to refer to during an assessment is not going to be an accurate test of their knowledge. It is, of course, very common for teachers to provide, as part of teaching and learning activities, access to an abundance of detailed information and examples that the student can refer to. But come assessment time, the student needs to be ready to retrieve and use that knowledge unaided, in a way that reflects their ability to do the thing in the real world.

So in summary, I do not think there is a practical limit to how specific criteria can be, nor do I think more specific criteria will compromise the integrity of the assessment. I believe more specific criteria will make the assessment more reliable and less open to interpretation by assessors. They will also improve fairness for the candidate, who can be clear about the performance expected of them in the assessment.

Are evidence criteria the same as benchmark (or model) answers?

This question was asked in reference to the ASQA guidance on the Standards for RTOs, which gives examples of what evidence criteria could include.

In that guidance, the Australian Skills Quality Authority (ASQA) talks about “application of knowledge”, which in my experience means seeing the student use their understanding in order to do the thing. For example, they have knowledge of the correct tool to use, because they selected the correct tool each time during the observation.

They reference model answers, which implies some form of questioning.

Questioning can be used as part of an observation:

“Why did you use a file to shape the timber?”

“What filler did you use for the holes in this timber post?”

Model answers can and often do include evidence criteria. A lot of our marking guides to knowledge questions are written in this format:

Student’s answer should meet the following criteria or include these points:

  • X
  • Y
  • Z

An example of an answer could be “X and Y because of Z.”

So benchmark answers may include evidence criteria. This is especially true if a task or observation includes the assessor asking the student questions; in that case, the criteria will necessarily include model answers of some kind.