1 July 2013

To map or not to map
Operating across a number of States and Territories in Australia, I get to see firsthand the inconsistency in the way RTO standards are interpreted and applied. I understand this is one of ASQA’s major battles and one that may be resolved through an ongoing communication strategy and auditor moderation.

In previous VET Gazettes, I have included a short section titled ‘Auditor’s Tip’ with a snippet about what seems to be the audit focus of the month. I think it is also worthwhile to now add a regular section highlighting what I am seeing across the country as inconsistent application of the standards. I know lots of people read the VET Gazette, so hopefully we can become part of that communication strategy, because the following example is quite a common one and easy to fix.

When a person develops an assessment or a suite of assessments, is it a requirement to then develop a mapping document which aligns their assessment with the unit of competency? The simple answer is NO.

It has never been a requirement under either the AQTF or the NVR standards that this is done, but I suggest the idea may come from unit TAEASS401B (or any of its predecessors), which may explain why some auditors are imposing the condition. In the TAEASS401B unit, one of the performance criteria states ‘Map assessment instruments against unit or course requirements’.

I am not suggesting for one minute that there’s no value in developing one, but when the time taken to develop (and continuously improve) the mapping tool is more than the time taken to write the assessment, you have to question whether it’s worth it. If there are multiple assessors who regularly sit down and validate assessments, it can be a handy tool. However, if there is only one assessor who writes their own assessment materials and has experience in doing so, the development of a mapping tool is arguably not a good use of time. Developing a mapping tool becomes more difficult when assessments are integrated and, in my experience, when the assessments are well written at higher AQF levels.

Aligning the assessment to the unit of competency, whether stand-alone or holistic, as a measure of its validity for the purposes of an audit is the job of the auditor. It is not the job of the RTO to present a mapping document.

Check the auditor’s tip in this newsletter for more information.

The nature of non-compliance
There’s little doubt that since the regulation of RTOs changed under a Federal model, the requirements being placed on RTOs have changed significantly (for some). Whilst the shift in an audit’s focus cannot necessarily be put down to changing standards, the way in which those standards are interpreted and applied seems to have undergone a transformation. The idiom ‘the devil is in the detail’ expresses the idea that whatever one does should be done thoroughly, and perhaps in times past RTOs may not have been as thorough as they might have been, particularly in making sure that their written procedures actually reflect their practice. This common mismatch is a pitfall of buying an ‘off-the-shelf’ system, or perhaps of not continuously improving the policy and procedure manual until a week before the audit is called.

As I get older and wiser and lose more hair, some of that loss I put down to scratching my head a lot more than I used to in trying to work out the difference between thorough and pedantic audit outcomes. What is becoming more evident to me is that RTOs are receiving a greater number of non-compliances, albeit each one is often minor. The result is that the collection of these often minor non-compliances looks enormous at the end. For example, with a minor issue such as an ‘A’-coded unit in a training and assessment strategy that should be a ‘B’-coded unit, the report shows the RTO as non-compliant against SNR 15, because the summary at the start of the audit report only reports at the SNR standard level, not at the element level.

I am also seeing audit outcomes that are called non-compliances when, in my opinion, they are not issues of compliance. One thing that our NVR standards do not do, which many other similar quality management frameworks do, is give the auditor the option of a suggested improvement or, in ISO terms, an ‘area of concern’ or an ‘opportunity for improvement’. These are not non-compliances, but the area of concern in particular puts the RTO on notice that it will be checked next time. In not having these options, the current audit model seems to take away the auditor’s discretion on whether an identified anomaly actually breaches anything in the standards, which we know are, at best, very subjective. The result is that if it looks wrong, it is deemed non-compliant, which of course is not always the case.

As I collect more audit reports from clients I have the opportunity to see the trends in what is being focussed on during audits and, as always, I agree with some but not with others.

If you have an audit outcome that you are not sure about or simply want to share, just send me an email because, like many, I would like to understand the trends and inform people about what they can do to stay compliant.

Quality – do we all agree?
For years now we have been reading and hearing about quality in the VET sector, and each of us has an idea in our head of what that means and how RTOs achieve it. Without a dedicated course in how to integrate quality management principles into a training business, we have all had to learn along the way. Quality is a multifaceted and very complex matter, and those who have learned along the way often have very different ideas as to what it means. Throw into the mix an ever-evolving regulatory model which, having such a significant impact on how those businesses operate, seems to change the whole dynamic of quality in VET each time the regulatory standards are reviewed. Our regulatory cycle of continuous change sees us moving from prescription, to outcome-focused, to prescription through regulation and legislation, to……

At the time of writing this article, I have just read the NSSC paper ‘Improving vocational education and training – the case for a new system’, which explains why changes to regulatory standards are justified. Given that there have been no fewer than five changes in the last eleven years (this will be the sixth since 2001), one may conclude that regulating RTOs to achieve quality outcomes hasn’t worked all that well to date. Whilst I personally have my own theory, you have to question whether these changes will work this time.

I, like many, have numerous questions about quality in RTOs but one I wish to pose is this:

If we accept that quality cannot be regulated into the sector and that we simply regulate poor quality out of the sector, do we all agree on what poor quality is?

At the moment the regulatory model seems to be at a stage of tightening the application of the rules (the rules haven’t changed all that much, but interpretation has), with the result that there appear to be fewer RTOs playing in the VET landscape, through either regulatory action or simply choosing not to continue. Back to my question, and I want to make this point very clear.

The philosophical concept of causality tells us that having fewer RTOs in the system does not mean that those who remain will get any better. We can regulate poor quality out, but the result is NOT that what is left gets better. In fact, there may even be the opposite result if those who stay in the system have to service a broader market and their resources become stretched, and so on. So by getting rid of poor quality, we cannot assume that a corresponding increase in quality will ensue.

We (as a sector) should know more than anyone that education is the answer. So what is being done in an attempt to raise awareness of what quality is, what it means and how to achieve it in an RTO? Professional development (by definition) is supposed to be a progressive activity that extends (develops) practice. With the monotonous ‘How to pass an audit’ PD session being the biggest seller, are we developing anyone by giving them survival tips? Rather than the survival boot camp, should we not be talking more about sustainable business?

This article is not about my own position on what the role of the regulator is or even should be but I want to start with ASQA’s vision statement which reads:

“ASQA’s vision is that students, employers and governments have full confidence in the quality of vocational education and training outcomes delivered by Australian registered training organisations”.

The statement itself challenges us to consider several recipients of VET sector outcomes and what quality means to students, employers and governments. So I now have four separate questions.

1. What does quality mean to a student?
2. What does quality mean to employers?
3. What does quality mean to governments?
4. Is each of these stakeholders expecting something different?

We read too often that there are RTOs in the business to educate and some in the business to make money, with the education aspect coming second in their priorities. I am not suggesting that a business has to survive without making money, but in the discussion about quality the priority becomes the focus. Quality can be seen from a number of perspectives, and the many definitions one reads often raise questions about which definition dominates everyday VET practice. In reality, what the various stakeholders need and require, in respect of quality, can be very different things. Before tackling any discussion about what the various stakeholders need or want, it pays to look at some commonly held views regarding the definition of quality.

The transcendent approach
Goes way back to the days of Plato and Aristotle and is probably the most used and understood definition, although it is somewhat subjective. This meaning is about superiority or excellence in the product or service. Although many of us in the VET sector agree this is what we want, we have trouble defining (and agreeing on) what excellence and superiority mean. This definition is considered the philosopher’s view because quality is both absolute and universally recognisable. Where this becomes difficult is that it can (arguably) only be determined when the outcome is compared to something. In everyday VET it is not common for a person to do the same course twice, so can the student determine quality? If an employer uses two RTOs to deliver courses to its staff, it may have a chance of comparing the two and determining the superiority of one over the other. The problem there, of course, becomes a matter of who the employer tells when they find this out. I would suggest that the quality indicators for employers do not achieve this, so the data which ‘indicates quality’ needs to be gathered elsewhere.

Manufacturing-based definition
Is more aimed at ensuring a standard or specification is met and is based on process efficiency. This approach is the commonly read iteration when one studies the post-WW2 Japanese experience with key figures like Deming, Juran and Crosby. To figure out whether things ‘work’ and subsequently have quality outcomes, this model relies very much on controlling inputs, using statistics and objective quantitative measurement. One of the major problems with using this in VET is that the main focus of this model is cost reduction, and at best our data is largely qualitative. Submitting figures such as completion rates is not necessarily data on which we can accurately judge quality.

The regulatory standards in VET lack that level of prescription and objectivity, so we can’t really use this definition to great effect because there is little that is clearly measurable. It is, however, a commercial reality that to be competitive one must make the product or provide the service cheaper than the competitor and/or deliver it in a shorter timeframe.

Product-based definition
The one that many marketing teams or economists may use is more difficult to apply to VET outcomes, as our ‘product’ is likely to be a combination of what it is we teach, who teaches it, how it is taught and so on. As we only have a minimum standard (and in my opinion, a low one) for the ‘who’ part of the equation, if we want a product definition to be useful we would somehow have to benchmark or otherwise quantify the delivery product, which, in the absence of any set curriculum, isn’t going to happen soon. As a large part of the regulatory approach is about making sure we assessed something, the focus is on the output (not outcome) rather than the input (product). One of the issues in this approach is that higher quality means higher cost.

The two remaining definitions are those most likely encountered in today’s VET sector because, more often than not, we can only measure outcomes based on some very fundamental and shallow evaluation tools we call quality indicators. If we were to embrace a broader and more comprehensive model like the four-stage Kirkpatrick model, I am sure the sector would gain a much better picture of the real value of training, or its overall quality. Alas, we are not there yet.

These two more prominent definitions are the consumer-based definition and the value-based definition.

The consumer-based definition is ‘to meet or exceed customers’ expectations’. Measuring quality based on the expectation of the customer, especially in the VET sector, can lead to false assumptions, as there is an increasing number of purchasers of VET services who are simply seeking credentials rather than the skills required to perform a particular function to an acceptable industry standard. The client’s expectation is that they can complete a course over a short duration because they need a piece of paper to get a job or to progress in a job. The credential is the expectation. Those RTOs who can maximise the issuing of credentials are meeting customer needs and therefore, by one definition, providing a quality product. I am sure Plato and Aristotle would not agree.

If, however, we considered another consumer, that being the ‘consumer’ of VET graduates (i.e. a range of industries), then I am sure the ‘fast track to paper’ pathway would not address their need, which is to have competent people, not merely credentialled people. I am not aware of any systematic approach or structured evaluation process, taken by any organisation, that measures how VET outcomes address industry needs.

The last alternative definition of quality is the value-based view. This may be a monetary value, or it may be another, less immediately tangible value. A person who requires the qualification simply to get a job or progress in an existing one is likely to seek the least expensive, shortest course, as the value is in taking the cheapest and quickest route. A person who wishes to gain new skills may choose a longer course which has more content, as the value they perceive is in gaining knowledge and skills they can take on their lifelong journey. So while each VET stakeholder may have a different idea of what constitutes quality, how on earth can we agree on how to regulate poor quality out?

Auditor’s Tip
The auditor’s tip for this edition appears on the surface to be fairly straightforward but, as always, will be difficult for some to apply. The difficulty will probably be letting go of old habits like questioning “is this a requirement or not?”

Knowing what is required by the AQTF/NVR standards is the $10M question. Just when you think you have got it, you hear of someone else who was told something different.

Whether or not a document such as a policy, procedure or assessment mapping tool is required, if you table it at the audit you offer it up as an auditable document. This happened to me recently at an audit when a client tabled an assessment mapping tool, which then led to a ‘discussion’ about the validity of the tool. Although not required for the purpose of the audit, the client created a non-compliance by tabling the document.

Similarly, in another recent audit with a client, some student files were sampled and found to lack adequate detail, resulting in a non-compliance. The files were over two years old and not subject to any funding arrangements, so technically they had been retained for longer than the ASQA guideline suggests. However, as they were tabled, they were open for criticism.

The moral of this story and my auditor tip is this:

Be prepared for the audit, but remember that sometimes less is best. Wait to be asked to provide evidence of something rather than getting out those lever arch files full of paperwork. If what you are being asked for is something you are unsure about, you have the right to politely request more information, particularly the details of which standard, guideline or document makes it a requirement.