How the rise of AI skills reshapes what RTOs need to deliver, covering training product updates, unit of competency implications, and ASQA expectations around AI-integrated delivery
Introduction: AI Is No Longer Optional for RTOs
LinkedIn’s Skills on the Rise 2026 report identified prompt engineering and AI model training among Australia’s fastest-growing professional skills, alongside communication, stakeholder collaboration, and governance capabilities. For the vocational education and training sector, this is not a passing headline. It is a regulatory, pedagogical, and strategic signal that RTOs can no longer afford to treat artificial intelligence as a niche ICT elective or an afterthought in training and assessment design.
With the Standards for RTOs 2025 now in force since 1 July 2025, ASQA’s regulatory framework has shifted decisively toward outcomes-based quality, self-assurance, and continuous improvement. At the same time, the ICT Training Package is being redesigned by the Future Skills Organisation to address urgent skills gaps in artificial intelligence and cyber security, with new qualifications, units of competency, and skill sets expected to deliver flexible, industry-aligned pathways into AI-enabled roles.
Drawing on extensive experience working with registered training organisations across Australia, I have observed that the RTOs best positioned for 2026 and beyond are those that are proactively embedding AI literacy into their training products, assessment strategies, governance frameworks, and trainer professional development, rather than waiting for regulatory direction after the fact. This article provides a comprehensive, fact-based analysis of what AI literacy means for RTOs in 2026, structured around five key areas: the national policy context, training package reform, unit of competency and assessment design, ASQA expectations under the new Standards, and practical actions RTOs should take now.
1. The National Policy Context: Why AI Literacy Matters Now
1.1 National Skills Plan 2025–26 and Digital Transformation
The Australian Government’s National Skills Plan 2025–26 Update explicitly links VET quality to digital transformation, data systems, and cyber capability. The Plan signals that AI and data skills will be key levers in workforce planning and funding priorities across the economy. For RTOs, this means that digital and AI literacy are no longer confined to information and communications technology qualifications; they are cross-sectoral competencies that affect business, health, community services, construction, manufacturing, and every other industry area delivered through the national training system.
The policy direction is clear: Australia’s skills system must produce graduates who can work effectively with AI tools, understand their limitations, and apply human judgement in AI-augmented workplaces. RTOs that fail to reflect this in their training products risk delivering qualifications that are misaligned with both industry expectations and government priorities.
1.2 The Digital Education Council AI Literacy Framework
The Digital Education Council’s AI Literacy Framework provides a structured model for understanding what AI literacy means in educational contexts. The framework identifies several foundational and specialised competency domains, including understanding AI and data, ethical AI use, human-AI collaboration, and domain-specific applications. While developed primarily for higher education, the framework is directly adaptable to vocational contexts and industry-specific qualifications.
For RTOs, the framework offers a useful lens for defining what “AI literate” means for learners at different qualification levels. A Certificate III graduate working in allied health, for example, needs different AI capabilities than a Diploma graduate in project management or a Certificate IV learner in training and assessment. The common thread is that all graduates should understand what AI can and cannot do, how to use AI tools ethically and effectively in their workplace, and how to critically evaluate AI-generated outputs.
Key Takeaway: AI literacy is now a foundational expectation across VET, not a niche ICT add-on. National policy, training package reform, and educational frameworks all converge on the same message: RTOs must treat AI capability as a cross-sectoral requirement that influences training design, assessment, governance, and learner support from 2026 onward.
2. Training Package Reform and AI-Related Content
2.1 The ICT Training Package Redesign
The Future Skills Organisation is currently redesigning the ICT Training Package to address critical industry skills gaps identified through the ICT Needs and Gap Analysis completed in 2025. That analysis found that the existing ICT Training Package does not adequately meet industry skill needs, particularly in areas such as artificial intelligence and cyber security, and called for shorter, more flexible training options with a common language to describe digital skills across the economy.
The redesigned ICT Training Package will deliver a new suite of qualifications, units of competency, and skill sets that clearly separate generalist digital skills from specialist ICT streams. Importantly, the update includes new and updated qualifications aligned to emerging areas, including artificial intelligence and cybersecurity, with clear job role pathways and stackable credential options. The new AI-aligned units are being designed to be flexible and contextualisable across different delivery environments, which opens the door for RTOs delivering non-ICT qualifications to import AI-focused units into business, health, construction, and other training products.
2.2 The Qualification-First Model and AI-Infused Outcomes
Qualifications reform is shifting VET design from a unit-by-unit approach to a qualification-first model, with stronger emphasis on holistic outcomes and clearer pathways across VET and higher education. For AI and digital careers, this means that AI literacy should be designed into the overall qualification outcome, defining what a competent graduate can do with AI in the workplace, and then expressed through clustered units, integrated assessments, and workplace learning, rather than bolted onto a single elective.
For RTOs, this shift has direct practical implications. When reviewing scope, designing training and assessment strategies, and developing learning resources, RTOs should be asking: what does a graduate of this qualification need to be able to do with AI tools in their industry? The answer will vary by sector, but the question itself should be embedded in every training product review from 2026 onward.
2.3 Cross-Package Application: AI Beyond ICT
One of the most significant implications of the ICT Training Package redesign is the potential for AI-related units and skill sets to be imported into non-ICT qualifications. A Certificate IV in Business, for example, could include an AI skill set covering the use of AI tools for data analysis, document generation, and decision support. A Diploma of Nursing could incorporate units on AI-assisted clinical documentation and the ethical boundaries of AI in patient care.
RTOs should be anticipating this cross-package use in their training and assessment strategy design, rather than treating AI as something that belongs exclusively within ICT qualifications. The industry signal is clear: employers in every sector are increasingly expecting graduates to arrive with practical AI capabilities, and RTOs that respond early will have a competitive advantage in both learner attraction and employer satisfaction.
Practical Implication for RTOs: Training and assessment strategy (TAS) and resource design should anticipate cross-package use of AI-focused units. RTOs delivering business, health, community services, and trades qualifications should be mapping where AI-related skill sets can be imported or contextualised to meet emerging job role expectations in their industries.
3. Unit of Competency and Assessment Design: Where Prompt Engineering Fits
3.1 From “Using Software” to “Working with AI Systems”
Even before new AI-specific units formally land in training packages, RTOs can and should be interpreting existing digital, problem-solving, and information skills through an AI lens. The Australian Digital Capability Framework (ADCF), now used in RTO readiness work for the 2025 Standards, provides a structured approach to this. The ADCF splits digital capability into five domains: information and data literacy, communication and collaboration, digital content creation, protection and safety, and technical problem-solving, each assessed at foundation through to specialised levels.
Within this framework, prompt engineering can be positioned as the practical “user-side” of AI literacy: how learners, trainers, and employers ask, instruct, and critically evaluate AI outputs in real vocational tasks. This is not about turning every learner into a software developer. It is about ensuring that graduates can use AI tools effectively and responsibly in their specific workplace context, whether that means drafting clinical notes, generating project reports, analysing safety data, or supporting customer communications.
3.2 Embedding Prompt Engineering in Performance Criteria and Evidence Requirements
When designing or updating units of competency and assessment tools, RTOs can embed prompt engineering and AI literacy across several evidence dimensions:
| Evidence Dimension | AI and Prompt Engineering Examples |
| --- | --- |
| Performance Criteria | Selects and configures AI tools appropriate to the task; designs prompts or inputs to obtain relevant outputs; tests and refines prompts to improve accuracy and relevance; documents AI use and validates outputs against workplace standards |
| Knowledge Evidence | Range and limitations of AI tools used in the industry; risks of bias, hallucination, and privacy and intellectual property breaches; organisational policies and procedures governing AI use |
| Foundation Skills | Digital literacy in AI-mediated tasks; communication skills for articulating AI requirements; problem-solving and critical thinking when evaluating AI outputs |
| Assessment Conditions | Conditions specifying whether AI tools are permitted, restricted, or prohibited for specific assessment tasks; requirements for disclosure and attribution of AI-assisted work |
3.3 Assessment Integrity in an AI-Enabled Environment
The integration of AI into assessment raises significant integrity questions that RTOs must address proactively. ASQA’s Corporate Plan 2025–26 emphasises cracking down on fraudulent practices, including non-genuine assessment evidence and non-authentic student work, and maintaining integrity as a core regulatory priority. Separately, ASQA’s AI Transparency Statement sets out principles for the agency’s own use of AI and signals expectations for transparency, risk management, data protection, and human oversight when providers use AI in their operations.
For RTOs, the practical implications are clear. Every training and assessment strategy must explicitly state whether and how learners may use AI in assessment. This should be granular: AI may be permitted for research and drafting with full attribution in some tasks, restricted to specific supervised activities in others, and entirely prohibited in high-risk competency demonstrations such as clinical skills or safety-critical operations. Assessment tools should include conditions of assessment that specifically mention AI, and validation processes under Clauses 1.3 to 1.5 and 4.4 of the Standards for RTOs 2025 must consider AI-related risks, including the similarity of AI-generated responses and the challenge of authenticating learner work.
RTOs should also be updating benchmark assessment samples and validation checklists to account for AI. If an assessor cannot distinguish between a learner’s genuine competency demonstration and an AI-generated response, the assessment tool itself needs redesign. This is not about banning AI; it is about designing assessment conditions that ensure the evidence collected is sufficient, valid, current, and authentic, as the Rules of Evidence have always required.
4. ASQA Expectations Under the Standards for RTOs 2025
4.1 Outcome Standard 1: Training and Assessment with Digital and AI Capability
From 1 July 2025, all ASQA-regulated RTOs must comply with the Standards for RTOs 2025, which shift emphasis from prescriptive compliance to high-quality outcomes, self-assurance, and continuous improvement. Outcome Standard 1 requires RTOs to deliver engaging, industry-relevant training and robust assessment. ASQA’s practice guides clarify expectations for contemporary learning methods, industry currency, and integration of technology where it supports outcomes.
For AI-integrated delivery, this translates into three core requirements. First, RTOs must demonstrate that AI tools are used to deepen learning, through simulations, data analysis, scenario building, and practice activities, not to shortcut the development of competence. Second, trainers must have both vocational competency and digital or AI currency where AI is part of delivery or assessment, evidenced through recent experience using industry-relevant AI tools. Third, documentation must show how AI use supports, rather than replaces, the learner’s performance of the actual elements and performance criteria specified in the unit of competency.
4.2 LLN, Digital Literacy, and AI Readiness at Enrolment
The 2025 Standards expect RTOs to assess language, literacy, numeracy, and digital literacy before enrolment and provide targeted support to avoid enrolling learners who cannot reasonably succeed in their chosen program. Commercially available LLN and digital literacy assessment tools already align with the 2025 Standards and the Australian Digital Capability Framework levels, assessing whether learners have the digital foundations needed for successful training.
In AI-integrated delivery environments, RTOs should extend their pre-training review to include AI-related digital skills where relevant. This might include basic search literacy, the ability to follow procedural instructions in digital environments, and an understanding of privacy and security prompts. The results should inform both induction content, such as an AI literacy orientation module for learners with low digital confidence, and reasonable adjustments that support learners to engage with AI-enhanced training without being disadvantaged.
4.3 Governance, Risk, and Self-Assurance for AI Use
ASQA’s Corporate Plan 2025–26 highlights risk-based regulation, data analytics, and digital transformation in how it monitors providers, with a strong focus on uplifting sector capability, protecting integrity, and encouraging self-assurance rather than compliance for audit day only. Providers with strong, evidence-based self-assurance and quality outcomes may see reduced regulatory burden, while those with poor practices will face more intensive scrutiny.
For AI governance, this means that AI policies must be part of the RTO’s governance framework: approved by the governing body or leadership team, aligned to privacy, cybersecurity, and academic integrity obligations, and regularly reviewed. Risk registers and internal audit plans should explicitly include AI-related risks, such as dependence on vendor tools, data leaks, undisclosed AI-assisted assessments, and staff skills gaps. Quality indicators and self-assurance evidence, including moderation records, student feedback, and completion and outcome data, should be analysed for any negative patterns linked to AI-mediated learning or assessment.
Governance Checklist: AI in RTOs
1. AI policy approved by governing body/CEO and aligned to privacy and integrity obligations
2. AI risks included in the risk register and internal audit plan
3. Clear "Use of AI in Student Assessments" policy communicated to all learners and staff
4. Trainer PD records demonstrating AI and digital currency
5. Validation records that address AI-related assessment risks
6. Quality indicator analysis for patterns linked to AI use
7. Regular policy review cycle that captures AI developments
5. Five Actions RTOs Must Take in 2026
Action 1: Redesign TAS and Learning Resources with AI Literacy in Mind
For each AI-relevant qualification or unit on scope, RTOs should explicitly state in the training and assessment strategy how AI will be used in training, how it will be controlled in assessment, and how this aligns with unit and qualification outcomes. AI-related outcomes, including prompt engineering skills, should be mapped to performance criteria, knowledge evidence, and foundation skills, with this mapping documented in validation and mapping records. Learning materials should teach safe, ethical, and effective AI use, covering prompt design, output verification, documentation of AI use, and respect for intellectual property and privacy.
Action 2: Build Trainer and Assessor Capability in AI
Under the 2025 Standards, trainers must have vocational competency and current skills in training, assessment, and relevant industry practices. Where AI tools are integrated into delivery, RTOs must be able to evidence trainer currency in those tools and in related industry practices, through professional development logs, industry engagement records, and workplace projects. For prompt engineering specifically, this means professional development on how different AI models respond to instructions, context, examples, and constraints; designing prompts that reflect unit performance requirements and workplace contexts; and using AI outputs critically rather than accepting them at face value.
Action 3: Update Student-Facing Policies and Academic Integrity Frameworks
RTOs should implement a clear “Use of AI in Student Assessments” policy that defines permitted, restricted, and prohibited AI use, disclosure expectations, and consequences for misuse. The policy must be communicated to all learners at enrolment and reinforced throughout training. It must also be consistent with the RTO’s obligations around privacy, cybersecurity, and data handling, as highlighted in the National Skills Plan and national cyber capability strategies. Existing academic integrity and plagiarism policies should be updated to explicitly address AI-generated content as a distinct category alongside traditional plagiarism.
Action 4: Extend Pre-Training Review to Include AI and Digital Readiness
Where programs incorporate AI tools in delivery, RTOs should extend their LLN and digital literacy assessments at enrolment to capture AI-related readiness. This might include basic familiarity with digital search tools, the ability to follow structured digital instructions, awareness of data privacy principles, and comfort level with technology-mediated learning. The results should inform both learner support planning and the design of induction or bridging content that brings learners to the level of digital capability required for the program.
Action 5: Prepare for ASQA Audits and Evidence Requests
Given ASQA’s risk-based approach and growing interest in technology-enabled delivery, RTOs should have audit-ready evidence available at all times, not just before scheduled audits. This includes training and assessment strategies, mapping documents, and learning and assessment resources that clearly show how AI is integrated without compromising unit requirements. It also includes AI policies; staff professional development records; examples of internal monitoring, such as assessment validation records and AI-related complaints or incidents and how they were resolved; and evidence that learners’ digital and AI readiness was considered at entry, with appropriate LLN and digital support provided.
6. The Broader Picture: AI Skills as a Competitive Advantage
The LinkedIn Skills on the Rise 2026 report highlighted a notable pattern: the fastest-growing skills are not exclusively technical. Communication, stakeholder collaboration, cross-cultural competency, governance, ethics, and people development all featured prominently alongside prompt engineering and model training. This mirrors what industry leaders and workforce researchers have been saying for some time: as AI reduces the cost of generating technical output, the premium on human judgement, relationships, and accountability increases.
For RTOs, this convergence of technical and human skills reinforces what competency-based training has always been designed to deliver: graduates who can apply knowledge and skill in workplace contexts, make sound judgements, communicate effectively, and adapt to changing conditions. AI does not replace this educational mission. It amplifies it. The RTO that produces a graduate who can both use AI tools effectively and exercise critical thinking, ethical awareness, and communication skills to use them responsibly will deliver the most value to employers and the economy.
The ICT Training Package redesign, the Standards for RTOs 2025, the National Skills Plan, and the broader workforce signals captured in reports like LinkedIn’s all point in the same direction: AI literacy is now a baseline expectation across the Australian VET sector. The RTOs that act on this in 2026, by embedding AI into their training products, assessment practices, governance frameworks, and professional development programs, will not only meet regulatory expectations but will position themselves as leaders in a rapidly changing skills landscape.
The message for RTOs in 2026 is unambiguous. AI literacy, including practical skills such as prompt engineering, is no longer confined to specialist ICT qualifications. It is a cross-sectoral capability expectation that must be reflected in how RTOs design training products, structure assessments, govern their operations, support learners, and develop their workforce. The regulatory, policy, and industry foundations are already in place. The Standards for RTOs 2025 provide the framework. The ICT Training Package redesign provides the content. ASQA’s regulatory priorities provide accountability. What remains is for RTOs to act, deliberately, strategically, and with the self-assurance mindset that the new regulatory environment demands.
References and Further Reading
ASQA (2025). Standards for RTOs 2025. https://www.asqa.gov.au/rtos/2025-standards-rtos
ASQA (2025). Practice Guides for the Standards for RTOs 2025. https://www.asqa.gov.au/rtos/2025-standards-rtos/practice-guides
ASQA (2025). Artificial Intelligence (AI) Transparency Statement. https://www.asqa.gov.au/about-us/reporting-and-accountability/artificial-intelligence-ai-transparency-statement
ASQA (2025). Corporate Plan 2025–26, ASQA IQ October 2025. https://www.asqa.gov.au/news-events/news/asqa-iq-october-2025
Department of Employment and Workplace Relations (2025). National Skills Plan 2025–26 Update. https://www.dewr.gov.au
Digital Education Council (2025). AI Literacy Framework. https://www.digitaleducationcouncil.com
Future Skills Organisation (2025). ICT Training Package Update. https://www.futureskillsorganisation.com.au/ict-training-package-update
LinkedIn News Australia (2026). Skills on the Rise 2026: The Fastest-Growing Skills in Australia. LinkedIn.com
TLRG (2025). Digital Capability: Preparing for New RTO Standards 2025 and the Australian Digital Capability Framework. https://tlrg.com.au
