Artificial intelligence is no longer a distant frontier; it is here, reshaping economies, industries, and everyday work. Across Australia, the conversation about AI is growing louder, yet the debate risks being muddied by political indecision, fragmented adoption, and weak governance. As the federal government holds its Economic Reform Roundtables, the early message is clear: sweeping new laws to control AI’s risks are unlikely. Instead, the focus is shifting toward practical reforms that enable organisations to use AI responsibly, lift productivity, and rebuild community trust. In parallel, the Governance Institute of Australia has launched its AI Governance Expert Advisory Panel, an initiative designed to guide business leaders on embedding safe and ethical AI in the workplace.
The mission could not be more urgent: ensure AI enhances work and society without eroding rights, safety, or trust. This requires businesses to understand existing regulatory frameworks, educate leadership teams, and clearly communicate their governance strategies. Without these measures, Australia risks falling behind international peers while exposing workers and consumers to the growing threat of “shadow AI”: the use of unsanctioned AI tools by employees, often without any safeguards for privacy, data security, or accuracy.
A Workforce Unprepared for AI
Evidence shows a troubling gap between AI’s potential and the reality of its use in Australian organisations. Large corporates may have the resources to experiment responsibly, but small and medium enterprises and not-for-profits frequently lack the expertise and governance structures required. The result is inconsistent adoption and increased risk. Surveys suggest many employees are already using generative AI tools independently, uploading confidential information to offshore systems without oversight and creating risks that could compromise intellectual property, breach consumer rights, or damage supplier relationships.
This “shadow AI” problem is a symptom of underprepared governance. Boards and senior executives often lack the knowledge to judge whether AI is being deployed safely and ethically, leaving organisations exposed. Upskilling leadership and staff at every level is essential, not only for compliance but for maintaining public confidence in how technology is used in business and government.
Guardrails That Put People First
The Governance Institute has consistently argued that AI adoption must be human-centred. This means ensuring that fairness, transparency, accountability, and contestability are at the heart of every system deployed. Many of the tools now entering Australian workplaces are imported, “off-the-shelf” products that were not designed with Australian compliance obligations or human-rights protections in mind. Without oversight, they risk embedding bias into hiring, eroding workplace rights through algorithmic monitoring, or producing automated decisions that individuals cannot contest.
The stakes are highest for the most vulnerable in society. Without protections built into design and deployment, those already marginalised could be further disadvantaged. Human-centred frameworks demand more than rhetoric; they require practical measures that embed ethical safeguards in business processes and regulation.
Productivity and Reform: Australia’s Immediate Needs
As part of the national economic reform discussions, the Governance Institute has recommended targeted actions. These include regulatory “sandboxes” and safe harbour provisions to allow innovation while setting responsible boundaries, increased investment in the National AI Centre to bring Australia’s funding into line with comparable economies, and the creation of government-backed verification tools for AI systems.
Privacy reforms are also crucial, ensuring Australians retain clear rights over how their data is used. Ethical guardrails should be mandatory in high-risk use cases, alongside education and workplace incentives to boost AI literacy. Without widespread understanding across the workforce, AI’s promised productivity gains will remain out of reach.
International Lessons
Globally, countries are racing to define the rules. The European Union’s AI Act has introduced risk-based compliance obligations; the United States is focusing on sector-specific regulation and voluntary frameworks such as NIST’s AI Risk Management Framework; and Singapore and Japan are building trust through practical governance models.
If Australia hesitates, it risks becoming a policy follower, forced to adopt standards set offshore. That could undermine national sovereignty and burden local businesses with compliance regimes designed elsewhere. Clear domestic standards are needed not only to protect consumers but also to ensure Australian organisations remain competitive in global markets.
Where the Risks Are Most Visible
Every sector faces its own challenges. In healthcare, AI diagnostic systems offer immense promise but may entrench inequality if trained on biased datasets. In education, algorithmic grading raises concerns over privacy and fairness. Financial services already face global scrutiny for bias in lending and insurance algorithms, and Australia will not be immune. The public sector has had its own painful lessons: the Robodebt scandal showed how damaging automated systems can be when human oversight and contestability are absent.
Data Sovereignty and the Shadow Economy
Unregulated use of AI creates vulnerabilities that go beyond privacy. Many AI systems default to offshore data storage, creating compliance risks under the Privacy Act and exposing sensitive information, including government data, to foreign jurisdictions. At the same time, supply chains are vulnerable: imported AI systems may contain embedded flaws or backdoors, making organisations targets for cyberattacks.
Closing the Skills Gap
Australia’s broader technology skills shortage compounds these risks. Projections suggest a gap of 200,000 tech workers by 2030, with AI specialists—data scientists, ethicists, and compliance experts—especially scarce. Building capability cannot rest on universities alone. Vocational education, lifelong learning, and targeted TAFE programs must become central to building AI literacy across the economy, especially for SMEs and NFPs. Without such investment, the most vulnerable sectors will continue to lag.
Law, Policy and the Missing Pieces
Australia’s legislative frameworks are struggling to keep pace. The Privacy Act modernisation is not yet complete, leaving gaps around automated profiling and explainability. Consumer protection laws do not yet adequately cover deceptive AI products such as deepfakes. Workplace protections may not address AI-driven rostering, monitoring, or algorithmic decision-making, raising questions for the Fair Work Commission and safety regulators. Without clarity, organisations face uncertainty, and workers risk harm.
Economic Stakes and the Innovation Challenge
The economic opportunity is immense. AI could add $115 billion annually to Australia’s economy by 2030—but only if adoption is safe, coordinated, and widespread. SMEs, which form the backbone of the economy, risk being left behind without government support. At the same time, without incentives such as regulatory sandboxes, start-ups may relocate innovation offshore, taking with them talent and intellectual property. Australia cannot afford this kind of innovation flight at a time when global competition is intensifying.
Embedding Ethics in Practice
Strong AI governance goes beyond ticking compliance boxes. It must engage with deeper cultural and ethical principles, including Indigenous data sovereignty frameworks. Communities must be assured that AI will not increase inequality or undermine dignity. Workers and citizens must have rights of contestation when decisions affecting them are automated. Above all, trust will depend on visible transparency at every stage of deployment.
A National Roadmap
Australia’s path forward must include a comprehensive AI strategy aligned with global standards but responsive to local needs. Independent third-party verification services could serve as an “AI audit system” similar to financial auditing. Public awareness campaigns, modelled on cybersecurity initiatives, can help citizens understand how to use AI safely and what rights they hold. The creation of an AI Ombudsman or regulator should also be considered to provide accountability and handle disputes.
Conclusion: Building the Foundations of Trust
AI offers extraordinary potential for productivity, innovation, and national growth. Yet its promise will only be realised if it is underpinned by strong governance, ethical guardrails, and clear policy. Without these, Australia risks fragmented adoption, widening inequality, and a dangerous erosion of trust. With them, the country can build a resilient, innovative economy that empowers workers, strengthens communities, and ensures progress remains human-centred.
Policy certainty is not an optional extra. It is the foundation of Australia’s ability to navigate the next technological frontier with confidence. The choice is stark: lead with vision, or accept rules imposed from elsewhere.