A nation debating AI from the Hume Highway to the head office
In 2025, artificial intelligence is as likely to be debated on long drives between Wagga and Albury as it is in boardrooms in Sydney or classrooms in Perth. Parents compare notes on homework apps that “sound a bit too clever,” teenagers swap prompts like they once traded playlists, and small-business owners wonder whether a new tool might shave hours off invoicing—or quietly push a job out the door. It’s a country-wide conversation, animated by a paradox Australians know well: we are practical adopters of useful technology, but we are hard-bitten sceptics about hype, power, and risk. That ambivalence shows up clearly in the data. Australians use AI at scale, but trust it much less than most of the world and worry—loudly and legitimately—about where it is taking us.
Use is mainstream; trust is not
By mid-2025, Australians are regular users of AI. Half of us use AI tools in everyday life, and a clear majority expect tangible benefits, chiefly relief from repetitive work. Yet willingness to trust AI sits in the mid-thirties, and concern about negative outcomes is strikingly high. Put simply, use is racing ahead while trust lags behind. The latest Australia-specific snapshot compiled by the University of Melbourne and KPMG captures that split: 36% say they are willing to trust AI; 65% expect AI to deliver a range of benefits; 72% point to time saved on repetitive tasks; but 78% are concerned about potential negative outcomes. Those fears are not abstract. Australians nominate the loss of human connection as a top risk, mirroring discomfort with purely automated experiences in services, care and education.
Australians also want firmer guardrails. More than three-quarters say regulation is required, and only about a third believe current safeguards are adequate. When the topic turns to misinformation, appetite for action hardens: nine in ten Australians want laws and practical measures to combat AI-generated falsehoods. These national figures sit alongside global signals pointing the same way. Across 47 countries in the 2025 study, 87% of respondents want laws and credible fact-checking to tackle AI-driven misinformation, and while regular use is widespread worldwide, trust remains fragile—especially in advanced economies like Australia.
A cautious outlier in a world racing ahead
If Australia sometimes sounds unusually wary about AI, that’s because we are. Compared with many countries, Australians are less likely to say the benefits outweigh the risks and more likely to report worry alongside use. The global study shows advanced economies report markedly lower trust and acceptance than emerging economies, and Australian indicators fall on the low end of that advanced-economy band. In short, our pattern is not isolationist—it’s part of a broader trust deficit in wealthy nations—but it is nonetheless pronounced in the Australian context. Understanding that distinction matters for policy: it’s not that Australians are anti-technology; it’s that we want clearer value, better governance, and more say.
The youth perspective: career calculus in an anxious decade
For younger Australians, AI is reshaping the “what next?” questions that accompany school, TAFE, and university. Fresh research on young people’s perceptions and use of generative AI points to a new kind of career calculus: about 38% of young people report concern about job displacement tied to generative AI, and nearly one in five say they have reconsidered their study or career plans as a result. Notably, many are not fleeing the field so much as pivoting towards it, eyeing technology pathways or hybrid roles that integrate AI into other disciplines. That dual response—worry and re-orientation—captures a generational realism: AI is not going away, so the choice is skill up or get sidelined.
This ambivalence also filters into education choices and experiences. Students are eager to acquire practical AI skills for work and study, yet they are wary of shortcuts that corrode learning. The global survey confirms regular student use of AI for study is now the norm, but it also flags the risks of over-reliance and diminished critical thinking unless education providers set clear expectations and teach responsible, evidence-based use. That balance—promoting capability while protecting integrity—is emerging as the defining challenge for schools, VET providers and universities alike.
Workplaces on the frontline of adoption—and error
The fastest, messiest phase of Australia’s AI transition is happening inside workplaces. According to employees, 65% of organisations now use AI, and roughly half of workers intentionally use AI at work on a regular basis. Benefits are real: more than half report improvements in efficiency, quality and innovation, and a substantial minority say AI is now woven into revenue-generating activity. But the risk signals are just as clear. More than half of workers admit relying on AI output without checking its accuracy, a majority say they’ve made mistakes at work because of AI, and a significant share acknowledge using AI in ways that contravene policies or presenting AI-generated content as their own. When adoption outruns governance, error and ethical lapses become a foreseeable cost of doing business.
That tension occasionally breaks into public view. In recent months, Australia’s largest bank walked back a small tranche of AI-related layoffs under pressure from the Finance Sector Union, a visible example of workforce anxieties meeting governance in real time. Across the economy, a cooling labour market has heightened sensitivity to technology-driven restructuring, even as businesses push on with automation to lift productivity. The upshot is a workplace settlement still in the making: workers want a fair say and fair protections; employers want efficiency; both sides want clarity.
The skills and literacy gap we cannot ignore
Skills are the fulcrum of trust. Australians tell researchers they want to use AI well, but too many feel under-prepared. Only about a quarter report any formal or informal training in AI or related fields, and just over a third say they have the skills and knowledge to use AI appropriately. By contrast, the global study suggests close to four in ten people worldwide have had some AI training, a gap that helps explain Australia’s lower confidence and higher worry. It is difficult to trust what you don’t understand—and doubly difficult when workplace policies are inconsistent and training is patchy. Closing that gap is not just a matter of digital literacy modules; it means designing practical, hands-on learning for different roles, industries and regions, and linking skills development to safe, productive workflows.
The equity dimension is just as important. The benefits of AI tools accrue fastest to people with recent training, stable devices and good connectivity. Without deliberate effort, that bakes in a two-speed workforce: metro over regional, large employers over small ones, higher-income knowledge workers over frontline staff. Australia’s experience with earlier waves of technology offers a simple lesson: when capability-building is optional, gaps widen. When it is funded, measured and embedded, productivity gains and trust rise together.
Misinformation, safety and a new civic contract
Australians are not only worried about how AI affects jobs; they are unsettled by how it might deform the information environment that underpins daily life and democracy. Nine in ten residents want laws and concrete action to combat AI-generated misinformation; nearly eight in ten say they are unsure whether online content can be trusted because it may be AI-generated; and a majority fear the manipulation of elections by AI-generated content or bots. Those numbers are less about panic than prudence. They reflect a citizenry that recognises generative models are powerful and cheap to deploy, and that the costs of letting untrustworthy content flood the zone will show up in everything from public health to community cohesion to the legitimacy of institutions.
Civil society is already reframing the issue as a safety question, not merely a speech question. Australia’s peak body for suicide prevention has urged governments to mandate “safety by design” approaches for social platforms and AI providers, warning that poorly governed digital spaces compound existing risks for vulnerable people while obscuring opportunities for timely, life-saving connection. This lens broadens the regulatory conversation beyond abstract principles toward design standards, oversight and accountability—exactly the terrain Australians say they want policymakers to occupy.
The hidden infrastructure of intelligence: power, water and place
It is tempting to imagine AI as weightless software. In reality, intelligence runs on infrastructure: power-hungry data centres and supercomputers that consume electricity and, depending on design and climate, vast volumes of water for cooling. As Australia pursues sovereign AI capability and fuels cloud growth, scrutiny of this physical footprint is intensifying. National reporting and parliamentary inquiry material point to escalating demands for energy and water, and warn that unmanaged growth risks colliding with climate goals and local resource constraints. The ABC recently detailed the challenges of meeting AI’s hunger for clean energy while decarbonising other parts of the economy; a Senate committee has similarly called out water, land and environmental pressures intrinsic to scaled AI infrastructure in Australia. Policy responses—from efficiency standards and NABERS targets to siting decisions and grid upgrades—will shape whether the “intelligent age” complements or compromises our sustainability ambitions.
There is no single technical fix, but the innovation frontier is shifting. Advanced liquid-cooling and chip-level thermal designs promise dramatic reductions in water use, and industry bodies are pushing water stewardship frameworks suited to a hot, drought-prone continent. Those options do not erase the trade-offs, but they show how design choices—made early and publicly—can align digital capacity with environmental realities. For a country attentive to both tech opportunity and land-and-water limits, this is not a side issue; it is a core determinant of social licence.
What Australians say they want from AI governance
Across the research, a clear pattern emerges. Australians are open to AI that demonstrably helps, but they want enforceable rules and visible accountability. About 77% say regulation is required, only 30% think current safeguards suffice, and expectations for oversight favour a mix of government action, co-regulation with industry, and international rules. That governance appetite is not a call to “ban the future.” It is a demand to line up incentives so that responsible builders and users win, and shortcuts that externalise harm lose. In practical terms, that means standards for testing and assurance, transparency about where and how AI is used, red-teams and audits proportionate to risk, and accessible remedies when things go wrong. It also means measurable commitments to workforce transition and skills, so that productivity gains don’t translate into concentrated private benefits and diffuse public costs.
From fear to fluency: a roadmap for an Australian AI settlement
If “fear versus adoption” is the wrong frame, “fluency versus fragility” is the right one. Fluency is what happens when people have the skills, context and resources to use AI confidently and critically. Fragility is what happens when they do not. Moving Australia from fragility to fluency is a national project that runs through classrooms, shop floors, branch offices and policy rooms.
At school and in tertiary education, fluency begins with teaching students how these systems work, what they do well, where they fail, and how to interrogate outputs. That includes guidelines for attribution, originality and privacy; task design that rewards synthesis and applied judgement over rote reproduction; and explicit training on bias, hallucination and data provenance. When students learn to use AI as a scaffold—not a shortcut—they become better, more employable humans. The global study’s education findings reinforce this: student use is high, benefits are real, but guardrails and provider guidance must catch up.
In workplaces, fluency looks like policies that are not just restrictive checklists but living playbooks aligned with actual work. It looks like leaders modelling “critical use”: seeking gains in speed and quality without outsourcing judgment. It looks like safe sandboxes for experimentation, procurement that values explainability and auditability, and performance systems that reward verification and teamwork, not just throughput. Critically, it looks like training that is specific to roles—different for a customer-service team than a field technician—and accessible to SMEs and regional employers, not only big-city head offices. Australia’s own data shows many organisations already have AI strategies on paper; the challenge now is to translate them into everyday practice that reduces error and lifts trust.
For policymakers, fluency requires a visible, layered regime. First, make it easy to see where high-stakes AI is being used—in health, finance, education, employment screening and public services—and demand impact assessment and assurance commensurate with risk. Second, fund and mandate high-quality AI literacy across the economy, with special attention to groups at risk of being left behind. Third, treat misinformation and safety as infrastructure issues, not just content issues, by setting expectations for provenance, labelling and rate-limiting, and by resourcing independent evaluation of safety claims. The public’s near-consensus for action on AI-generated misinformation is a rare chance to build legitimacy before problems metastasise.
Finally, align the AI build-out with our physical world. That means planning for energy and water demand before projects break ground; encouraging next-generation cooling and siting near renewables; and integrating community voices into decisions that change neighbourhoods and resource flows. If we want sovereign capability without social backlash, sustainability cannot be an afterthought—it must be a design requirement.
The cultural work of trust
Trust is not a spreadsheet metric; it is cultural work. Australians are pragmatic: we will use the tool that helps, then we will keep using it if it keeps proving itself. The research suggests a viable path. When people are trained, they trust more; when they understand the benefits and see real safeguards, they accept more; when they have agency and recourse, they adopt with fewer errors. These are not grand philosophical shifts so much as practical preconditions for a fair transition. The same pattern holds in the workplace: when policies are known and modelled by leaders, when governance keeps pace with use, error rates fall and enthusiasm rises. Australia’s relatively low baseline trust is not a permanent trait; it is a changeable outcome linked to the quality of implementation we choose.
A fair deal for the AI age
So where does that leave the family car conversation on the Hume—or the staff meeting on Monday morning? In both places, the answer is the same. Australians don’t want AI to be a black box that decides for us. We want tools we can see into, skills we can build, and rules we can rely on. We want fewer breathless claims and more verifiable improvement—shorter waiting times, better health triage, safer worksites, smarter public services, less drudgery at work. We want the benefits spread fairly, not pooled in a few balance sheets or postcodes. And we want the costs counted honestly, including the energy and water that make “intelligence” possible in the first place. The research gives us a clear mandate: regulate sensibly, teach relentlessly, design for safety, and build with the country’s physical and social limits in mind. If we can do that, we will convert today’s hesitation into tomorrow’s confidence—and make the intelligent age feel unmistakably Australian.
Sources: University of Melbourne & KPMG, Trust, attitudes and use of artificial intelligence: A global study 2025 and Australia Insights; YouthInsight/Student Edge, Young People’s Perception and Use of Generative AI; ABC News reporting on data centre energy and water demands; Australian Senate committee material on environmental impacts of AI; Deloitte Access Economics on 2025 labour market conditions; American Banker coverage of CBA’s AI-related workforce decision.