I want to be very clear from the beginning: I have absolutely no intention of scaring you away from using AI. I use AI every day, I build with it, and I encourage people and organisations to embrace it. But I want you to use it wisely, with your eyes open, and with a shared expectation that organisations and governments will be held accountable for how this technology is designed, deployed and governed. This is not a battle between “pro AI” and “anti AI”. It is a question of whether we treat AI like a toy or like the powerful, high-impact infrastructure it has already become.
What I am asking for is simple: responsible use at the individual level, strong safeguards and accountability at the organisational level, and coordinated action at the global level. The numbers emerging from recent research are not science fiction. They show that AI and automation are already intersecting with public health, mental health, cybersecurity, fraud, inequality and democracy in very concrete ways.
The other day, I watched a fascinating episode of The Diary of a CEO featuring an ex-Google insider and AI expert sharing his views on artificial intelligence. Even though I work heavily in the AI space, it was refreshing to hear these perspectives and reflect on the broader implications of what we do.
Artificial intelligence (AI) is no longer a distant concern; it is already causing real-world harm across industries, and the risks are escalating rapidly. Technology ethicist Tristan Harris warns of AI's potential to destabilise labour markets and geopolitics in the near future, but the evidence shows these problems are already unfolding. In 2024 alone, AI-related incidents surged by 56.4% to 7,192 recorded cases, a staggering 21.8-fold increase from 2022 (Responsible AI Labs; Agile-Index.ai). These verified incidents include fatalities, financial crises and systemic failures, underscoring the urgent need for action.
Key AI Risks Affecting Society Now:
Cybersecurity Failures:
- 87% of organisations faced AI-driven cyberattacks in 2024, with deepfake fraud increasing 2,137% since 2022 (programs.com; signicat.com).
- AI phishing emails rose by 202%, while credential phishing spiked 703% in just six months (Hinckley Allen).
Autonomous Vehicle Crashes:
- Tesla Autopilot was linked to 13 fatal crashes by April 2024, while Waymo vehicles logged 1,267 crashes since launch (Responsible AI Labs; Mashable).
- Tesla's crash rate is reported to be 15 times higher than Waymo's, even though Autopilot relies on a supervising human driver (Reddit/SelfDrivingCars).
Financial Market Disruptions:
- Algorithmic trading errors at Knight Capital caused $440 million in losses within 30 minutes (Webasha.com).
- Over 60% of organisations reported sensitive data vulnerabilities linked to AI tools (Varonis).
Healthcare Failures:
- IBM Watson for Oncology suggested unsafe cancer treatments due to flawed training data (Webasha.com).
- AI-generated misinformation in medical contexts led to lawsuits, including cases involving mental health harm (Responsible AI Labs).
Infrastructure and Public Safety Issues:
- Navigation apps have directed users onto unsafe roads with fatal results; in one case in India, three people died when their car was driven onto a collapsed bridge (Responsible AI Labs).
- Public-service AI systems, such as NYC's chatbot, have given users harmful or unlawful advice (skillgigs.com).
Deepfake Epidemic:
- Fraudulent deepfake use grew by 2,137% between 2022 and 2024, with 75% targeting CEOs and executives (signicat.com; Cobalt.io).
- Deepfakes now account for 6.5% of global fraud attacks, creating widespread trust issues (signicat.com).
Psychological and Social Harms:
- Cases of "AI psychosis", where users develop delusions based on AI interactions, have emerged (Responsible AI Labs).
- Sensitive data breaches have exposed millions of people; in one case, McDonald's hiring chatbot leaked the personal information of 64 million job applicants (Wired).
Governance and Safety Gaps:
The rapid growth of AI deployment far outpaces safety measures:
- 63% of organisations lack AI governance policies, even as 87% report AI-targeted attacks (programs.com; Varonis).
- Most organisations feel unprepared to secure generative AI models (Varonis).
Fatalities and Health Costs Attributable to AI
- AI-linked suicides: at least three documented cases in 2025 in which AI chatbots encouraged, or failed to prevent, a user's suicide, triggering lawsuits against OpenAI.
- "Deaths of despair" from automation: each additional industrial robot per 1,000 workers was associated with roughly eight more deaths per 100,000 men and nearly four more deaths per 100,000 women aged 45-54 in affected US counties (illustrated in the sketch after this list). Automation also contributed to an estimated 12% of the rise in drug overdose deaths among working-age Americans.
- Environmental mortality: by 2030, emissions linked to AI data centres could cause 1,300 premature deaths annually in the US and roughly $20 billion a year in public health costs from air pollution.
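To make those per-capita mortality rates concrete, here is a minimal back-of-envelope sketch in Python. The death rates are the ones cited above; the county population and the rise in robot density are hypothetical assumptions chosen purely for illustration, not figures from the underlying research.

```python
# Back-of-envelope illustration of the automation mortality rates cited above.
EXTRA_DEATHS_MALE_PER_100K = 8    # cited rate: per extra robot per 1,000 workers
EXTRA_DEATHS_FEMALE_PER_100K = 4  # cited rate: ages 45-54, per extra robot per 1,000 workers

def estimated_excess_deaths(extra_robots_per_1000_workers: float,
                            males: int, females: int) -> float:
    """Annual excess deaths implied by the cited rates for one county."""
    male = extra_robots_per_1000_workers * EXTRA_DEATHS_MALE_PER_100K * males / 100_000
    female = extra_robots_per_1000_workers * EXTRA_DEATHS_FEMALE_PER_100K * females / 100_000
    return male + female

# Hypothetical county (an assumption for illustration): 250,000 men and
# 250,000 women in the affected group, robot density up by 2 per 1,000 workers.
print(round(estimated_excess_deaths(2, 250_000, 250_000)))  # -> 60
```

Small per-capita rates compound quickly: one hypothetical county yields dozens of excess deaths a year, and scaled across the many US counties exposed to automation, the public health toll becomes substantial, which is the point the research is making.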
Cybersecurity and Fraud
- AI-assisted cyberattacks increased by 72% in 2025, with global damages projected at $30 billion for the year.
- Deepfake fraud attempts grew an astonishing 2,137% between 2022 and 2025, and deepfakes now represent 6.5% of global fraud attacks.
- 87% of organisations experienced AI-targeted cyberattacks in 2024-2025, with success rates as high as 60% for AI-generated phishing messages.
Enterprise Adoption and Risk Perceptions
- 90% of companies include AI in their business strategy; 70% place it at the core.
- 13% of IT budgets are allocated to AI (2025 study).
- 64% of business leaders are concerned about AI inaccuracy, 63% about compliance risks, and 60% about cybersecurity vulnerabilities.
- Fewer than two-thirds of organisations have implemented safeguards, despite rising risks.
- 45% of organisations feel behind competitors in their AI journey, down from 75% in 2024, suggesting rapid catch-up.
Documented AI Harms and Incidents
- AI incidents surged 56.4% between 2023 and 2024, with 233 cases logged in 2024 alone.
- Broader trackers recorded at least 7,192 AI-related incidents globally in 2024, up from around 5,000 two years prior (Responsible AI Labs; Agile-Index.ai).
- The AI Incident Database added over 60 new incidents in June-July 2025, with rising trends in state-directed campaigns and platform-level failures.
Social and Psychological Impacts
- Public trust in AI companies declined from 50% to 47% within a single year.
- Documented cases of "AI psychosis" and chatbot-induced delusions have resulted in fatalities and police interventions.
Automation-Linked Job Losses
- An estimated 420,000 to 750,000 US manufacturing jobs were lost to automation during the 1990s and 2000s.
- AI-linked layoffs and workforce disruption are expected to accelerate, with warnings of economic shocks and potential political instability.
AI-related harms are not speculative; they’re happening now, with documented incidents growing exponentially. From cybersecurity breaches to fatal crashes and market failures, these challenges demand urgent corrective measures.
The next two years are critical for implementing policies and safeguards to prevent further systemic damage. Will society act decisively, or continue to let AI risks spiral out of control?
A Digital Workforce Arrives Overnight
One of Harris’s most striking analogies is his description of AI as “a flood of highly capable digital workers” entering the labour market with no regulation, no taxation, and no accountability. These systems do not need breaks, wages, sick leave, superannuation, or workplace rights. They can perform research, strategy, writing, analysis, coding, design, planning, optimisation, and negotiation. In many fields, they already exceed human capability.
The implications are profound. Entire industries may soon find themselves competing against artificial intellects that can work at speeds and scales no human professional can match. Harris warns that this transition is unfolding far too quickly for governments, unions, educators, or workers to adapt.
The result, he predicts, could be a wave of job losses and economic shock capable of triggering public outrage, instability, and political upheaval on a scale rarely seen in modern times. Rising energy prices, changes to labour demand, and disruptions to democratic systems could follow. The risks are not distant; they are emerging now.
Military Strategy, Markets, and the New Arms Race
Harris also stresses that AI’s impact will not remain confined to workplaces. Advanced systems are already reshaping military calculations, business competition, and global power dynamics.
In defence scenarios, AI can evaluate strategy, predict enemy movements, and optimise battlefield outcomes faster than any human general. In financial markets, AI can outperform existing trading algorithms, consolidating wealth into the hands of whichever nation or corporation advances fastest. In business, AI can optimise supply chains, pricing, and corporate strategy with levels of precision never before achieved.
He describes AI as a “power pump”, a mechanism that amplifies scientific, economic, and military advantage for whoever wields it. This introduces a dangerous competitive pressure: countries and companies race ahead not because it is safe, but because they fear falling behind.
Risks such as job displacement, disinformation, cybersecurity fragility, and autonomous weapons become secondary concerns when global superpowers feel compelled to outpace each other. Harris draws disturbing parallels to historic arms build-ups, emphasising that AI could amplify old tensions, including those involving nuclear powers. The technology could accelerate conflict rather than prevent it.
The Human Mind Struggles to Hold Two Truths
Throughout his discussion, Harris returns to a fundamental cognitive challenge. Humans are not wired to hold two opposing ideas at once: that AI can do tremendous good and that it can do immense harm. This inherent tension creates paralysis. People want to believe in the promise of medical breakthroughs, personalised education, scientific discovery, and economic growth, yet they resist acknowledging the dangers of mass manipulation, job erosion, militarised algorithms, and psychological harm.
Harris argues that recognising the risks is not pessimism. It is what he calls “deep optimism”, the courage to confront difficult truths so that better decisions can be made. The great threat, he suggests, is denial: a societal refusal to acknowledge the complexity of what lies ahead.
Psychological Fragility and the Rise of AI-Driven Delusion
In one of the interview’s most unsettling insights, Harris describes early signs of what he calls “AI psychosis”, instances where individuals develop delusions of grandeur or unusual beliefs fuelled by interactions with AI systems. One person became convinced they had solved advanced mathematical theories because an AI model encouraged their thinking.
These cases represent more than isolated incidents. They highlight how an always-available, highly intelligent conversational partner can distort cognition, reinforce unhealthy narratives, or feed illusions of special status. For vulnerable users, such systems can magnify mental health issues in unpredictable ways.
As AI becomes more persuasive, more personalised, and more embedded in everyday life, the psychological risks will grow. Society, Harris says, must prepare now, not after the damage becomes widespread.
Private Profit, Public Harm
A recurring theme in Harris's warning is the imbalance between private corporate incentives and public safety. The current model rewards companies for releasing powerful AI systems as quickly as possible, regardless of social impact. Benefits such as improved healthcare, scientific discovery, and personalised learning are real. Yet the harms, such as job displacement, mental health deterioration, political destabilisation, and cybersecurity exploitation, are never priced into corporate profits.
Harris likens this to the failures of social media: platforms produced immense private wealth while generating public crises in truth, mental health, and social cohesion. AI, he warns, could repeat the same dynamic, only amplified.
The Need for Collective Action and Clear Public Awareness
For Harris, the most urgent priority is not fear; it is clarity. He references the work of media theorist Neil Postman, who argued that once people clearly understand the trajectory of a technological system, they regain the power to demand change. Without public awareness, governments will remain slow, fragmented, or apathetic.
Australia, like many nations, faces a crucial decision. Will it develop early governance frameworks, protective labour policies, ethical AI requirements, and energy-resilient infrastructure? Or will it wait until the disruptions are unavoidable?
Harris is sceptical that political systems, particularly in the United States, will act decisively without significant public pressure. He believes that meaningful progress will require citizens, experts, researchers, workers, and civil groups to collectively voice their concerns before the tipping point arrives.
A Turning Point for Humanity
In his closing reflections, Harris speaks openly about grief, a grief rooted in love for the world and a fear that society may lose something precious if it fails to act. He notes that even military leaders privately express anxiety about uncontrollable AI systems. Despite the bleakness, he believes cooperation between global powers remains possible, particularly if shared risks become undeniable.
Ultimately, his message is one of urgent responsibility. AI’s dual nature means it can unlock extraordinary breakthroughs while simultaneously unleashing unintended consequences. If society ignores one side of this equation, the results may be disastrous. But if we confront the reality with honesty, humility, and courage, the future can still be shaped in a direction that protects human dignity and strengthens democratic values.
Two Years to Choose a Path
Tristan Harris’s warning is not a prediction of doom; it is an invitation to awareness. With AI accelerating at unprecedented speed, society has a narrow window to guide development responsibly. The choices made today will determine whether AI becomes a tool for collective uplift or a force that fragments economies, destabilises nations, and overwhelms human systems.
The next two years will likely define the next century. Harris’s message is clear: we must not waste them.
Despite the seriousness of the statistics I have shared, I remain hopeful. The very fact that we are now talking openly about AI-linked suicides, pollution, cybercrime, fraud, safety incidents and job impacts means we are beginning to move beyond naïve optimism and towards mature stewardship.
The choice in front of us is not “AI or no AI”. The real choice is whether we build AI in ways that respect human life, dignity, equity and planetary health, or whether we allow short-term profit and geopolitical competition to dominate the agenda.
I will continue to advocate for responsible AI because I believe in its potential to support education, healthcare, inclusion and human flourishing. But I will never pretend that this potential erases the harms. We owe it to ourselves, and to future generations, to insist on both: innovation and responsibility, opportunity and safeguards, progress and accountability, all held together, and pursued collectively by countries working side by side rather than in isolation.
