Monday, October 6, 2025

‘Fail Fast, Learn Faster’: Andrew Forrest’s Push to Redefine Failure and Cure Cancer Through AI


Mining magnate Andrew “Twiggy” Forrest is staking his legacy on one bold idea: that failure is not the enemy of success—it is its fuel. In recent remarks, Forrest revealed how embracing missteps and setbacks has shaped his strategy, particularly as he pivots toward a new mission: making cancer “non-lethal” through artificial intelligence. His journey — from the mines to medicine — offers lessons for business leaders, scientists, and policymakers alike.


A Mindset Shift: Turning Stumbles into Strategy

For decades, Forrest built his fortune in iron ore, taking risks in a volatile sector. He says many of his most important breakthroughs came after mistakes. In his view, failure is not a stigma but a signal—a chance to pause, recalibrate, and move forward wiser.

He often cites a phrase: “fail fast, learn faster.” The idea is simple. If you are going to fail, do so early, extract the learning, and iterate. Then move on. He argues that too many enterprises fear failure, and that caution dulls innovation.

In the tech world, this mindset has gained traction. Startups launch minimum viable products (MVPs), collect feedback, and evolve. Forrest wants that same tempo applied in sectors that traditionally move slowly: health, regulation, infrastructure. He says that risk management is not about avoiding failure, but about surfacing failures early and mitigating downstream harm.

That philosophy now underpins his most ambitious effort yet.


From Iron to Cancer: Forrest’s New Frontier

Forrest has long demonstrated a keen interest in philanthropy, particularly in areas like Indigenous employment, poverty, and education. Lately, though, he’s focused on healthcare — and cancer sits at the center of his mission.

He frames cancer not simply as a disease to cure, but as a puzzle to manage. His goal: transform it into a largely non-lethal condition, using AI tools to detect, monitor, and personalize treatment in ways currently impractical. He envisions a future where diagnosis is earlier, interventions more precise, and patient outcomes vastly improved.

To realize this, Forrest proposes three core pillars:

  1. Massive data collection and integration. AI thrives on data. He plans to fund efforts that aggregate genetic, imaging, clinical, lifestyle, and environmental data at scale, while navigating privacy, ownership, and ethics.
  2. Open-source platforms and shared models. Rather than locking discoveries behind proprietary walls, he argues for collaborative frameworks that allow researchers globally to build, test, and improve algorithms.
  3. Rapid experimentation and feedback loops. In the spirit of “fail fast,” AI models would be deployed in controlled trials, adjusted, redeployed, and refined, rather than relying on static trials that take decades to conclude.

Forrest does not pretend this is easy. He recognizes that cancer biology is deeply complex. But he believes the confluence of AI, computing, and biomedical research has matured to a point where the risk-reward balance now favors bold bets.


International Diplomacy and AI Governance

Forrest’s ambitions in health are matched by his concerns about AI’s broader societal impact. He has been quietly steering dialogues on AI security and regulation — especially between superpowers.

Over the past five years, the Minderoo Foundation has backed 11 rounds of a “Track II” dialogue between U.S. and Chinese policy experts. The goal: find common ground on nuclear command, autonomous systems, and human oversight. In late 2024, an outcome from the dialogue influenced a global statement: leaders from the U.S. and China agreed that humans must retain control over nuclear decision-making. (Forbes Australia)

Forrest frames his role as both mediator and doer. He says leading nations must adopt “people-centred AI” policies that put human safety ahead of short-term gain. He warns that unconstrained AI in military systems could become a “terrible enemy,” stripping away empathy and turning precise targeting into algorithmic error. (The West Australian)

In Canberra and Washington, he is pushing for regulation that is flexible, transparent, and risk-aligned, rather than heavy-handed or rigid. He argues that policy must respond to evolving technology, not stifle it.

That tension — between enabling innovation and preventing harm — lies at the heart of his strategy.


Fighting Deepfakes in Court

While Forrest publicly debates AI’s future, he is fighting one of its darker manifestations in the courts. He has sued Meta (owner of Facebook and Instagram) over fraudulent ads that misuse his likeness. According to filings, his image has been used in more than 230,000 scam ads since 2019. (Australia Times)

One such deepfake video showed Forrest endorsing a crypto trading scheme. It was circulated widely and triggered alarm about how easily AI can distort identity and truth. (CCN.com) His case seeks not only damages, but disclosures from Meta about how its ad systems permit such misuse.

Forrest frames this legal fight as part of his broader mission: to hold tech platforms accountable and deter future harm. He argues that responsibility must rest not just with bad actors, but with the systems that enable them.

This trial could set important precedents for digital platforms, deepfake liability, and the limits of corporate control over AI-powered tools.


Actionable Takeaways for Leaders and Innovators

Forrest’s story is more than personal ambition. It suggests a roadmap — or at least signposts — for how institutions and leaders can navigate disruption.

1. Redefine failure as data, not defeat

Embed short feedback loops. Pilot bold ideas in small settings. Learn quickly and iterate. Avoid grand, untested commitments where failure becomes catastrophic.

2. Invest in data infrastructure and interoperability

AI requires diverse, rich data. Governments and firms should collaborate on common standards — for safe sharing and joint progress.

3. Anchor AI strategy in purpose, not just profit

Forrest underscores that AI efforts rooted in human welfare (health, security, equity) are less likely to sideline ethics in pursuit of returns.

4. Shape smart regulation now

Waiting until crises emerge is too late. Progressive frameworks must evolve in step with technology. Policymakers should engage technologists early, not after harm arises.

5. Treat brand and identity as assets to protect

Forrest’s deepfake litigation underscores that in an era of AI-enabled impersonation, image protection must be proactive. Legal, technical, and communications defenses need coordination.


Risks, Critiques, and Realism

Even Forrest’s bold vision attracts pushback. Critics note that medical AI projects often fall prey to overpromise, poor generalizability, and data bias. Implementing AI in healthcare involves regulatory burdens, privacy concerns, clinical risk, and institutional inertia.

Some also question whether billionaire-led efforts risk centralizing control over health or tech agendas. Transparency and inclusive governance will matter greatly as Forrest’s proposals scale.

On the geopolitical front, dialogues between superpowers are nonbinding by design — they can influence norms, but they cannot enforce compliance. Some analysts warn that real AI arms races may unfold in secret, or in jurisdictions where regulation lags behind the technology.

Finally, the litigation with Meta is high-stakes but uncertain. Courts worldwide are still catching up to AI’s challenges, making precedent unpredictable.


The Road Ahead: Scale, Momentum, and Measuring Progress

Forrest’s ambitions will be measured in years, not months. But there are concrete milestones to watch:

  • Launch and expansion of AI cancer research hubs and data platforms
  • Performance benchmarks: early detection rates, survival improvement, adverse events
  • Publication of open-source models and uptake by independent researchers
  • Adoption of people-centred AI frameworks in government policy
  • Outcomes of the Meta litigation and resulting disclosure obligations
  • Uptake of international AI safety norms stemming from U.S.–China dialogues

If even a fraction of Forrest’s vision succeeds, the impact would ripple across health, tech, trade, and regulation.


Across his career, Andrew Forrest has shown a willingness to climb big hills. Mining, philanthropy, climate, social justice — he has repeatedly placed capital behind his convictions. Now, with cancer and AI in his crosshairs, he’s attempting what many would call the impossible.

But perhaps that is the point. To fail fast is to test audacity early. To learn faster is to sharpen purpose. And to persist — despite stumbles, uncertainty, skeptics — is to chase transformation. In that journey lie insights not just for Andrew Forrest, but for any leader trying to push what’s possible.
