When Fiction Becomes Field Manual: Why We Need to Break Up with Big Tech

Or: How I Learned to Stop Worrying and Delete My Instagram

Fabrizio Deliberali | December 2025


There are lines in society that exist for a reason. They’re not arbitrary. They’re load-bearing. They hold up the entire structure of democratic society, economic integrity, and human safety.

Meta hasn’t just crossed them — they’re systematically dismantling them for profit.

And now we have the data to prove it.


Here’s an uncomfortable thought: Aldous Huxley wasn’t writing science fiction in 1932. He was writing a manual, and we’ve been following it with disturbing precision.

I recently finished Sarah Wynn-Williams’ explosive memoir Careless People, her withering insider account of Facebook’s toxic culture and “lethal carelessness.” The book shook me so profoundly that I immediately returned to Brave New World — a novel I hadn’t touched in years. Reading Huxley again through the lens of Wynn-Williams’ revelations, something clicked into place with unsettling clarity: the parallels aren’t literary curiosities anymore. They’re alarm bells.


The New Careless People

Wynn-Williams, a former Facebook executive who worked directly with Mark Zuckerberg and Sheryl Sandberg, borrows her book title from F. Scott Fitzgerald’s The Great Gatsby:

“They were careless people, Tom and Daisy — they smashed up things and creatures and then retreated back into their money or their vast carelessness.”

The careless people of the 1920s never went away. They just got venture capital funding and moved to Silicon Valley.

In her darkly funny and genuinely shocking account, Wynn-Williams portrays Zuckerberg and Sandberg not as evil geniuses, but as something potentially more dangerous: narcissistic, self-obsessed executives whose public images are wildly at odds with their actual selves. These aren’t isolated character flaws — they’re features of a leadership class operating with the ethical framework of toddlers wielding flamethrowers.

We should run away from executives like that. Create empty space around them. Strip them of their relevance and their power.


The December 2025 Verdict: The Data Confirms Everything

And now we have the independent data to prove what Wynn-Williams witnessed firsthand.

The Future of Life Institute released its AI Safety Index Winter 2025 on December 3rd, and the results validate every alarm bell in Careless People.

Eight leading AI companies were evaluated across 35 safety indicators spanning six critical domains. The verdict on Meta: Grade D, with a score of 1.10 out of 4.0.

Company                  Grade   Score
Anthropic                C+      2.67
OpenAI                   C+      2.31
Google DeepMind          C       2.08
xAI (Musk)               D       1.17
Z.ai                     D       1.12
Meta                     D       1.10
DeepSeek (China)         D       1.02
Alibaba Cloud (China)    D-      0.98

Read that again. Meta — the company that controls Facebook, Instagram, and WhatsApp with over 3 billion daily users — scored essentially the same as Chinese AI companies operating under an entirely different regulatory regime. The company Americans trust with their data, their photos, their family connections… ranks at the bottom alongside firms that operate under Beijing’s surveillance apparatus.

The most damning finding? On existential safety — the measure of whether companies have credible plans to prevent catastrophic misuse or loss of control over superintelligent AI — Meta scored an F grade (0.33 out of 4.0).

The company Wynn-Williams described as “lethally careless” now has the independent safety scores to match.

Prof. Stuart Russell, UC Berkeley computer science professor and one of the independent reviewers, put it starkly: “AI CEOs claim they know how to build superhuman AI, yet none can show how they’ll prevent us from losing control — after which humanity’s survival is no longer in our hands.”


When Even the Founders Run Away

Perhaps the most damning indictment comes not from critics or safety researchers, but from the people who built these platforms.

Brian Acton, co-founder of WhatsApp, walked away from $850 million in unvested stock in 2017 because he couldn’t stomach what Facebook wanted to do with his creation.

Acton revealed that he was coached by Facebook executives to mislead European regulators about the company’s plans to merge Facebook and WhatsApp data. After the acquisition was approved, Facebook did exactly what Acton had been told to deny. The EU eventually fined them €110 million — pocket change next to the $19 billion Facebook had paid for WhatsApp.

So Acton walked away. Left $850 million on the table. Then invested $50 million to build Signal.

When WhatsApp’s co-founder forfeits hundreds of millions to escape Facebook, then spends $50 million more building an alternative, that tells you everything.


The $16 Billion Fraud Machine

In November 2025, Reuters published a devastating investigation revealing that Meta is earning a fortune from fraudulent ads — an estimated $16 billion annually from advertisements that deceive users, sell counterfeit products, or run outright scams.

This isn’t accidental. The documents show Meta knows. They’ve built ad-targeting systems precise enough that users swear their phones must be listening, but apparently not sophisticated enough to detect obvious fraud patterns.

They did the math. The fines are cheaper than stopping.

Sixteen billion dollars worth of fraud. Every year. From a company led by the “careless people” Wynn-Williams describes.


The Sacrifice of Our Children

Here’s what keeps me up at night as a father.

The evidence on social media and youth mental health is now overwhelming. Studies show that teenagers using social media for more than three hours daily face roughly double the risk of depression and anxiety symptoms. Girls are particularly affected — the algorithms are optimised to make them feel inadequate, and they’re working exactly as designed.

We’re watching an entire generation struggle with mental health crises engineered by algorithms optimised for engagement over wellbeing.

Prof. Tegan Maharaj of HEC Montréal, one of the AI Safety Index reviewers, put it devastatingly:

“If we’d been told in 2016 that the largest tech companies in the world would run chatbots that enact pervasive digital surveillance, encourage kids to kill themselves, and produce documented psychosis in long-term users, it would have sounded like a paranoid fever dream. Yet we are being told not to worry.”

My daughters deserve better than algorithms designed to make them feel inadequate.


The Pattern: Five Lines Crossed

Let me be clear about what lines Meta has crossed. These aren’t hypothetical concerns. They’re documented patterns.

Line 1: Economic Fraud — The $16 Billion Scandal

As detailed above, Reuters reported that Meta earns an estimated $16 billion annually from fraudulent ads — scams, counterfeits, and deceptive schemes. The company has the technical sophistication to micro-target users, but claims it cannot detect obvious fraud patterns. When vulnerable people lose money to these schemes, Meta still takes its cut.

Line 2: Democratic Sovereignty — Silicon Valley’s Capitulation

The New York Times documented in November 2025 how Silicon Valley companies — Meta chief among them — have systematically capitulated to political pressure, abandoning any pretence of platform neutrality or content integrity.

This isn’t about left versus right. It’s about the fundamental premise that platforms claiming to be neutral public spaces should actually function as neutral public spaces. When platforms algorithmically amplify content based on engagement rather than accuracy, when they modify their policies based on political pressure rather than principle, they’ve crossed from being technology companies into being political actors.

Meta’s fact-checking rollbacks, its content policy reversals, its willingness to bend to whoever holds power — these aren’t neutral business decisions. They’re choices that shape public discourse.

Line 3: Safety Governance — Structural Failure

The AI Safety Index found that Meta “lacks any commitments on monitoring and control despite having risk-management frameworks, and has not presented evidence that they invest more than minimally in safety research.”

The report’s recommendation to Meta is telling: “Strengthen internal safety governance by establishing empowered oversight bodies, transparent whistleblower protections, and clearer decision-making authority for development and deployment safeguards.”

Translation: The governance structures that should prevent harm don’t exist in any meaningful form.

Line 4: Research and Accountability — The Open Source Excuse

Meta has positioned itself as the champion of “open source” AI through its Llama models. The AI Safety Index reveals the darker side of this strategy.

On Protecting Safeguards from Fine-tuning — the measure of whether companies prevent safety mechanisms from being stripped out by bad actors — Meta’s approach of releasing full model weights means users can “directly modify parameters and potentially disable all protections.”
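
To make the mechanism concrete, here is a minimal sketch, assuming the Hugging Face transformers library and local access to an open-weight checkpoint (the model identifier below is illustrative). Once the weights are published, every parameter sits in the user’s hands, and whatever safety behaviour was trained in is just another pattern that ordinary fine-tuning can overwrite.

```python
# Minimal sketch: why published weights cannot enforce their own safeguards.
# The model id is illustrative; any open-weight checkpoint behaves the same way.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Nothing in the checkpoint is privileged or locked: every tensor is visible
# and writable, so alignment training is just another weight pattern.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))  # full, unrestricted access to each parameter

# From here, a standard fine-tuning loop on a dataset of the user's choosing is
# all that separates the released model from a modified one. That is the gap
# the Index flags: after release, no technical mechanism remains to protect.
```

None of this is exotic; it is exactly how open-weight models are meant to be used, which is why the Index treats full weight release as incompatible with protecting safeguards.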

Open source is valuable. But open source without accountability isn’t democracy — it’s abdication.

Line 5: Information Transparency — The Silence

Perhaps most damning: Meta refused to engage with the AI Safety Index at all. While Anthropic, OpenAI, Google DeepMind, and even xAI and Z.ai submitted survey responses to provide additional transparency, Meta stayed silent.

The report notes: “Meta did not return a request for comment on the study.”

When you’re building AI systems that will shape the future of humanity, silence isn’t a strategy. It’s an admission.


The Climbing Partner Test

I think about this through the lens of climbing — my other life, the one where trust isn’t theoretical.

When I’m thirty meters up a limestone face, my life depends on my belayer. Not their intentions. Not their marketing. Their actual behaviour in the moment the rope goes tight.

Climbing culture enforces its lines ruthlessly. A belayer who looks away at the wrong moment? They’re done. Not because climbers are cruel, but because the stakes are existential.

The AI safety researchers reviewing these companies are applying the same standard. They’re asking: When your AI system fails — and complex systems always eventually fail — what catches the fall?

Meta’s answer, according to every metric available, is: nothing meaningful.


What This Means for You

If you use Facebook, Instagram, or WhatsApp, you are a user of systems built by a company that:

  • Earns $16 billion annually from fraudulent advertising
  • Scores the same on AI safety as Chinese companies operating under authoritarian oversight
  • Has no credible plan to prevent catastrophic AI risks
  • Refuses to engage with independent safety evaluations
  • Releases AI models that can be weaponised by removing their safety features

The question isn’t whether Meta will continue to cross lines. The evidence suggests that’s baked into their business model.

The question is whether we’ll do anything about it.


The Lesson of John the Savage

This brings me back to Huxley. In Brave New World, John the Savage sees through the system’s manipulation. He rejects soma, rejects the shallow pleasures, demands the right to be unhappy. He’s the hero, right?

But John fails. He ends up isolated and raging, and ultimately he destroys himself.

Huxley understood something crucial: rejection alone isn’t enough. Pure opposition, without building alternatives, leads to isolation and impotence. John could only say no. He could only reject, withdraw, and rage.

We must learn from his mistake. We must say yes — yes to real human connection, yes to protecting those we love, yes to the difficult work of being present, yes to building alternatives.

Brian Acton didn’t just walk away from $850 million out of rage. He walked away out of love for what he had originally created and grief for what it had become. He then poured his resources into building Signal not to destroy Facebook, but to preserve what he loved about human communication: privacy, dignity, genuine connection.

Love is the force that drives everything worth doing. Not the performative love of social media posts, but the messy, difficult, real love that involves showing up, being present, being vulnerable, being seen.


The Empty Space We Need to Create

The careless people — both Fitzgerald’s and ours — count on our apathy. They count on the network effects being too strong, the convenience too seductive. They count on us not knowing that even WhatsApp’s co-founder ran away in horror and built something better.

But they also count on us becoming isolated, angry, irrelevant hermits if we do resist. They count on us choosing rage over love, separation over connection, purity over effectiveness.

We must prove them wrong on both counts.

Every movement starts with individuals making uncomfortable choices out of love, and then helping others understand — not through preaching like John, but through presence and genuine connection — why those choices matter.

What I’m actually doing this week:

  • Emptying my Instagram and Facebook accounts — not deleting yet, but creating that empty space
  • Revoking every permission Meta apps have on my devices (location, contacts, microphone — all of it; see the sketch after this list)
  • Migrating my business contacts to Signal, one conversation at a time
  • Taking my daughters climbing and skiing. No phones. Just presence, trust, and the real kind of connection that doesn’t need likes
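
For the permissions step, a script can do the rounds faster than tapping through settings. Below is a hypothetical helper, a sketch assuming an Android device connected over adb with USB debugging enabled; the package and permission names are the real Android identifiers, but the script itself is illustrative, and the same revocations can be made by hand in each phone’s settings.

```python
# Hypothetical helper: revoke runtime permissions from Meta apps on an
# Android device via adb. Assumes adb is installed and a device is
# connected with USB debugging enabled.
import subprocess

META_APPS = [
    "com.facebook.katana",    # Facebook
    "com.instagram.android",  # Instagram
    "com.whatsapp",           # WhatsApp
]

PERMISSIONS = [
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.READ_CONTACTS",
    "android.permission.RECORD_AUDIO",
]

for app in META_APPS:
    for perm in PERMISSIONS:
        # `pm revoke` withdraws a granted runtime permission. Without
        # check=True, subprocess.run reports rather than raises when a
        # revocation fails.
        result = subprocess.run(
            ["adb", "shell", "pm", "revoke", app, perm],
            capture_output=True, text=True,
        )
        # A non-zero exit usually means the app is missing or the permission
        # is not a revocable runtime permission on this device.
        print(f"{app} / {perm}: {'ok' if result.returncode == 0 else 'skipped'}")
```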

Brian Acton gave up $850 million because he understood what was at stake. We’re not asking you to give up nearly that much — just some digital convenience and the illusion of connection that’s actually making you and your children mentally ill.

And we’re asking you to fill that space not with isolation, but with the real, messy, beautiful human connections that make life worth living.


A Note on Strategic Options

I recognise that simply saying “delete everything” isn’t a complete answer. I’m working on a companion piece exploring the different strategic approaches: complete deletion, transformation from within, hybrid methods, and regulatory pressure.

Because changing this trajectory requires more than individual action — it requires a movement. And movements start with the courage to create empty space where harm used to live.

Even if that means walking away from $850 million. Even if it means being inconvenienced. Even if it means being the first one to leave.

Especially if it means being the first one to leave.

Because empty space has to start somewhere.


Fabrizio Deliberali is a strategic partnerships advisor and entrepreneur operating across environmental intelligence, ESG initiatives, and ethical technology frameworks. He is the CEO and co-founder of Smart Mountains and the founder of Fab Campaigns. When he’s not calling out tech giants, he’s leading mountain bike expeditions in the UK and the Alps or teaching his friends and daughters to climb.

These are his personal views, backed by extensive documentation and a professional commitment to building technology that serves rather than exploits.

Join the migration: Signal for messages. Ethical platforms for social. Reality for connection.


Sources and References

  1. Sarah Wynn-Williams, Careless People (2025)
    • Insider account of Facebook’s toxic culture
    • Title borrowed from F. Scott Fitzgerald’s The Great Gatsby
  2. Aldous Huxley, Brave New World (1932)
    • The original warning about technological control through pleasure
  3. Future of Life Institute AI Safety Index Winter 2025 (December 2025)
    • Full Report: https://futureoflife.org/ai-safety-index-winter-2025/
    • Meta scored D (1.10), same tier as Chinese companies DeepSeek and Alibaba Cloud
    • Independent reviewers: Prof. Stuart Russell (UC Berkeley), Prof. Tegan Maharaj (HEC Montréal), Prof. Yi Zeng (Chinese Academy of Sciences)
  4. Reuters Investigation: Meta’s Fraudulent Ads (November 2025)
  5. New York Times: Silicon Valley’s Political Capitulation (November 2025)
  6. Brian Acton’s departure from WhatsApp/Facebook
    • Walked away from $850 million in unvested stock
    • Invested $50 million to build Signal
    • Revealed coaching to mislead EU regulators

#EthicalTech #MetaCrossedTheLine #FabsFridayFieldNotes #AISafety #CarelessPeople