Four Signals From Anthropic That Should Change How You Think About Partnerships


Feature image: The London Edition by Skyline Chess — London’s architectural icons reimagined as chess pieces. Skyline Chess was founded by Chris Prosser and Ian Flood, two London-based architectural designers who turned the city’s skyline into a game of strategy.


April 2026


In the space of ten days, Anthropic sent four signals that, taken together, tell you more about where AI partnerships are heading than any strategy deck or keynote.

The first was technical. The second was governance. The third was commercial. The fourth was geographic. None of them should be read in isolation — and all of them land differently depending on whether you’re looking at the global chessboard, the UK regulatory landscape, or your own venture.

Let me unpack all four.


Signal 1: The AISI Evaluation — When Your AI Can Hack a Corporate Network End-to-End

On 13 April, the UK’s AI Security Institute published its evaluation of Claude Mythos Preview’s cyber capabilities. The headline number is striking enough: on expert-level capture-the-flag challenges — tasks no AI model could complete before April 2025 — Mythos Preview now succeeds 73% of the time.

But the real finding is deeper. AISI built a 32-step corporate network attack simulation called “The Last Ones” — from initial reconnaissance through to full network takeover, a workflow they estimate takes human experts around 20 hours. Mythos Preview is the first model to complete it from start to finish, doing so in 3 out of 10 attempts. The next-best model, Claude Opus 4.6, averaged 16 of the 32 steps.

AISI is careful with its caveats: the test environment had no active defenders, no endpoint detection, no consequences for tripping alerts. They cannot confirm whether Mythos could breach a properly hardened enterprise network. But the trajectory is the point. Two years ago, the best models could barely handle beginner-level tasks. Now one of them can autonomously chain 32 steps into a complete intrusion.

Meanwhile, the Council on Foreign Relations published a piece calling Mythos an inflection point for global security, noting that Anthropic had restricted the full model from public release — a first — because of its capacity to independently discover thousands of zero-day vulnerabilities in software infrastructure believed to be among the most secure in existence.

The Bank of England’s Cross Market Operational Resilience Group (CMORG) is convening a briefing for UK bank and insurance chief executives within the fortnight. Goldman Sachs confirmed it is already working with Anthropic on defences. This is no longer a theoretical conversation.

Sources: AISI Blog Post · Help Net Security · CFR Analysis · CyberScoop


Signal 2: Novartis CEO Joins Anthropic’s Board — Governance as Strategy

On 14 April, Anthropic announced the appointment of Vas Narasimhan, CEO of Novartis, to its Board of Directors. Narasimhan was appointed through the company’s Long-Term Benefit Trust — the independent body with no financial stake in Anthropic that exists to keep governance aligned with both shareholder interests and the company’s public benefit mission.

This is the second board appointment through the Trust in two months (Chris Liddell, the former Microsoft CFO and Trump White House deputy chief of staff, joined in February), and it tips the balance: Trust-appointed directors now hold a majority of board seats.

Read that again. At a company valued at $380 billion, with revenue run-rate past $30 billion, an independent governance body with no financial stake now holds the board majority. That is a structural choice with enormous implications for how Anthropic navigates the tension between growth and mission — particularly as it reportedly weighs an IPO this year.

Narasimhan’s appointment is not decorative. He has overseen the development and approval of more than 35 novel medicines. As Daniela Amodei put it: getting powerful new technology to people safely and at scale is what Anthropic thinks about every day, and Narasimhan has been doing exactly that for years. Healthcare and life sciences are among AI’s highest-stakes regulated domains — precisely the territory where the gap between “impressive demo” and “safe deployment at scale” is widest.

The timing is deliberate. Anthropic is in open legal conflict with the Pentagon over its refusal to allow Claude to be used for autonomous weapons or domestic mass surveillance — a dispute that led to an unprecedented supply chain risk designation, a federal ban, and ongoing litigation. Adding a pharma CEO through the Benefit Trust, while being blacklisted by the Department of Defense for holding ethical red lines, is a governance statement that says: we are building our board for the long game, not the next news cycle.

Sources: Reuters/Yahoo Finance · Pharmaphorum · SWI/Swissinfo · IT Brief


Signal 3: The Coefficient Bio Acquisition — People, Not Just Models

In early April, Anthropic acquired Coefficient Bio, a stealth biotech AI startup, for $400 million in stock. The team is roughly 10 people. They were working on AI models for biological research — pursuing, in their own framing, artificial superintelligence for science. They join Anthropic’s healthcare and life sciences team.

$400 million for 10 people. The investor, Dimension, reported a 38,513% IRR on the deal.
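For a sense of the mechanics behind that number, here is a minimal sketch of single-cheque IRR. Neither Dimension’s investment amount nor its holding period has been disclosed, so the figures below are illustrative assumptions; what the sketch shows is how a large multiple over a short hold compounds into a five-digit annualised percentage.

```python
# Minimal IRR sketch for a single cash-in, single cash-out investment.
# HYPOTHETICAL INPUTS: neither the cheque size nor the holding period
# of the Dimension / Coefficient Bio deal has been disclosed.

def annualised_irr(invested: float, returned: float, years: float) -> float:
    """Solve invested * (1 + irr) ** years == returned for irr."""
    return (returned / invested) ** (1 / years) - 1

invested = 5_000_000     # assumed seed cheque (illustrative)
returned = 400_000_000   # the reported $400M all-stock acquisition
years = 0.75             # assumed ~9-month hold (illustrative)

print(f"Multiple: {returned / invested:.0f}x")                             # 80x
print(f"Annualised IRR: {annualised_irr(invested, returned, years):.0%}")  # ~34,000%
```

With those assumed inputs, the sketch lands in the same tens-of-thousands-of-percent territory as the reported 38,513%. The lesson is less about the exact figure than the mechanics: IRR rewards speed as much as scale.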

This follows Anthropic’s launch of Claude for Life Sciences (October 2025) and Claude for Healthcare (January 2026). Eric Kauderer-Abrams, Anthropic’s head of biology and life sciences, has described healthcare and life sciences as one of the company’s largest strategic bets — spanning everything from early-stage discovery through translation and commercialisation.

If you’ve been following this newsletter, the pattern is familiar. When I wrote about the Coefficient Bio acquisition in Week 13, I made the point that these deals tell you where the real constraint is: not compute, not data, not even the model architecture. People. Integration. The ability to bring a small team’s deep domain knowledge into a larger system without destroying what made them valuable. It is the same challenge I see every week in my own work — whether that’s building the Smart Mountains team across borders, structuring the Imprese Favolose ecosystem, or advising Centro Consorzi on leadership transition.

The acquisition signal, combined with the Narasimhan board appointment, makes Anthropic’s strategic intent unmistakable: healthcare and life sciences is the next frontier where AI capability meets regulated complexity, and they are assembling the people and governance to play that game seriously.

Sources: TechCrunch · Fierce Biotech · Eric Newcomer


Signal 4: London, Not Brussels — Why Both Frontier Labs Chose the Same City

Today — the day I’m writing this — Anthropic announced it is taking office space in London’s Knowledge Quarter for up to 800 people, quadrupling its current 200-strong UK team. This comes days after OpenAI unveiled plans for its first permanent London office, a 544-seat hub due to open in 2027, which it calls its largest research centre outside the United States.

Two frontier AI labs. Same city. Same neighbourhood. Same week.

The Knowledge Quarter — King’s Cross and the streets around it — is now home to Anthropic, OpenAI, Google DeepMind, Meta, Synthesia, Wayve, and a growing cluster of AI companies. It has become, arguably, the most concentrated AI research district outside San Francisco.

The question worth asking is: why London, and not an EU capital?

Paris has Mistral. Berlin has a deep engineering base. Dublin offers tax efficiency and EU single market access. Brussels is where the regulators sit. Yet both Anthropic and OpenAI have chosen London as their primary European base — and the reasons tell you something important about what frontier AI companies actually optimise for.

Talent density and language. London has roughly 120,000 software engineers and the deepest pool of AI/ML venture capital in Europe — more than €26.5 billion invested over the past decade. English as a working language matters: it reduces friction in hiring globally, integrating distributed teams, and engaging with US headquarters. For companies whose primary research language is English, this is not trivial.

Regulatory positioning — close to the EU, but not inside it. Post-Brexit, the UK sits in an unusual position: close enough to EU markets for commercial access (Eurostar from King’s Cross to Brussels takes two hours), but outside the EU AI Act’s direct jurisdiction. The UK’s approach to AI regulation has been deliberately lighter-touch — sector-specific, principles-based, working through existing regulators rather than creating a new horizontal framework. For companies like Anthropic and OpenAI, which are simultaneously pushing capability boundaries and navigating safety concerns, this regulatory stance offers room to move that the EU AI Act’s risk-classification approach does not.

This is not about avoiding regulation. Anthropic, of all companies, has demonstrated it will hold ethical red lines even against the US Department of Defense. It is about choosing a regulatory environment that understands AI safety deeply (AISI, NCSC, the Bletchley Summit legacy) without imposing the compliance architecture of the EU AI Act before the technology and its risks are fully understood.

The political courtship. The UK government has been actively wooing Anthropic. The Financial Times reported that the Department for Science, Innovation and Technology drafted proposals including office expansion and a potential dual stock listing, with backing from Downing Street. Dario Amodei is visiting Britain in late May on a European customer and policy tour. Former PM Rishi Sunak joined Anthropic as a senior adviser last year. The UK government is already using Claude in GOV.UK Chat, a public-sector deployment built on Anthropic’s models through AWS Bedrock.
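For readers who want to see what “built on Anthropic’s models through AWS Bedrock” means in practice, here is a minimal sketch of that integration pattern using boto3’s Converse API. It is emphatically not the GOV.UK Chat implementation (the region, model ID, and prompts are illustrative assumptions); it simply shows the shape of the dependency: the public service calls AWS, and AWS hosts the Anthropic model.

```python
# Minimal sketch of calling an Anthropic model through AWS Bedrock.
# ILLUSTRATIVE ONLY: not the GOV.UK Chat codebase. Region, model ID,
# and prompts are assumptions chosen for demonstration.
import boto3

client = boto3.client("bedrock-runtime", region_name="eu-west-2")  # AWS London region

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # an existing Bedrock model ID
    system=[{"text": "You answer questions about UK government services."}],
    messages=[
        {"role": "user", "content": [{"text": "How do I renew a UK passport?"}]},
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```

The design point is the indirection: the contract and the data residency sit with the cloud provider while the model remains Anthropic’s, which is one reason cloud marketplaces have become a common route for frontier models into the public sector.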

The honest subtext, acknowledged privately by UK officials, is that Britain has no homegrown frontier lab. The strategy is partnership, not competition — tie the best US labs to UK infrastructure, research, and talent before other European capitals do. For Anthropic, facing an unprecedented legal battle with the US government, the UK’s courtship offers something valuable: a major democratic ally that wants its technology and shares its safety commitments, without the political hostility it faces in Washington.

What the EU loses. The flip side is real. Every 800-person office in London is an 800-person office that is not in Paris, Berlin, or Amsterdam. The EU AI Act was designed to set global standards, but if the companies building the most capable models choose to headquarter their European operations outside the EU’s jurisdiction, the Act risks becoming a framework that regulates the deployment of AI without meaningfully influencing its development. For European ventures — including those I work with in Italy — this creates a two-speed reality: EU-regulated markets where AI adoption is shaped by compliance requirements, and a UK hub where the frontier labs are writing the next chapter.

Sources: CNBC/Benzinga · Digit · Financial Times via ResultSense · Engadget · Implicator


What This Means: Three Levels

Global

The Anthropic-Pentagon standoff has become the defining case study in AI governance. A company valued at $380 billion, whose model can autonomously complete a 32-step corporate cyberattack, is simultaneously being blacklisted by the US Department of Defense for refusing to allow that same technology to be used for autonomous weapons or mass surveillance. A federal judge called the supply chain risk designation “classic First Amendment retaliation” and blocked it. The appeals court denied Anthropic’s challenge on different grounds. The litigation continues.

Meanwhile, Anthropic has assembled Project Glasswing — a consortium of roughly 40 organisations including Amazon, Apple, Google, JP Morgan, Microsoft, and Nvidia — to use Mythos Preview defensively, preemptively identifying zero-day vulnerabilities at scale. OpenAI, reportedly six months behind on comparable capabilities, is conspicuously excluded.

The global picture is this: the most capable AI model ever built is too powerful to release, its creator is in legal conflict with the world’s largest military over ethical red lines, and the defensive consortium it has built reads like a who’s who of the global economy. The CFR’s Gordon Goldstein is right to call this an inflection point. The question is no longer whether AI reshapes the offence-defence balance in cybersecurity. It is whether defenders can move fast enough to matter.

And while this plays out, the geography of AI power is shifting. Both Anthropic and OpenAI are building their largest European operations in London — not Paris, not Berlin, not Brussels. The EU built the AI Act to set global standards. But if the companies building the most capable models choose to base their European headquarters outside the EU’s jurisdiction, Brussels risks regulating the deployment of AI while having diminishing influence over its development. The UK’s lighter-touch, safety-literate regulatory approach — born from the Bletchley Summit, AISI, and now the CMORG response — is proving more attractive to frontier labs than the EU’s compliance-first framework. That has consequences for every European country trying to position itself in the AI economy.

National (UK)

For the UK, this has been a defining week. The AISI evaluation positions a UK institution at the centre of the global conversation about AI capabilities and risk. CMORG is convening a CEO-level briefing for UK banks and insurers within the fortnight. And now both Anthropic and OpenAI are building major European headquarters in the same London postcode.

The practical message from AISI is almost disarmingly simple: cybersecurity basics still matter. Patching discipline, access controls, hardened configuration, comprehensive logging. AISI explicitly points organisations to the NCSC Cyber Essentials scheme. The most sophisticated AI-driven attack in history is still stopped, for now, by the fundamentals.

But the London expansion story is where UK positioning becomes strategic. The UK government has played this astutely — courting Anthropic with office expansion proposals and dual-listing discussions precisely when the company is in conflict with Washington. The subtext is plain: Britain has no homegrown frontier lab, so the strategy is partnership — tie the best US labs to UK infrastructure, talent, and regulatory culture before Paris, Berlin, or Brussels can.

The choice of London over EU capitals is not incidental. It reflects a real divergence. The EU AI Act imposes a horizontal, risk-classification regulatory framework. The UK has opted for sector-specific, principles-based guidance through existing regulators. For frontier labs navigating both safety commitments and rapid deployment, the UK approach offers more operational flexibility — not less regulation, but different regulation. Add the English-speaking talent pool, the VC depth (€26.5 billion in AI/ML venture capital over the past decade), and the Eurostar proximity to Brussels and Paris, and the logic is clear.

For UK ventures — particularly those operating in regulated sectors like environmental intelligence, healthcare, or financial services — the implication is twofold. First, your AI governance story just became a competitive asset, not a compliance burden. Second, if you operate across the UK-EU border (as I do with Imprese Favolose and Smart Mountains), you are now navigating two distinct AI regulatory regimes, and the gap between them is widening. Understanding that gap is not optional — it is the operating context.

Personal

Four things I take from this week into my own work.

First, governance is not overhead — it is strategy. Anthropic’s Long-Term Benefit Trust model, with Trust-appointed directors now holding the board majority at a $380 billion company, is the most ambitious governance experiment in tech. It is not perfect, and it may not survive the pressures of an IPO. But it demonstrates something I believe deeply: that the structure you build around your venture matters as much as the venture itself. It is why I’ve invested in the FabCube Base conflict-of-interest framework, the C1–C4 classification protocol, and the Red Lines Framework. These are not bureaucratic exercises. They are how you build trust at scale.

Second, the people constraint is real and universal. Anthropic paid $400 million for 10 people. The IRR on that deal was 38,513%. Whether you’re building a life sciences AI platform or an Alpine environmental intelligence venture, the constraint is always the same: finding the right people, integrating them without destroying what makes them valuable, and creating structures that let them do their best work across borders and cultures.

Third, red lines are not weaknesses — they are signals. Anthropic refused to allow its most capable model to be used for autonomous weapons or mass surveillance. The Pentagon blacklisted them. A federal judge called it retaliation. And in the same week, Anthropic’s revenue run-rate passed $30 billion, its customer base doubled, and it appointed a Novartis CEO to its board. Holding your red lines — whether as a company or as a Partnership Architect — is not a constraint on growth. It is the foundation of trust that makes growth possible.

Fourth, geography still matters — even in AI. Both frontier labs chose London. Not a cloud region. Not a virtual hub. An actual neighbourhood with actual desks. For someone who builds partnerships across Winchester, Valtellina, and Belluno, this is a useful corrective to the “work from anywhere” narrative. Rooms matter. Proximity to regulators, talent, and partners matters. And the UK-EU regulatory divergence on AI is now a live strategic variable for anyone operating cross-border ventures in both jurisdictions. I see it every week in the Imprese Favolose structuring work — the stabile organizzazione (permanent establishment) questions, the withholding tax provisions, the compliance architecture. The London-vs-EU choice that Anthropic and OpenAI just made will ripple down to ventures at every scale.


The signals are loud this week. The question, as always, is what you do with them.


Fab Campaigns Ltd · Partnership Architect · Winchester, UK



© 2026 Fabrizio de Liberali. All original content published under CC BY 4.0.