Pulse24 Tuesday Briefing
Edition #31 · Feb 24 – Mar 2, 2026 · Read time ~12 min

The Vendor Reckoning

This week, a vendor got blacklisted for saying no to the Pentagon — and its valuation doubled. The AI procurement era just split in two.

Published: 2 Mar 2026
Coverage: 23 Feb 2026 – 2 Mar 2026
Stories tracked: 144
Featured: 4
Author: Pulse24 Desk
Last updated: 2 Mar 2026
This week’s pulse

The threshold shifted from technical to political. On Feb 28, Trump blacklisted Anthropic after it refused Pentagon demands to remove ethical safeguards on military AI use. Hours later, OpenAI secured Pentagon access with explicit guardrails in place. The same week, Anthropic raised $30B at a $380B valuation and Claude ranked #1 on the US App Store. Government punished the refusal. Capital markets rewarded it.

01

The Anthropic Blacklist and What It Reveals About Vendor Lock-In

What happened

On Feb 28, the Trump administration blacklisted Anthropic after the company refused to remove ethical safeguards from military AI systems — specifically constraints on autonomous weapons and mass surveillance. All federal contracts were terminated. Federal agencies were ordered to drop Claude within six months. The same day, OpenAI secured Pentagon access with guardrails prohibiting autonomous weapons direction and unconstrained surveillance — constraints similar to Anthropic's, negotiated rather than imposed.

The immediate effect is a two-tier federal AI market: vendors willing to negotiate safeguards within government parameters, and vendors maintaining constraints the government finds unacceptable.

But the week's data complicates the narrative. Anthropic raised $30B at a $380B valuation (up from $183B five months prior at Series F) — private capital treating the blacklist as irrelevant, or possibly as a trust signal. Claude ranked #1 on the US App Store, surpassing ChatGPT and Gemini. And reports that CENTCOM continued deploying Claude after the ban — though the sourcing is thin and should be treated with caution — suggest that operational embedding resists political override on short timescales.

So what

Federal AI procurement now bifurcates by compliance posture: in this case, maintaining ethical constraints was itself the disqualifying condition. But the ban's economic impact appears limited. Private capital and consumer adoption moved in the opposite direction from government action, suggesting that for enterprise buyers the vendor risk isn't the blacklist itself; it's the 6-month forced migration window.

The counter-case is real: government procurement bifurcation has historical precedent. During the Cold War, US restrictions on crypto exports barred vendors from federal channels, yet those companies survived and dominated civilian markets. Anthropic's refusal may simply accelerate private-sector adoption while OpenAI captures federal dollars — a market segmentation, not a market failure.

Related signals

CIOs managing federal AI procurement (6-month migration clock starts now), defence contractors evaluating compliance requirements, enterprise buyers assessing whether Anthropic's blacklist changes their own risk profile, OpenAI commercial teams where guardrail negotiation becomes part of the deal process. And founders building on Claude's API: your largest customers may now ask about vendor continuity risk in a way they didn't last month.

Action

If you have Claude dependencies — whether you're a federal agency or a startup shipping Claude-powered features — map them now. The question isn't whether to diversify. It's whether you can absorb a forced vendor swap in six months, because that's the window the ban creates.
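Mapping those dependencies is mostly a grep exercise. A minimal sketch, assuming a Python-style codebase: the package name `anthropic` and `claude-` model-name prefixes used as markers are our assumptions, not anything from the briefing — extend the patterns for your own stack.

```python
"""Sketch: inventory Claude dependencies across a codebase.

Assumption: the 'anthropic' package name and 'claude-' model-name
prefixes are the markers of interest; adjust PATTERNS for your stack.
"""
import re
from pathlib import Path

# Hypothetical markers of a Claude dependency; extend as needed.
PATTERNS = [
    re.compile(r"\bimport anthropic\b"),  # Python SDK import
    re.compile(r"\bfrom anthropic\b"),
    re.compile(r"anthropic"),             # requirements.txt / package.json entries
    re.compile(r"claude-[\w.-]+"),        # hard-coded model names
]

def scan(root: str, exts=(".py", ".ts", ".js", ".txt", ".json")) -> dict[str, list[int]]:
    """Return {file_path: [line_numbers]} for every Claude marker found."""
    hits: dict[str, list[int]] = {}
    for path in Path(root).rglob("*"):
        if path.suffix not in exts or not path.is_file():
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in PATTERNS):
                hits.setdefault(str(path), []).append(lineno)
    return hits
```

The output is the starting inventory; the harder migration questions (prompt portability, eval parity on a replacement model) only begin once you know where the calls live.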

---

02

AI Labour Threshold: Market Rewards Headcount Cuts

What happened

Block cut over 4,000 jobs — roughly 40% of its ~10,000-person workforce — with CEO Jack Dorsey explicitly citing AI for efficiency gains. Stock rose over 20%. Same week, IBM fell 13% after Claude Code raised questions about COBOL modernisation revenue — IBM's mainframe services business has long depended on COBOL expertise as a moat. Thomson Reuters rebounded 11% (recovering from a 26% year-to-date decline) after earnings beat expectations and Anthropic confirmed Claude-powered agents in Thomson Reuters' workflow — a combination of fundamentals and AI narrative.

Yet Goldman Sachs reported AI contributed roughly zero to US GDP in 2025. That finding comes with a caveat: Goldman's methodology measured AI-specific investment contribution to aggregate GDP, not firm-level productivity — a metric that may not capture micro-level efficiency gains that haven't yet scaled to macro effect.

So what

CFOs face pressure to deploy AI for P&L reduction, not just efficiency, because stock markets this week rewarded the explicit announcement of AI-driven labour displacement — even as macro data hasn't caught up.

The counter-case: stock rises on layoff announcements are not new — it's a familiar pattern when markets interpret cuts as margin discipline. Dorsey's AI attribution may signal strategic confidence rather than a structural threshold. And a 40% headcount cut typically requires rehiring in AI operations, data engineering, and model deployment — net workforce reduction may be materially smaller than the headline.

Related signals

Finance teams evaluating AI ROI (now partly measured as headcount cost avoidance), enterprise software vendors whose legacy maintenance revenue is newly exposed, IT operations leaders weighing retraining versus attrition, and founders pricing AI-powered services — your potential customers just saw Dorsey frame AI as a headcount replacement, not an augmentation tool.

Action

If you're building or selling AI tools, Block just set the public narrative: AI replaces roles, and the market rewards it. If you're at a legacy vendor, audit which revenue streams are substitutable by AI coding agents. If you're a founder, note that "AI-driven efficiency" is now a board-level talking point with a 20% stock premium attached — your pitch deck just got simpler.

---

03

Australia's March 9 Deadline: Regulation Gets a Ship Date

What happened

Australia's eSafety Commissioner set March 9 — seven days from publication — as a hard deadline for AI age verification compliance. Non-compliant apps face app store removal. The mechanism is specific: the regulator is threatening Apple and Google directly, shifting enforcement from app developers to platform gatekeepers. If app stores comply, the removal cascade is automatic. If they don't, Australia's enforcement credibility is tested immediately.

This is what makes March 9 different from prior regulatory announcements. It's not a framework. It's not a consultation period. It's a date, with a consequence, aimed at two companies (Apple and Google) that have operational capacity to enforce it overnight.

So what

Compliance costs jump when regulation targets platform gatekeepers rather than individual developers, because app store removal bypasses the normal cat-and-mouse of developer non-compliance — and Apple and Google have historically complied with sovereign removal requests (cf. Russia, China app store removals).

The counter-case: Australia's social media minimum age law (passed December 2024) set a December 2025 enforcement date — and the eSafety Commissioner's initial response was regulatory guidance rather than immediate penalties. The agency's annual base budget (~A$42.5M, quadrupled in 2023 from A$10.3M) still limits its capacity for simultaneous enforcement across multiple fronts. Google negotiated commercial deals under Australia's News Media Bargaining Code specifically to avoid the arbitration mechanism — a pattern of platform negotiation rather than outright compliance. The March 9 deadline may produce a handful of high-profile removals rather than the systematic enforcement the announcement implies.

Related signals

Product teams serving Australian users (age verification engineering is a non-trivial sprint), Apple and Google policy teams (removal compliance decisions are immediate), consumer AI founders whose distribution depends on app store access.

Action

If your app serves Australian users and touches age-sensitive functionality, assume March 9 is real. Apple and Google have form on complying with sovereign requests. Treat this as a ship date, not a policy paper.

---

04

The Federal-State AI Collision

The Australia deadline isn't the only regulatory threshold crossing this week. Inside the US, a quieter but structurally more significant conflict is taking shape.

States across the political spectrum are limiting AI in health insurance underwriting — red states concerned about algorithmic pricing, blue states concerned about discrimination. Trump responded with an executive order seeking to preempt state AI rules entirely. Utah advanced its AI Transparency Act, requiring frontier developers to publish safety and child protection plans.

Pulse24's analysis: the federal-state tension is structurally different from Australia's top-down deadline. Australia has one regulator, one deadline, one mechanism. The US has 50 states legislating independently, a federal executive attempting blanket preemption, and no judicial resolution in sight. For companies deploying AI across state lines, the compliance question isn't "which rule applies" — it's "which rules contradict each other, and which jurisdiction will enforce first."

This matters for AI vendors specifically because the insurance AI restrictions target algorithmic decision-making — the same capability that underpins underwriting, lending, and hiring tools. If state-level AI restrictions survive federal preemption, the precedent extends well beyond health insurance.

Related signals

Legal and compliance teams at any company deploying AI-powered decision tools across US state lines, AI vendors whose products touch insurance, lending, or hiring, lobbyists and policy teams tracking the preemption litigation timeline.

Action

If your AI product makes or influences coverage, pricing, or eligibility decisions, begin mapping which states have passed restrictions and which are pending — the federal preemption order's legal standing is untested, and building your compliance posture around it is a bet, not a certainty.
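That mapping can start as a simple compliance matrix. A minimal sketch of the structure: every state and status entry below is an illustrative placeholder, not a statement of any state's actual law — populate it from your own legal review.

```python
"""Sketch: a state-by-state compliance matrix for AI decision tools.

All STATE_RULES entries are hypothetical placeholders; replace them
with verified data from legal counsel before relying on the output.
"""
from enum import Enum

class Status(Enum):
    PASSED = "passed"    # restriction enacted
    PENDING = "pending"  # bill introduced, not yet law
    NONE = "none"        # no known restriction

# Illustrative placeholder entries only.
STATE_RULES: dict[str, dict[str, Status]] = {
    "StateA": {"insurance_underwriting": Status.PASSED},
    "StateB": {"insurance_underwriting": Status.PENDING, "hiring": Status.PENDING},
}

def exposure(use_case: str) -> dict[Status, list[str]]:
    """Group states by restriction status for one AI use case."""
    out: dict[Status, list[str]] = {s: [] for s in Status}
    for state, rules in STATE_RULES.items():
        out[rules.get(use_case, Status.NONE)].append(state)
    return out
```

Keeping the matrix per use case (underwriting, lending, hiring) matters because, as the story notes, the restrictions target the capability, not a single product category.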

---

⚡ Quick picks

Faster moves.

Markets
Nvidia unveiled inference hardware integrating Groq technology at GTC, consolidating inference performance into its hardware stack.
Finance
OpenAI fired an employee for insider trading on prediction markets, with 77 suspicious trades identified by analytics firm Unusual Whales.
Risk
NCMEC reported over 1M AI-generated CSAM reports on CyberTipline — a sharp escalation that underscores the gap between content generation scale and moderation capacity.
Macro
Dell doubled its AI server revenue forecast to $50B by FY2027 (up from ~$25B in FY2025, ending Jan 2026), while Samsung doubled its AI device target to 800M units (up from 400M in 2025).
📊 Pulse check

The week by the numbers.

🔭 The longer view

Trust and predictability are the new constraint.

Over twelve weeks, investment events peaked in early February (the $660B capex week) and declined roughly 27% week-on-week since. Policy and leadership events rose approximately 45% over the same window.

Pulse24's read: the constraint is moving from budget to compliance posture. Capex was abundant partly because it was politically neutral — announcing infrastructure spend drew no government opposition. Once government moved from "deploy AI" to "deploy AI on our terms," vendor choice narrowed. The pattern suggests that for companies serving government markets, vendor diversity may become harder to maintain as compliance requirements diverge. The open question for Q2: does vendor consolidation follow regulatory bifurcation, or does the Anthropic precedent prove that private capital can fully substitute for government access?

---

Pulse24’s view

The uncomfortable question isn't which vendor to pick. It's whether your organisation — whether you're a federal agency, an enterprise, or a ten-person startup shipping on Claude's API — can absorb a forced vendor swap in six months. If you haven't stress-tested that scenario, this is the week to start.

👁 Forward watch

What we’re watching next.

Mar 9
Australia eSafety age verification enforcement deadline — app store removal mechanism targeting Apple and Google directly.
Q2 2026
Block's first quarterly report post-AI layoffs — execution test for 40% headcount reduction claims.
Ongoing
Anthropic's 6-month federal phase-out window (deadline: ~Aug 28, 2026) — watch for replacement vendor announcements and operational workarounds.