Anthropic Is Now a US Defence Supply Chain Risk. OpenAI Is Injecting Ads. Here's What That Means for Your AI Stack.
Apr 02, 2026

Most people pick an AI tool the way they pick a coffee shop: whatever's closest, whatever they tried first, whatever a colleague mentioned in passing. Then they build months of context there: chat history, uploaded files, workflows, preferences the model has learned over time. They don't think much about what sits underneath until it breaks.
Two developments in the last few weeks deserve more attention than they've received.
First, Anthropic: the company behind Claude, which is arguably the strongest model on the market right now for creative and nuanced work, has been flagged as a supply chain risk by the US Department of Defense. That means anyone connected to the DoD supply chain, even indirectly, now faces restrictions on using Claude. If you're a contractor, a subcontractor, or a vendor to either, this is an operational concern, effective now.
Second, OpenAI has begun experimenting with advertising in the US. If you use ChatGPT, your conversations may start to carry commercial weight: not just for you, but for whoever is paying for placement. The incentive structure shifts the moment ads enter the picture. The model's job is no longer purely to help you think; it's also to surface something someone else paid for. That's a meaningful change to the relationship between you and the tool, even if the ads are initially subtle.
Neither of these developments is catastrophic on its own. But together, they illustrate something important: the major AI providers are accumulating regulatory, commercial, and geopolitical baggage at a pace that most users aren't tracking.
And if you've built your working life inside one of these platforms, with your files, your context, and your history all living there, you're more exposed than you think.
What happened on Friday when Claude went down
My co-founder and I were working on Friday when we noticed Claude had gone down. The status page said operational. The model said otherwise: it sat there, spinning, unable to return a response. This happens with some regularity at the moment; Claude has been in and out for stretches of minutes at a time, often across several hours.
We kept working anyway, even though Claude is currently the model behind the chat feature in our platform, because Virtually Myself® has automatic failover: cascading models that step in when a provider drops out. Claude went down, the next best available model for the task picked up, and the conversation continued with all of its context intact.
No interruption. No loss of files or chat history. No starting over.
Single-model dependency is a business risk
That seamlessness reflects a broader principle in how Virtually Myself® is built: we use the best available model for each task at all times. Claude Opus 4.6 is the best model on the market for writing. Perplexity is the best for research. Deepgram is the best for transcription.
Rather than locking you into a single provider, the platform routes to whichever model will give you the strongest result for what you're actually doing, and automatic failover means that if any one of those providers drops out, your work continues without interruption.
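For readers who like to see the shape of the pattern, here is a minimal sketch of what cascading failover can look like in principle. The provider names, functions, and behaviour below are illustrative stand-ins, not the actual Virtually Myself® implementation.

```python
# Illustrative sketch only: providers are simulated; a real system would wrap
# each vendor's API client behind the same interface.

class ProviderUnavailable(Exception):
    """Raised when a provider times out or errors out."""


def _claude(messages):
    raise ProviderUnavailable("claude")  # pretend Claude is mid-outage


def _fallback(messages):
    return f"[fallback model] replying with {len(messages)} prior messages of context"


PROVIDERS = [_claude, _fallback]  # ordered by preference for this task


def complete_with_failover(messages, providers=PROVIDERS):
    """Try each provider in order, handing the same conversation to whichever answers."""
    for provider in providers:
        try:
            return provider(messages)
        except ProviderUnavailable:
            continue  # this provider is down; cascade to the next one
    raise RuntimeError("No provider available for this request")


history = [
    {"role": "user", "content": "Draft the intro for Friday's article."},
    {"role": "assistant", "content": "Here's a first pass..."},
    {"role": "user", "content": "Tighten the second sentence."},
]
print(complete_with_failover(history))
```

The detail that matters isn't the code; it's that the conversation history sits outside any single provider, so the cascade can hand the full context to whichever model is available.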
That's useful on a minute-to-minute basis when a provider is intermittent. But the bigger point goes well beyond five-minute outages. It's about what happens when the disruption is permanent.
What happens if you can't use Anthropic anymore because your client sits in the DoD supply chain?
What happens if OpenAI's ad model evolves in a direction you're not comfortable with?
What happens if a provider changes its pricing, its terms, its data handling, or its availability in your region?
If your context lives inside that provider, and only inside that provider, you're starting from zero. Every conversation, every file, every preference the model has built up about how you work: gone. For anyone who uses AI seriously, that's a material setback.
Reliability isn't a feature of any one model
The lesson here isn't "pick the right provider." No single provider will stay the right choice indefinitely. Models improve and regress. Companies get acquired, regulated, restricted, or simply change direction. The competitive landscape shifts every quarter.
What matters, then, is not which model you're using today but whether you can move between models without losing what you've built. Portability of context, files, and chat history is the thing that protects you: from outages, from regulatory shifts, from commercial decisions made in a boardroom you'll never see.
This is the architecture we built into Virtually Myself®, and moments like these are exactly why.
When Claude drops out, your work doesn't. When a provider becomes unavailable for regulatory reasons, your history doesn't disappear with it. The platform holds your context independently of any single model, so the model serves you rather than the other way around.
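Conceptually, that independence comes from keeping the conversation in a neutral format the platform owns, rather than inside any vendor's system. The sketch below is a simplified illustration of the idea; the field names and structure are hypothetical, not our production schema.

```python
import json
from dataclasses import dataclass, field, asdict


@dataclass
class Conversation:
    """Conversation state kept in a neutral format the platform owns, not a vendor."""
    messages: list = field(default_factory=list)
    files: list = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def to_json(self) -> str:
        # Plain data: it can be replayed against any provider's API, or exported
        # wholesale if a provider becomes unusable.
        return json.dumps(asdict(self), indent=2)


chat = Conversation(files=["q3-strategy.docx"])
chat.add("user", "Summarise the attached strategy doc.")
chat.add("assistant", "Here are the five key moves...")

# Switching providers means pointing this same record at a different API,
# not starting the relationship from zero.
print(chat.to_json())
```

The design choice is the same one described above: the record belongs to the platform, and to you, so no single provider's outage, policy change, or restriction takes it with them.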
Virtually Myself® is a multi-model AI platform built for professionals and teams who use AI as a core part of how they work. It gives you access to the best available models with built-in redundancy, so your context, files, and chat history persist regardless of what happens to any single provider. All data is stored in Australia, which for Australian-based users meaningfully reduces geopolitical risk and exposure to foreign government access. That matters for everyone, and it matters especially if you're working with sensitive, confidential, or classified documents, as some of our clients do.
The question worth asking before you need the answer
Reflecting on where we are right now, AI is entering a phase where the technology itself is extraordinary but the structures around it are becoming increasingly unpredictable.
Regulations will tighten in ways no one can fully anticipate. Business models will shift as investors demand returns. Geopolitical tensions will reshape who can use what, and where.
Thriving through all of that means building your practice on something bigger than whichever model looked best in 2025. A foundation that doesn't depend on any single model staying available, staying neutral, or staying the same.
The most important AI decision you make this year probably won't be which tool you choose. It will be whether what you build with it belongs to you when the landscape shifts underneath it.
And it will shift. The only question is whether you're ready when it does.
About the Author:
Nina Christian is co-founder of Virtually Myself®, an Australian-built multi-model AI platform designed to help experts capture, protect, and scale their intellectual property. A marketing strategist and personal branding expert with over 25 years' experience, she is the author of Marketing Me and Solar System Marketing, and a Fellow and Life Member of the Australian Marketing Institute. She writes about the intersection of AI, voice, and thought leadership, helping experts make sense of what's changing and stay ahead of it.