Tech Rivals Using Each Other's Code
Today's AI coding tools look like fierce competitors, but they're all built on each other's work.
Today’s AI-powered IDEs, including Cursor, Windsurf, Kiro, and many others, all stand on the shoulders of open-source work contributed by companies that once battled each other in the browser wars.
Windsurf, for example, is a fork of Microsoft’s VS Code. VS Code itself runs on Electron, which is built on top of Chromium and Node.js. Chromium comes with Google’s V8 JavaScript engine and uses Blink, a fork of Apple’s WebKit.
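You can see that layering directly from inside any Electron app. Here's a minimal sketch, assuming an Electron main process (VS Code's Help > About dialog surfaces the same version numbers):

```typescript
// Sketch: run in an Electron main process (e.g. a fork of VS Code).
// process.versions reports every layer of the stack described above.
console.log("Electron :", process.versions.electron); // the app shell
console.log("Chromium :", process.versions.chrome);   // rendering engine (Blink)
console.log("Node.js  :", process.versions.node);     // backend runtime
console.log("V8       :", process.versions.v8);       // JavaScript engine
```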
So even when Google, Microsoft, GitHub, and Apple compete publicly, their code ends up powering each other’s tools behind the scenes. Open source has turned “tech rivals” into “reluctant collaborators,” whether they planned it or not.
The AI Model Dependencies
The same pattern exists with AI models. Competing IDEs use each other's models, hosted on each other's clouds.
GitHub Copilot (Microsoft) uses:
- OpenAI models (GPT-4.1, GPT-5 family) - hosted on Azure and OpenAI
- Anthropic Claude models (Haiku 4.5, Sonnet 3.5/4/4.5, Opus 4.1) - hosted on AWS, Google Cloud, and Anthropic
- Google Gemini 2.5 Pro - hosted on Google Cloud
- xAI Grok Code Fast 1 - hosted on xAI
(Models in the Copilot Chat model picker as of November 21, 2025. Raptor mini is a fine-tuned GPT-5 mini.)
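Extensions can tap that same multi-vendor picker programmatically. Here's a rough sketch using VS Code's Language Model API; the "family" value is an assumption, since what's actually available depends on the user's Copilot plan and editor version:

```typescript
// Sketch of a VS Code extension asking Copilot for one of these models via
// the Language Model API (vscode.lm).
import * as vscode from "vscode";

export async function askCopilotModel(prompt: string): Promise<string> {
  // Pick any Copilot-served model matching the selector.
  const [model] = await vscode.lm.selectChatModels({
    vendor: "copilot",
    family: "gpt-4o", // assumed family name; Claude and Gemini families also appear
  });
  if (!model) {
    return "No matching model is available.";
  }

  // Send a single user message and collect the streamed reply.
  const response = await model.sendRequest(
    [vscode.LanguageModelChatMessage.User(prompt)],
    {},
    new vscode.CancellationTokenSource().token
  );

  let text = "";
  for await (const chunk of response.text) {
    text += chunk;
  }
  return text;
}
```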
Cursor (Anysphere) uses:
- OpenAI models
- Anthropic Claude models (3.5 Sonnet, 4 Sonnet, 4.5 Sonnet, 4.1 Opus)
- Google Gemini 2.5 Pro
- xAI Grok models
- DeepSeek models
- Composer (own)
Models are hosted on US-, Canada-, and Iceland-based infrastructure by the model's provider, a trusted partner, or Cursor directly.
Kiro (AWS), which offers AI coding with spec-driven development, uses:
- Anthropic Claude models (Sonnet 3.7, Sonnet 4)
- Auto mode with "best in class LLM models"
AWS built an IDE around models from Anthropic, a company it has invested billions in but doesn't control, and whose models also power its competitors' tools.
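Under the hood, "using each other's models" mostly means wrapping several vendors' SDKs behind one interface. Here's a simplified sketch; the routeChat wrapper and model ids are illustrative placeholders, not any IDE's real code:

```typescript
// Illustrative only: fanning a single chat request out to rival providers.
// The provider SDKs (openai, @anthropic-ai/sdk) are real; everything else
// here is a hypothetical stand-in for an IDE backend.
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";

type Provider = "openai" | "anthropic";

const openai = new OpenAI();       // reads OPENAI_API_KEY
const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY

async function routeChat(provider: Provider, prompt: string): Promise<string> {
  switch (provider) {
    case "openai": {
      const res = await openai.chat.completions.create({
        model: "gpt-4.1", // placeholder model id
        messages: [{ role: "user", content: prompt }],
      });
      return res.choices[0]?.message.content ?? "";
    }
    case "anthropic": {
      const res = await anthropic.messages.create({
        model: "claude-sonnet-4-5", // placeholder model id
        max_tokens: 1024,
        messages: [{ role: "user", content: prompt }],
      });
      const block = res.content[0];
      return block?.type === "text" ? block.text : "";
    }
  }
}
```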
The Ironic Details
Microsoft needs Amazon:
Claude Sonnet 3.5 in GitHub Copilot runs exclusively on AWS infrastructure. Microsoft routes requests through its biggest cloud rival to serve that model.
Everyone shares caching:
GitHub Copilot uses Anthropic's prompt caching across AWS Bedrock, Google Cloud Vertex AI, and Anthropic's own systems. Three competitors' infrastructure stitched into one feature.
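For reference, prompt caching is an API-level feature: the client marks a large, stable prefix as cacheable so follow-up requests reuse it wherever the model is hosted. A minimal sketch with the Anthropic TypeScript SDK, where the model id and context string are placeholders:

```typescript
// Minimal sketch of Anthropic prompt caching: mark a large system block as
// cacheable with cache_control so later requests can reuse the prefix.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY

const LARGE_CONTEXT = "/* repository summary, style guide, etc. */";

const response = await client.messages.create({
  model: "claude-sonnet-4-5", // placeholder model id
  max_tokens: 1024,
  system: [
    {
      type: "text",
      text: LARGE_CONTEXT,
      cache_control: { type: "ephemeral" }, // reuse this prefix on later calls
    },
  ],
  messages: [{ role: "user", content: "Review the latest diff." }],
});

console.log(response.content);
```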
AWS competes with its own customer:
Kiro uses Claude models while competing directly with Anthropic's own coding tool, Claude Code. AWS both enables and competes with Anthropic.
Why This Works (For Now)
- No single company can build the best models, IDE, and infrastructure simultaneously
- Open source made the foundation layers free to copy
- Developers can switch tools easily, so lock-in doesn't work
- The real competition is between model makers (OpenAI vs Anthropic vs Google) and clouds (AWS vs Azure vs GCP) - IDEs are just the interface
What Could Break
- If AWS and Anthropic split, Kiro loses its core models
- If Anthropic prioritizes Claude Code, API access for competitors could get expensive
- If OpenAI launches a competitive IDE, will they still serve Cursor at good rates?
- When the market consolidates (and it will), these open arrangements may close
The Bottom Line
Your favorite coding IDE probably:
- Runs on your second-favorite company's framework
- Uses your third-favorite company's AI models
- Is hosted on your fourth-favorite company's cloud
- And is built on open source code from a dozen rivals
Tech companies became accidental partners because open source gave them no choice. Developers benefit from the chaos, at least for now.
- Written by Claude, based on my study of AI coding tools and my prompts.