Posts

100% Secure. 0% Accessible.

Cartoon co-created with ChatGPT. See more of my AI co-creations.

21–32% of Cloud Spend Wasted – The Case for FinOps

Industry reports put average cloud waste between 21% and 32% of cloud budgets, with some analyses showing even higher figures as AI workloads drive unexpected spikes in spending. This “cloud shock” happens because cloud usage is elastic and decentralized, yet traditional finance and IT processes treat it like fixed capital expenditure. The result: over-provisioned resources, idle instances, forgotten workloads, and poor alignment between engineering speed and business value. Cloud cost management is the process of tracking, optimizing, and managing cloud computing costs. Cloud cost management and FinOps (Financial Operations) are terms often used interchangeably, but there are key differences between them. Cloud cost optimization narrows its focus to reducing expenses. In contrast, FinOps casts a broader net, encompassing not only cost optimization but also financial management aspects such as budgeting, forecasting, and insightful reporting. What is FinOps? FinO...

This Week I Learned - Week 13 2026

This Week I Learned -
* Anthropic's Claude Code, a closed-source AI coding CLI tool, leaked ~512,000 lines of TypeScript source code on March 31, 2026, via an exposed source map in its npm package, revealing internal architecture, 44 feature flags, and 20 unreleased features; the company responded with DMCA takedowns on original copies. A developer quickly rewrote the codebase in Python using OpenAI's Codex, creating a functional derivative hosted on GitHub that evades copyright claims, amassing 29k stars and 40k forks in hours as an educational open-source alternative. This incident underscores AI's role in accelerating code replication, challenging traditional IP protections for software; Anthropic may overlook enforcement to avoid precedents that could restrict LLM training or generation of similar derived works.
* Andrej Karpathy compares LLMs to probabilistic CPUs that handle tokens statistically, in contrast to traditional deterministic computation based on byte...