Posts

Showing posts from April, 2026

Beyond Booking: The Rise of Intelligent Tools for Indian Train Travel

Trains are the lifeline of India. From the humble beginnings of online ticket booking (which was an absolute game-changer), we’ve come a long way. What started as simple add-ons like checking waitlist status or seeing which stations fall on your travel route has now exploded into a full-blown ecosystem of smart tools and services. Today, you can track trains in real-time, check seat availability, predict delays, analyze patterns, and make travel decisions like a pro. Indian railways data has truly become everyone’s playground. Here’s a list of some of the other cool software tools and services that are making train travel smarter, smoother, and way more fun:
- Trains that run between stations, or nearby stations when direct connections are not available
- Seat Availability Calendar
- Train Chart Vacancy
- Train Schedule
- Coach Position
- Train Coach Seat Layout
- All trains departing from a specified station
- Live Station - trains available at a specified station for the next 2 and 4 ...

Google Maps Presents: Traffic Trauma Live

Cartoon co-created with Copilot. See more of my AI co-creations. Google Maps officially rolled out AI-powered audio notifications for accident-prone areas (sample: "Accident prone area for the next 500 meters") and proactive congestion alerts ("Congestion 1 km ahead. You're on the best route") in India in November 2025. While basic traffic alerts have been around for years, this specific update introduced "Accident-Prone Area Alerts" and "Proactive Traffic Alerts" as part of a major integration of Gemini AI into the app. Maps also displays official speed limits beside the in-app speedometer. India was a pioneer market for these specific AI-driven safety features; the proactive and accident-specific audio notifications were part of a specialized "India-First" push. What was "India-First"? Accident-Prone Area Alerts: this feature was developed specifically for the Indian road context. Google collaborated directly with loca...

This Week I Learned - Week 16 2026

This Week I Learned -
* From The Batch -
  - Alignment training teaches LLMs to behave like assistants, but it tethers them to that behavior only loosely. Beyond alignment training, system prompts act as behavioral guardrails, but motivated users can bypass them.
  - The lengths of tasks completed autonomously by AI agents have doubled roughly every seven months, according to METR, an independent testing organization.
  - LLMs' knowledge is still relatively limited with respect to infrastructure and the complex tradeoffs good engineers must make... finding infrastructure bugs - say, a subtle network misconfiguration - can be incredibly difficult and requires deep engineering expertise.
  - Research involves thinking through new ideas, formulating hypotheses, running experiments, interpreting them to potentially modify the hypotheses, and iterating until we reach conclusions. Coding agents can speed up the pace at which we can write research code.
  - Meta has pivoted from its ope...

A Guide to Spec-Driven Development: Plan, Implement, and Validate

The DeepLearning.AI short course on Spec-Driven Development with Coding Agents introduces a professional paradigm shift in how we build applications using agentic coding assistants. Rather than "vibe coding" - where developers rely on quick, high-level prompts that often lead to technical debt - SDD brings engineering rigour back to the process. The agent is the muscle, but the spec is the brain: in this workflow, the human takes on the role of a senior architect, providing the "blueprints" (specifications), while the AI agent acts as the "muscle" to implement those designs. The 1 hour 20 minute video course has 15 short lessons:
- Introduction - 4 mins
- Why spec-driven development? - 4 mins
- Workflow overview - 3 mins
- Set up your environment - 5 mins
- Setup - 1 min
- Creating the constitution - 10 mins
- Feature specification - 3 mins
- Feature implementation - 1 min
- Feature validation -...

GitHub Markdown Keyboard Shortcuts

GitHub's web editor (used in issues, pull requests, comments, Markdown files, etc.) has built-in keyboard shortcuts that work in the standard Markdown editing mode:
- Ctrl + B: Inserts Markdown formatting for bold text
- Ctrl + I: Inserts Markdown formatting for italic text
- Ctrl + E: Inserts Markdown formatting for inline code (one-liner)
- Ctrl + K: Inserts Markdown formatting for a link
- Ctrl + Shift + 7: Inserts Markdown formatting for an ordered (numbered) list
- Ctrl + Shift + 8: Inserts Markdown formatting for an unordered (bullet) list
- Ctrl + Shift + .: Inserts Markdown formatting for a blockquote
- Ctrl + Shift + P: Toggles between Write and Preview tabs (in comments, issues, PRs, or the file editor)
- Ctrl + Enter: Submits the comment/form (in issue/PR/comment fields)
- Ctrl + V (with text selected): ...

This Week I Learned - Week 15 2026

This Week I Learned -
* A "Potato Prompt" used in Custom Instructions can make AI stop being a collaborator and start being a "Devil’s Advocate": "Whenever I type the word 'Potato' followed by an idea or argument, I want you to ignore your 'helpful' persona. Instead, act as a Hostile Critic. Your only job is to find the 'holes' in my logic. Point out three specific ways my argument could fail, two assumptions I’m making without proof, and one counter-argument I haven't addressed. Do not be polite; be precise."
* Yann LeCun and his team have developed LeWorldModel, the first stable model built with his Joint Embedding Predictive Architecture (JEPA). Their aim is to create models that go beyond just predicting words, focusing instead on truly understanding the world and how it functions.
* Distillation is a technique where an older "teacher" AI model is used to train a newer "student" model that replicates the capabilities of the earl...
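The distillation idea in the last bullet can be sketched numerically: the student's output distribution is nudged toward the teacher's temperature-softened distribution, typically by minimizing KL divergence. A minimal pure-Python illustration with toy logits (no real models; the temperature and KL recipe are the standard textbook formulation, not details from the post):

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw scores into a probability distribution;
    # higher temperature -> softer (more uniform) distribution.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how far the student's distribution q is
    # from the teacher's distribution p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy logits over a 3-token vocabulary.
teacher_logits = [4.0, 1.0, 0.5]
student_logits = [2.0, 1.5, 1.0]

T = 2.0  # temperature softens both distributions, exposing "dark knowledge"
teacher_probs = softmax(teacher_logits, T)
student_probs = softmax(student_logits, T)

# The distillation loss term the student would minimize during training.
loss = kl_divergence(teacher_probs, student_probs)
print(f"distillation KL loss: {loss:.4f}")
```

During real training this loss would be backpropagated through the student's parameters over many batches; here it just shows what "replicating the teacher's capabilities" is measured against.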

A ₹40 Fix That Brought My PC Back to Life

Of all the computing devices I use, my custom-built desktop PC is the most comfortable to work on, and it offers great flexibility for upgrades. I collaborated with a hardware expert to have it configured some seven years ago. After all these years, the time on that Windows PC kept going out of sync and behaved erratically. My AI assistant informed me that this was because the CR2032 CMOS battery was dying. A few years ago, I wouldn't have dared to meddle with the hardware, but AI assistants and DIY YouTube videos gave me the courage to experiment. I learnt how to replace a CR2032 battery, bought one at a neighborhood hardware store for Rs 40, and replaced it in 5 minutes, giving my PC a new lease of life. YouTube video on how to change CR2032 batteries on the most common styles of battery sockets: the socket on my PC's motherboard is described at the 2:43 mark. I now open up the PC cabinet from time to time to clean the insides, admire the components that kee...

Uncle Bob vs. Grady Booch: Rethinking Code Reviews in the Age of AI

In response to a question about the feasibility of effective code reviews for large (e.g., 500-line) AI-generated PRs like those from Claude, especially when reviewers lack deep codebase familiarity in new projects or fast-paced environments, Uncle Bob Martin and Grady Booch offer contrasting views. Uncle Bob Martin advocates metrics-based oversight (test coverage, complexity, dependencies) and higher-level management over line-by-line AI code review, while Grady Booch stresses manual verification for vulnerabilities, dead code, and performance factors. Uncle Bob Martin: "I don’t review code written by agents. I measure things like test coverage, dependency structure, cyclomatic complexity, module sizes, mutation testing, etc. Much can be inferred about the quality of the code from those metrics. The code itself I leave to the AI. Humans are slow at code. To get productivity we humans need to disengage from code and manage from a higher level." Grady Booch: "Unlike B...
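One of the metrics Uncle Bob mentions, cyclomatic complexity, can be computed mechanically rather than by reading every line. Here's a rough sketch using Python's standard-library ast module - a simplified decision-point count for illustration, not a faithful implementation of McCabe's full metric or of any particular tool:

```python
import ast

# Node types treated as adding a decision point (simplified view of
# McCabe's cyclomatic complexity: 1 + number of branch points).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity of a Python snippet."""
    tree = ast.parse(source)
    complexity = 1  # straight-line code has complexity 1
    for node in ast.walk(tree):
        if isinstance(node, BRANCH_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # each extra and/or operand adds a path
            complexity += len(node.values) - 1
    return complexity

snippet = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0 and d != n:
            return "composite"
    return "prime-ish"
"""
print(cyclomatic_complexity(snippet))  # prints 5
```

Running checks like this over an agent's PR is the kind of "manage from a higher level" oversight the quote describes: the number flags hotspots worth a closer look without a human reading every generated line.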

10 Tips to Avoid Claude Usage Limits

Summary of an article on X by kaize -
1. Edit your prompt instead of sending a follow-up, because token cost per message = all previous messages + your new one.
2. Start a fresh chat every 15-20 messages: when a chat gets long, ask Claude to summarize everything, copy it, open a new chat, and paste the summary as the first message.
3. Batch your questions into one message for fewer context reloads (which cost tokens).
4. Upload recurring files to Projects - cached project content doesn't eat into your usage.
5. Set up Memory & User Preferences - save your role, communication style, and settings once; Claude will automatically apply them to every new chat.
6. Turn off features (like Web search, Connectors, "Explore" mode, "Search and Tools", "Advanced Thinking") you're not actively using.
7. Use Haiku for simple tasks.
8. Spread your work across the day, as Claude uses a rolling 5-hour window.
9. Work during off-peak hours. Anything outside of peak hours: 5:00 AM to 11:00 ...
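Tip 1's arithmetic is easy to see with a toy simulation: each follow-up re-sends the whole conversation history, so total tokens grow quadratically with turn count, while editing the same prompt in place keeps the history short. A rough sketch (the per-message token counts are made-up figures, not Claude's actual accounting):

```python
def tokens_for_conversation(message_tokens):
    # Each turn re-sends the entire prior history plus the new message,
    # so the cost of turn i is the running sum of messages 1..i.
    total = 0
    history = 0
    for t in message_tokens:
        history += t
        total += history
    return total

# Five follow-up messages of ~200 tokens each...
follow_ups = tokens_for_conversation([200] * 5)

# ...versus editing one 200-token prompt five times: in the simplest
# case each attempt re-sends only that single message.
edits = 200 * 5

print(follow_ups, edits)  # 3000 vs 1000
```

Even in this tiny example, follow-ups cost 3x what edits do, and the gap widens as the chat grows.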

This Week I Learned - Week 14 2026

This Week I Learned -
* Since A.I. coding tools from Anthropic, OpenAI, Cursor and other companies took off last year, one result has now become apparent: code overload. - NYT
* GitHub platform activity is surging. There were 1 billion commits in 2025. Now it's 275 million per week, on pace for 14 billion this year if growth remains linear. - Kyle Daigle, COO, GitHub
* At tech companies like Meta and Shopify, managers have started to factor A.I. use into performance reviews, rewarding workers who make heavy use of A.I. tools and chastening those who don't. It has created an expensive new status game, known as "tokenmaxxing," among A.I.-obsessed workers who are desperate to prove how productive they are. - NYT
* OpenAI's agentic coding tool, Codex, has tripled its weekly active users since the start of the year. Overall Codex use, measured in tokens, has increased fivefold. Google's A.I. models processed more than 1.3 quadrillion tokens a month in 2025.
* AI companies char...
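The "on pace for 14 billion" figure in the GitHub bullet is a straight linear extrapolation of the weekly rate, easy to sanity-check:

```python
weekly_commits = 275_000_000  # commits per week, per the quote
weeks_per_year = 52

# Linear extrapolation: assumes the current weekly rate holds all year.
annual_pace = weekly_commits * weeks_per_year
print(f"{annual_pace:,}")  # 14,300,000,000 -> roughly 14 billion
```

So the rounded "14 billion" claim checks out, with the stated caveat that it only holds "if growth remains linear".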

Before AI Made Small Teams Cool: WhatsApp's Efficiency Playbook

Gergely Orosz chats with Jean Lee, who joined WhatsApp as its 19th engineer when it was still a small company with barely any formal processes. She played a key role in scaling it to hundreds of millions of users, experienced the $19B acquisition by Facebook, and later continued her career at Meta. Here are the standout sound bites and interesting facts from the talk: Why Jean got into tech: "After talking to a lot of adults, I realized people who are in tech were the only ones who were really excited about their jobs. So in Silicon Valley, when you ask people, like, tell me about your work, people are often very hopeful for the future and very proud of what they're building. Compared to many other adults that I spoke with, they were not so encouraging. They're like, 'Oh, don't become an architect. Don't become a designer.'" On process: "We didn't have code reviews... The only time I got my code reviewed was the first time I made a commit....

100% Secure. 0% Accessible.

Cartoon co-created with ChatGPT. See more of my AI co-creations

21–32% of Cloud Spend Wasted – The Case for FinOps

Industry reports put average cloud waste at between 21% and 32% of organizations' cloud budgets, with some analyses showing even higher figures as AI workloads drive unexpected spikes in spending. This "cloud shock" happens because cloud usage is elastic and decentralized, yet traditional finance and IT processes treat it like fixed capital expenditure. The result: over-provisioned resources, idle instances, forgotten workloads, and poor alignment between engineering speed and business value. Cloud cost management is the process of tracking, optimizing, and managing cloud computing costs. Cloud cost management and FinOps (Financial Operations) are terms often used interchangeably, but there are some key differences between them. Cloud cost optimization narrows its focus to reducing expenses. In contrast, FinOps casts a broader net, encompassing not only cost optimization but also financial management aspects like budgeting, forecasting, and insightful reporting. What is FinOps? FinO...
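To make the 21-32% range concrete, here's a quick sketch of what it implies for a hypothetical cloud bill (the $1M annual budget is an invented example for illustration, not a figure from any report):

```python
def wasted_spend(annual_budget, waste_rate):
    """Dollars lost to idle, over-provisioned, or forgotten resources."""
    return annual_budget * waste_rate

budget = 1_000_000  # hypothetical $1M annual cloud budget
low = wasted_spend(budget, 0.21)   # low end of the reported range
high = wasted_spend(budget, 0.32)  # high end of the reported range

print(f"estimated waste: ${low:,.0f} - ${high:,.0f} per year")
```

In other words, a team spending $1M a year may be burning $210k-$320k on resources delivering no business value, which is the gap FinOps practices aim to close.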

This Week I Learned - Week 13 2026

This Week I Learned -
* Anthropic's Claude Code, a closed-source AI coding CLI tool, leaked ~512,000 lines of TypeScript source code on March 31, 2026, via an exposed source map in its npm package, revealing internal architecture, 44 feature flags, and 20 unreleased features; the company responded with DMCA takedowns on original copies. A developer quickly rewrote the codebase in Python using OpenAI's Codex, creating a functional derivative hosted on GitHub that evades copyright claims, amassing 29k stars and 40k forks in hours as an educational open-source alternative. This incident underscores AI's role in accelerating code replication, challenging traditional IP protections for software; Anthropic may overlook enforcement to avoid precedents that could restrict LLM training or generation of similar derived works.
* Andrej Karpathy compares LLMs to probabilistic CPUs that handle tokens statistically, in contrast to the traditional deterministic computation based on byte...