
This Week I Learned - Week 8 2026

* Brian Moore prompted nine different AI models to programmatically generate World Clocks, showing how some of them struggle with the rendering. Every minute, a new clock is displayed, and each model is allowed 2000 tokens to generate its clock.
* 2025 State of AI
* India’s first sovereign LLM: Sarvam AI's 30B-parameter model is pre-trained on 16 trillion tokens and supports a 32,000-token context length, enabling long conversations and agentic workflows while keeping inference costs low thanks to fewer activated parameters. It is a mixture-of-experts (MoE) model with just 1 billion activated parameters, meaning that generating each output token uses only 1B of the model's weights (see the sketch after this list).
* Alibaba Group's Qwen3.5-397B-A17B is a new open-weight multimodal model built to be faster, cheaper, and more agent-capable than its predecessors. It combines text, image, and video processing in a single architecture and uses a mixture-of-experts design...
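The "activated parameters" figure in the Sarvam and Qwen items comes from sparse mixture-of-experts routing: a small gating network sends each token to only a handful of expert sub-networks, so most of the model's weights sit idle for any single token. Below is a minimal PyTorch sketch of that idea; the layer sizes, expert count, and top-k value are illustrative assumptions, not the actual configuration of either model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoELayer(nn.Module):
    """Toy sparse MoE layer: each token is routed to only top_k experts,
    so most expert parameters stay inactive for any given token."""

    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
            )
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, d_model)
        scores = self.router(x)                         # (num_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # pick top_k experts per token
        weights = F.softmax(weights, dim=-1)            # normalize the chosen scores
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out


layer = SparseMoELayer()
tokens = torch.randn(10, 64)  # 10 token embeddings
print(layer(tokens).shape)    # torch.Size([10, 64]); only 2 of 8 experts ran per token
```

With 8 experts and top-2 routing, roughly a quarter of the expert weights are exercised per token, which is why a 30B-parameter MoE can have on the order of 1B activated parameters and correspondingly lower inference cost.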

What's the Ratio for 'Shared Responsibility'?

Cartoon co-created with Grok. See more of my AI co-creations.