This Week I Learned - Week 11 2026


* Meta has acquired Moltbook, the viral "social network for AI agents" where bots (many powered by OpenClaw) interact while humans observe.

* Meta acquired Manus, the AI agent startup, for approximately $2 billion.

* OpenClaw’s creator, Peter Steinberger, has joined OpenAI.

* The OpenClaw project has been moved to an independent open-source foundation to ensure it remains free and accessible under its MIT license.

The Batch -

- The U.S. military uses AWS to run the unclassified version of Anthropic Claude.

- Claude is integrated with Maven Smart System (MSS), a system for targeting and logistics built by Palantir.

- Claude/MSS played a role in the January operation that captured Venezuelan president Nicolás Maduro, but the actions in Iran are its first use in “major war operations.”

- The Qwen3.5 family of open-weights vision-language models includes impressive larger models as well as a smaller one that outperforms an OpenAI open-weights model 10 times its size.

- Vision-language models with reasoning capability that are small enough to run locally mean reduced cost, better privacy, and new vistas for vision-language applications.

- Multimodal models typically use different tokenizers to embed different media types, and different encoders when training to generate media rather than classify it. 

- Image generation models use encoders (like VAEs or VQ-VAEs) that preserve visual details (is the cat’s/ball’s surface orange?) but discard semantics (is it a cat or a ball?), and therefore don’t recognize objects as well as classification models. Image classification models, on the other hand, use encoders (like CLIP or SigLIP) that capture types of objects (say, “cat” or “ball”) but miss visual details, so they are worse at generation. 
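The trade-off above can be sketched in a few lines of toy Python. This is NOT real VAE or CLIP code; the two functions and the "prototype" vectors are invented purely to mimic what each encoder family keeps and discards:

```python
# Toy sketch of the encoder trade-off: a generation-style (VAE-like) encoder
# keeps lossy pixel detail, a classification-style (CLIP-like) encoder keeps
# only the semantic class. All names/values here are illustrative assumptions.

CAT_PROTOTYPE = [0.9, 0.8, 0.1, 0.2]    # hypothetical "orange cat" patch
BALL_PROTOTYPE = [0.9, 0.7, 0.2, 0.1]   # hypothetical "orange ball" patch

def vae_style_encode(pixels):
    """Lossy pixel compression: average adjacent pixels into a half-size
    latent that preserves appearance (is the surface orange?)."""
    return [(pixels[i] + pixels[i + 1]) / 2 for i in range(0, len(pixels), 2)]

def vae_style_decode(latent):
    """Upsample the latent back to pixels: blurry, but color-faithful."""
    return [v for v in latent for _ in range(2)]

def clip_style_encode(pixels):
    """Map pixels to the nearest semantic concept, discarding detail."""
    protos = {"cat": CAT_PROTOTYPE, "ball": BALL_PROTOTYPE}
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(protos, key=lambda c: dist(pixels, protos[c]))

image = [0.9, 0.8, 0.1, 0.2]            # an "orange cat" patch

recon = vae_style_decode(vae_style_encode(image))
label = clip_style_encode(image)

# The VAE-style path preserves overall color/brightness (good for generation),
# but its latent carries no class information.
assert abs(sum(recon) - sum(image)) < 1e-9
# The CLIP-style path recognizes the object (good for classification),
# but the single token "cat" cannot reconstruct the pixels.
assert label == "cat"
```

The point of the toy: each code path can serve only one of the two tasks well, which is exactly the gap a unified visual tokenizer has to close.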

- A major innovation of large language models is their use of a single tokenizer for all language inputs, whether code, dialogue, tables, or books. This generality eases a model's ability to transfer knowledge from one data source to another during training: when models get better at understanding or generating text, they get better at code, too.
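A minimal sketch of "one tokenizer for everything": real LLM tokenizers use byte-level BPE with learned merges, but even this bare byte-level toy shows how prose and code land in the same shared ID space:

```python
# Toy byte-level tokenizer (no BPE merges): every UTF-8 string, whether
# prose, code, or a table, maps into one shared vocabulary of 256 byte IDs.

def tokenize(text: str) -> list[int]:
    """Encode any text into byte IDs from a single shared vocabulary."""
    return list(text.encode("utf-8"))

def detokenize(ids: list[int]) -> str:
    """Invert tokenization losslessly."""
    return bytes(ids).decode("utf-8")

code = "def add(a, b): return a + b"
dialogue = "Sure, I can help with that!"

# Same tokenizer, same ID space, for both kinds of input.
assert detokenize(tokenize(code)) == code
assert detokenize(tokenize(dialogue)) == dialogue
assert set(tokenize(code)) | set(tokenize(dialogue)) <= set(range(256))
```

Because both inputs share one vocabulary, anything the model learns about token co-occurrence in one domain is directly reusable in the other, which is the transfer effect described above.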

- Apple's AToken offers a similar generality for vision models, particularly when it comes to 2D and 3D objects. 

- AToken is a transformer model with an all-purpose visual tokenizer. The new model can both generate and classify images, videos, and 3D assets, approaching the performance of specialized models for each of these input and output types.

- Chip makers typically examine new models ahead of their release to make sure inference runs efficiently on their hardware. 

- Member nations of the Gulf Cooperation Council, an economic union and military alliance that includes Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates, host 2.0 gigawatts of data center capacity, with an additional 0.4 gigawatts planned.

S Anand's advice - Record your prompts, run post-mortems, and distill them into SKILLS.md files for reuse.

Human-as-an-Interface -

  • Consultants look at your watch and tell you the time.
  • Coaches make you talk yourself out of your problems.
  • Wealth managers invest your money worse than index funds.

This happens because:

  • You’re buying convenience. “I don’t have to think or worry about it.”
  • You’re buying insurance. “The consultant said so.” “The auditor signed off.”
  • You’re buying status. “I can afford a butler.”

Also, we’re biased. Costly feels better than cheap (e.g. watches). Action feels better than inaction (e.g. mutual funds). 

Relationships, Strategy, and “Brains” will be the moat in the age of AI.

Datatype is an OpenType variable font that turns simple text expressions into inline charts. No JavaScript, no images, no rendering library — just type the syntax and Datatype's ligature substitution does the rest.

Cities in fiction is an ongoing experiment in creating a literary database to archive real-world places in fiction.

Flavio Copes' The Valley of Code has useful tutorials on Web development. Long ago, he also compiled a list of fun web app ideas.

The Encyclopedia of Indian Food Ingredients is a foundational component of the Indian Food Informatics Data (IFID) project, conceived as Digital Public Infrastructure (DPI) for the Indian food ecosystem. It provides a standardized taxonomy and multi-format dataset (JSON, Markdown, LaTeX) covering 600+ food components, spanning traditional Ayurvedic botanicals to modern industrial additives.

* This CEEW heat risk map really drives home how extreme heat is slamming India – 57% of districts (home to 76% of the population) are at high to very high risk! 

* Sensex and Nifty crashed 3.26% in their biggest fall since June 2024.

* In India, there are now 149 million people aged 60 or above, making up 10.5% of the population.

* "I think fitness is such a multi-meaning word. For me, it’s about wellness. There is no diet plan. If you are following a diet, then a cheat day is required and that knocks you back... when we purchase things, we do it with a clear intent. And the intent is to consume clean, natural, fresh food. We don’t have any sugar in our house." - Jonty Rhodes

* "The essence of the scientific spirit is to realize what a wonderful world it is that we live in." - Sir C.V. Raman

* "I would say that the truest part of education is to cultivate a love of the beautiful, and if you have not that in you, you are not educated." - Sir C.V. Raman, 1948 Patna University Convocation speech
