This Week I Learned

* SLMs (small language models) are generally under 12B parameters and can outperform larger models on specific agentic tasks such as RAG, tool calling, structured decoding, and programmatic tool use.
* A new study by researchers from Stanford University and Carnegie Mellon University finds that leading AI models are about 50% more sycophantic than humans, affirming users' actions even when they involve manipulation or harm. The study introduces the term "social sycophancy" – a form of AI behaviour that flatters a person's self-image or actions instead of being factual. This kind of subtle affirmation, experts argue, poses deeper psychological and social risks than mere factual errors. Across 11 widely used large language models (LLMs), including those from OpenAI, Anthropic, Google, Meta and Mistral, the researchers found that AI systems consistently validated user behaviour more readily than human advisers. When presented with moral or relatio...
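To make the first bullet concrete, here is a minimal sketch of the tool-calling pattern it refers to: the model emits a structured JSON tool call, and the host program parses and dispatches it. The model is stubbed out here, and the tool names and schema are purely illustrative assumptions, not any specific SLM's API.

```python
import json

# Hypothetical tool registry; names and signatures are illustrative.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
    "upper": lambda args: args["text"].upper(),
}

def fake_slm(prompt: str) -> str:
    """Stand-in for a small model whose output is constrained
    (via structured decoding) to valid JSON tool calls."""
    return json.dumps({"tool": "add", "args": {"a": 2, "b": 3}})

def dispatch(model_output: str):
    call = json.loads(model_output)   # parse the structured output
    tool = TOOLS[call["tool"]]        # look up the requested tool
    return tool(call["args"])         # execute with the model's arguments

result = dispatch(fake_slm("What is 2 + 3?"))
print(result)  # 5
```

The point of structured decoding in this loop is that the model's output is guaranteed parseable, so a small model that reliably fills a fixed schema can substitute for a much larger one on this narrow task.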