Uncle Bob vs. Grady Booch: Rethinking Code Reviews in the Age of AI

In response to a question about whether effective code review is feasible for large (e.g., 500-line) AI-generated PRs, such as those produced by Claude, especially when reviewers lack deep codebase familiarity in new projects or fast-paced environments, Uncle Bob Martin and Grady Booch offered contrasting views.

Uncle Bob Martin advocates metrics-based oversight (test coverage, complexity, dependency structure) and managing from a higher level rather than reviewing AI code line by line, while Grady Booch stresses manual review to catch vulnerabilities, dead code, and missed refactorings that affect performance.

Uncle Bob Martin: "I don’t review code written by agents. I measure things like test coverage, dependency structure, cyclomatic complexity, module sizes, mutation testing, etc. 

Much can be inferred about the quality of the code from those metrics. The code itself I leave to the AI. 

Humans are slow at code. To get productivity we humans need to disengage from code and manage from a higher level."
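To make Bob's position concrete, here is a minimal sketch of the kind of automated metric a reviewer might gate on instead of reading the code itself. It approximates cyclomatic complexity by counting branch points in a Python function's AST using only the standard library. This is an illustration of the general technique, not Bob's actual tooling; the threshold and the set of counted node types are assumptions.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: 1 plus the number of
    decision points (if/for/while/ternary/except) in the source."""
    tree = ast.parse(source)
    complexity = 1  # a single path through straight-line code
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.IfExp, ast.ExceptHandler)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # each `and`/`or` operand beyond the first adds a branch
            complexity += len(node.values) - 1
    return complexity

snippet = """
def classify(x):
    if x < 0:
        return "neg"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            print(i)
    return "done"
"""
print(cyclomatic_complexity(snippet))  # prints 5
```

A CI job could run a check like this over an AI-generated PR and fail it when any function exceeds a chosen complexity budget, which is the "manage from a higher level" stance in practice: the human sets the thresholds, the pipeline enforces them.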

Grady Booch: "Unlike Bob, I review all code generated by agents.

Test coverage and similar metrics will give me confidence of functionality, but they offer me no confidence whatsoever that those agents have not introduced vulnerabilities, that they have not introduced dead code that will diminish understandability in the future, that they have missed factorizations that would have significant impact upon performance.

Trust but verify.

As an experienced developer, I know the smell of what is good and what is not.

And no agent has either the experience or the context to know those things.

If you want to be sloppy and fast then I suggest you proceed with Bob’s advice."
