In response to a question about whether effective code review is feasible for large (e.g., 500-line) AI-generated PRs such as those from Claude, especially when reviewers lack deep codebase familiarity in new projects or fast-paced environments, Uncle Bob Martin and Grady Booch offer contrasting views. Uncle Bob Martin advocates metrics-based oversight (test coverage, complexity, dependencies) and managing from a higher level rather than line-by-line review of AI code, while Grady Booch stresses manual verification for vulnerabilities, dead code, and performance factors.

Uncle Bob Martin: "I don't review code written by agents. I measure things like test coverage, dependency structure, cyclomatic complexity, module sizes, mutation testing, etc. Much can be inferred about the quality of the code from those metrics. The code itself I leave to the AI. Humans are slow at code. To get productivity we humans need to disengage from code and manage from a higher level."

Grady Booch: "Unlike B...
There's also an attribute you can add at the top of your class. It's broken in VS 2005 Beta 2 for C#, but should work in the RC. Note that, for some reason, you have to prefix it with the full namespace to get it to work.
C#
[System.ComponentModel.DesignerCategory("code")]
public class Form1 : System.Windows.Forms.Form
VB
<System.ComponentModel.DesignerCategory("code")> Public Class Form1
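For context, here is a slightly fuller C# sketch of how the attribute is applied. The class body and constructor are illustrative additions, not from the original post; the key points from the post are the attribute itself and the fully qualified namespace.

```csharp
using System.Windows.Forms;

// DesignerCategory("code") tells Visual Studio to open this class in the
// code editor rather than the Windows Forms designer. Per the note above,
// the attribute must be written with its full namespace inline; a short
// form pulled in via a using directive may not be honored by the IDE.
[System.ComponentModel.DesignerCategory("code")]
public class Form1 : Form
{
    public Form1()
    {
        // Hypothetical initialization, just to show a plain code-only form.
        Text = "Code-only form";
    }
}
```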