Simon Willison on AI and the Existential Dread of Superhuman Code Generation
In the first episode of The Pragmatic Engineer podcast (24 September 2024), Simon Willison, co-creator of Django and creator of Datasette (an open-source tool for exploring and publishing data), discusses his extensive experience using large language models (LLMs) for coding.
Willison shares his initial awe and subsequent apprehension about LLMs' potential to replace programmers, highlighting both the immense productivity gains and the ethical considerations involved.
He details his current LLM workflow, emphasising the importance of learning how to effectively prompt and utilise these tools rather than focusing solely on theoretical understanding.
Finally, he offers advice for both experienced and junior engineers on leveraging LLMs to enhance their productivity and future-proof their careers.
Highlights:
Gergely Orosz: I'm interested in how AI development helps your specific workflow and where it doesn't. You had a really interesting story with that, a proper "this is scary" moment. Can you talk about that?

Simon: I've definitely had a few of those. I think every programmer who works with these models, the first time it spits out like 20 lines of actually good code that solves your problem and does it faster than you would, there's that moment when you're like, "Hang on a second, what am I even for?"

But I had a bigger version of that with my main open-source project. I built this tool called Datasette, which is an interface for querying databases, analysing data, creating JSON APIs on top of data, all of that kind of stuff. And the thing I've always been trying to solve with that is I feel like every human being should be able to ask questions of databases. It's absurd that everyone's got all this data about them, but we don't give them tools that let them actually dig in and explore it, and filter it, and try to answer questions that way.

And then I tried this new feature of ChatGPT that they launched last year called Code Interpreter mode. This is the thing where you can ask ChatGPT a question, it can write some Python code, and then it can execute that Python code for you and use the result to continue answering your question. And Code Interpreter mode has a feature where you can upload files to it. So I uploaded an SQLite database file to it, just the same database files that I use in my own software, and I asked it a question. And it flawlessly answered it by composing the right SQL query, running that using the Python SQLite library, and spitting out the answer.

And I sat there looking at this thinking, on one hand this is the most incredible example of being able to ask questions of your data that I've ever seen, but on the other hand, what am I even for? I thought my life's purpose was to solve this problem, and this thing, this new tool, is solving my problem without even really thinking about it. They didn't mention, "Oh, it can do SQLite SQL queries" as part of what it does; it's just "Python". And that was a little bit of existential dread.
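To make the Code Interpreter story concrete, the Python it writes behind the scenes looks roughly like the sketch below. The file name, table, and query are hypothetical stand-ins, not details from the episode.

```python
import sqlite3

# Hypothetical example: "data.db" stands in for any uploaded SQLite file,
# and the query answers a made-up question (most common country in a table).
conn = sqlite3.connect("data.db")

# First inspect the schema, the way Code Interpreter does before writing a query.
tables = conn.execute(
    "SELECT name, sql FROM sqlite_master WHERE type = 'table'"
).fetchall()
for name, ddl in tables:
    print(name, ddl)

# Then compose and run an ordinary SQL query against the discovered tables.
rows = conn.execute(
    "SELECT country, COUNT(*) AS n FROM places GROUP BY country ORDER BY n DESC LIMIT 5"
).fetchall()
print(rows)
conn.close()
```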
- "Every programmer who works with these models the first time it spits out like 20 lines of actually good code that solves your problem and does it faster than you would, there’s that moment when you’re like hang on a second what am I even for." - This captures the initial "scary moment" many programmers experience when confronted with AI's coding prowess.
- "What am I even for? Like I thought my life’s purpose was to solve this problem and this thing, this new tool, is solving my problem without even really thinking about it." - This expresses the existential dread Willison felt when ChatGPT's code interpreter mode effortlessly solved a problem he had dedicated significant time and effort to.
- "The optimistic version, the version I take on, is I can use these tools better than anyone else for programming. I can take my existing programming knowledge and when I combine it with these tools I will run circles around somebody who’s never written a line of code in their life." - This encapsulates Willison's optimistic perspective on AI, viewing it as a "power user tool" that can significantly amplify a developer's capabilities.
- "I think every human being deserves to be able to automate dull things in their lives with a computer and today you almost need a computer science degree just to automate a dull thing in your life with a computer. That’s the thing which language models I think are taking a huge bite out of." - This highlights Willison's belief that AI has the potential to democratise programming, making it accessible to a wider audience.
- RAG (Retrieval Augmented Generation) is a technique where the user's question is used to search a relevant knowledge base. The retrieved information, alongside the question, is then fed to the LLM to generate a contextually grounded response (a minimal sketch of this loop follows after this list).
- Simon Willison uses the analogy, "RAG is the 'Hello, World!' of building software on top of LLMs," to convey the concept's accessibility and foundational role in LLM application development. Just as "Hello, World!" is often the first program a novice programmer learns, RAG represents a relatively simple yet powerful technique to leverage LLMs.
- "RAG, just like everything else in language models, is fractally interesting and complicated... It's simple at the top, and then each little aspect of it gets more and more involved the further you look."
- GitHub Copilot utilises a sophisticated RAG (Retrieval Augmented Generation) mechanism. When Copilot provides code suggestions, it doesn't simply analyse the code visible on your screen. Instead, it (1) examines nearby code within the same file, which provides immediate context for the suggestion, and (2) searches for semantically similar code in other files within your project, which lets Copilot offer suggestions that align with your overall project structure and coding patterns.
- This RAG implementation explains why Copilot can sometimes surprise you with suggestions that seem to "understand" your project at a deeper level. By drawing on a wider range of code from your project, Copilot can provide more contextually relevant and insightful suggestions.
- Willison points out that this RAG mechanism is largely undocumented. This lack of transparency can lead to misconceptions about how Copilot works and limits developers' ability to leverage its full potential.
- "If you want to learn how to prompt LLMs, the Anthropic Claude Prompting Guide is actually the best thing I've seen anywhere."
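To make the RAG description above concrete, here is a minimal retrieve-then-prompt sketch. The documents, the naive word-overlap scoring (standing in for the embedding or semantic search a real system would use), and the prompt wording are all illustrative assumptions, not from the episode.

```python
# A minimal, self-contained sketch of RAG: retrieve the most relevant documents
# for a question, then prepend them to the prompt that would be sent to an LLM.

KNOWLEDGE_BASE = [
    "Datasette is an open-source tool for exploring and publishing data.",
    "SQLite databases are single files that can be queried with SQL.",
    "GitHub Copilot suggests code based on context from your project.",
]

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question and keep the top k."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Combine the retrieved context and the user's question into one prompt."""
    context = "\n".join(f"- {d}" for d in documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

question = "What kind of tool is Datasette?"
prompt = build_prompt(question, retrieve(question, KNOWLEDGE_BASE))
print(prompt)  # In a real system this prompt would be sent to an LLM API.
```

The same retrieve-then-prompt shape underlies the Copilot behaviour described above, except that Copilot's retrieval step searches your project for semantically similar code rather than matching words against a small document list.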
Fun Facts:
- Willison tested early language models by trying to generate New York Times headlines for different decades.
- One of his tests for a new language model is to see if it can accurately explain the concept of "borrowing" in Rust programming.
- He refers to AI as his "weird intern".
- He uses ChatGPT's voice mode to have programming discussions while walking his dog.
- Willison has built an open-source command line tool called "llm" that allows you to pipe inputs into language models.
- He enjoys experimenting with AI's creative capabilities.
Willison's insights provide a valuable perspective on the evolving role of AI in software engineering, emphasising the importance of embracing these new tools while acknowledging the ethical concerns and potential anxieties they raise.