Programming, Fluency, and AI

Are you using AI as a crutch?

By Mike Loukides
July 9, 2024
Painting on a wall of a girl blowing question marks (source: Matthew Paul Argall on Flickr)

It’s clear that generative AI is already being used by a majority—a large majority—of programmers. That’s good. Even if the productivity gains are smaller than many think, 15% to 20% is significant. Making it easier to learn programming and begin a productive career is nothing to complain about either. We were all impressed when Simon Willison asked ChatGPT to help him learn Rust. Having that power at your fingertips is amazing.

But there’s one misgiving that I share with a surprisingly large number of other software developers. Does the use of generative AI increase the gap between entry-level junior developers and senior developers?


Generative AI makes a lot of things easier. When writing Python, I often forget to put colons where they need to be. I frequently forget to use parentheses when I call print(), even though I never used Python 2. (Very old habits die very hard; there are many older languages in which print is a command rather than a function call.) I usually have to look up the name of the pandas function to do, well, just about anything, even though I use pandas fairly heavily. Generative AI, whether you use GitHub Copilot, Gemini, or something else, eliminates that problem. And I’ve written that, for the beginner, generative AI saves a lot of time, frustration, and mental space by reducing the need to memorize library functions and arcane details of language syntax, which are multiplying as every language feels the need to catch up to its competition. (The walrus operator? Give me a break.)
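
For concreteness, here’s the kind of syntax detail this paragraph is talking about. A minimal sketch (my examples, not code from the article): in Python 3, print is a function, and the walrus operator (:=) assigns a value inside an expression.

    import random

    # Python 3: print is a function, so the parentheses are mandatory.
    print("hello")
    # In Python 2 (and many older languages), print was a command, not a call:
    #     print "hello"    # SyntaxError in Python 3

    # The walrus operator (:=), added in Python 3.8, assigns inside an expression:
    while (roll := random.randint(1, 6)) != 6:
        print(f"rolled a {roll}, trying again")
    print("rolled a 6!")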

There’s another side to that story though. We’re all lazy and we don’t like to remember the names and signatures of all the functions in the libraries that we use. But is not needing to know them a good thing? There’s such a thing as fluency with a programming language, just as there is with human language. You don’t become fluent by using a phrase book. That might get you through a summer backpacking through Europe, but if you want to get a job there, you’ll need to do a lot better. The same thing is true in almost any discipline. I have a PhD in English literature. I know that Wordsworth was born in 1770, the same year as Beethoven; Coleridge was born in 1772; a lot of important texts in Germany and England were published in 1798 (plus or minus a few years); the French Revolution began in 1789. Does that mean something important was happening? Something that goes beyond Wordsworth and Coleridge writing a few poems and Beethoven writing a few symphonies? As it happens, it does. But how would someone who wasn’t familiar with these basic facts think to prompt an AI about what was going on when all these separate events collided? Would you think to ask about the connection between Wordsworth, Coleridge, and German thought, or to formulate ideas about the Romantic movement that transcended individuals and even European countries? Or would we be stuck with islands of knowledge that aren’t connected, because we (not the AIs) are the ones who have to connect them? The problem isn’t that an AI couldn’t make the connection; it’s that we wouldn’t think to ask it to make the connection.

I see the same problem in programming. If you want to write a program, you have to know what you want to do. But you also need an idea of how it can be done if you want to get a nontrivial result from an AI. You have to know what to ask and, to a surprising extent, how to ask it. I experienced this just the other day. I was doing some simple data analysis with Python and pandas. I was going line by line with a language model, asking “How do I” for each line of code that I needed (sort of like GitHub Copilot)—partly as an experiment, partly because I don’t use pandas often enough. And the model backed me into a corner that I had to hack myself out of. How did I get into that corner? Not because of the quality of the answers. Every response to every one of my prompts was correct. In my postmortem, I checked the documentation and tested the sample code that the model provided. I got backed into the corner because of the one question I didn’t know that I needed to ask. I went to another language model, composed a longer prompt that described the entire problem I wanted to solve, compared this answer to my ungainly hack, and then asked, “What does the reset_index() method do?” And then I felt (not incorrectly) like a clueless beginner—if I had known to ask my first model to reset the index, I wouldn’t have been backed into a corner.
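
The article doesn’t show the actual analysis, but here’s a minimal sketch of the kind of corner involved (the DataFrame and its columns are hypothetical stand-ins): a groupby() aggregation moves the grouping key into the result’s index, and reset_index() turns it back into an ordinary column.

    import pandas as pd

    # Hypothetical data standing in for the analysis described above.
    df = pd.DataFrame({
        "city": ["NYC", "NYC", "LA", "LA"],
        "sales": [10, 20, 5, 15],
    })

    # groupby() moves the grouping key ("city") into the index of the result...
    totals = df.groupby("city")["sales"].sum()
    print(totals.index.name)     # 'city' -- it's now an index, not a column

    # ...so any later step that expects "city" as a column will fail until you
    # call reset_index(), which turns the index back into a regular column.
    totals = totals.reset_index()
    print(list(totals.columns))  # ['city', 'sales']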

You could, I suppose, read this example as “see, you really don’t need to know all the details of pandas, you just have to write better prompts and ask the AI to solve the whole problem.” Fair enough. But I think the real lesson is that you do need to be fluent in the details. Whether you let a language model write your code in large chunks or one line at a time, if you don’t know what you’re doing, either approach will get you in trouble sooner rather than later. You perhaps don’t need to know the details of pandas’ groupby() function, but you do need to know that it’s there. And you need to know that reset_index() is there. I have had to ask GPT “Wouldn’t this work better if you used groupby()?” because I’ve asked it to write a program where groupby() was the obvious solution, and it didn’t. You may need to know whether your model has used groupby() correctly. Testing and debugging haven’t, and won’t, go away.
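
To make the groupby() point concrete, here’s a sketch (mine, not a program from the article) of the difference that knowing the function exists makes: the hand-rolled loop a model might produce if the prompt never mentions grouping, next to the idiomatic one-liner.

    import pandas as pd

    df = pd.DataFrame({
        "city": ["NYC", "NYC", "LA", "LA"],
        "sales": [10, 20, 5, 15],
    })

    # Without groupby(): a hand-rolled accumulation.
    totals = {}
    for _, row in df.iterrows():
        totals[row["city"]] = totals.get(row["city"], 0) + row["sales"]

    # With groupby(): the idiomatic version of the same computation.
    totals_idiomatic = df.groupby("city")["sales"].sum().to_dict()

    assert totals == totals_idiomatic    # {'LA': 20, 'NYC': 30}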

Why is this important? Let’s not think about the distant future, when programming-as-such may no longer be needed. We need to ask how junior programmers entering the field now will become senior programmers if they become overreliant on tools like Copilot and ChatGPT. Not that they shouldn’t use these tools: programmers have always built better tools for themselves, generative AI is the latest generation of tooling, and one aspect of fluency has always been knowing how to use tools to become more productive. But unlike earlier generations of tools, generative AI easily becomes a crutch; it could prevent learning rather than facilitate it. And junior programmers who never become fluent, who always need a phrase book, will have trouble making the jump to senior roles.

And that’s a problem. I’ve said, many of us have said, that people who learn how to use AI won’t have to worry about losing their jobs to AI. But there’s another side to that: People who learn how to use AI to the exclusion of becoming fluent in what they’re doing with the AI will also need to worry about losing their jobs to AI. They will be replaceable—literally—because they won’t be able to do anything an AI can’t do. They won’t be able to come up with good prompts because they will have trouble imagining what’s possible. They’ll have trouble figuring out how to test, and they’ll have trouble debugging when AI fails. What do you need to learn? That’s a hard question, and my thoughts about fluency may not be correct. But I would be willing to bet that people who are fluent in the languages and tools they use will use AI more productively than people who aren’t. I would also bet that learning to look at the big picture rather than the tiny slice of code you’re working on will take you far. Finally, the ability to connect the big picture with the microcosm of minute details is a skill that few people have. I don’t. And, if it’s any comfort, I don’t think AIs do either.

So—learn to use AI. Learn to write good prompts. The ability to use AI has become “table stakes” for getting a job, and rightly so. But don’t stop there. Don’t let AI limit what you learn and don’t fall into the trap of thinking that “AI knows this, so I don’t have to.” AI can help you become fluent: the answer to “What does reset_index() do?” was revealing, even if having to ask was humbling. It’s certainly something I’m not likely to forget. Learn to ask the big picture questions: What’s the context into which this piece of code fits? Asking those questions rather than just accepting the AI’s output is the difference between using AI as a crutch and using it as a learning tool.
