AI wants to do your work. Ask it to teach you instead.

What happens when you ask AI to slow you down instead of speed you up.


I want to draw your attention to a couple of interesting AI-related posts. The first is software engineer Jordan Seiler’s “Reading is Like Pumping Iron:”

Reading, especially intensive reading, is a skill. Without deliberate practice and challenges, your ability to perform that skill will not improve. Grappling with difficult ideas improves your comprehension and intellectual capacity overall […] If you want to engage with ideas deeply and directly, you have to learn how to interact with and navigate difficult written material, because that’s where most of the ideas live!

Using LLMs to take notes and write synopses, he says, robs you of the opportunity to work through the process yourself, which means you get far less benefit than you would have if you'd done that work on your own.

When I was in school (ages ago, dinosaurs roamed, etc.), some kids in my trigonometry class always had the same two questions. “Can we use calculators? Do we have to show our work?” Every test, every homework assignment.

Ms. Andreeson always said “no” and “yes.” Every test, every homework assignment.

After the first “no,” they knew the answer of course. It was a passive-aggressive protest. They knew there was a faster, more accurate way to get the work done, and were frustrated Ms. Andreeson wouldn’t let them use it. They were optimizing for the production of the artifact: the lengths of the hypotenuses of twenty triangles.

Did Ms. Andreeson need thirty children in order to find the lengths of twenty hypotenuses? Of course not. What she needed was thirty children who understood triangles and could do math things with them. Getting that required deliberately slowing down.

I think as adults we make this mistake a lot, and we made it well before there were LLMs. Ms. Andreeson had the benefit of knowing that her work-product was supposed to be children who understand, but it’s a bit less obvious at the office. The worst project managers I ever worked for were the ones who didn’t understand the job — they were just there to check boxes and ask how long it would be until the next box could be checked. That wasn’t managing the project; it was being a human to-do app.

Much of knowledge work can’t be automated or delegated to an LLM because a critical, invisible artifact of knowledge work is creating someone who knows. When a manager asks for a report or a synopsis, they may think they are delegating the production of a physical artifact, and that once they have it, they will know. But what they are actually delegating is the knowing; what the work really produces is someone else who knows.

I think this basic misunderstanding explains the cliché of the out-of-touch CEO. The CEO is out of touch with the work, and that’s fine as long as they understand their role is marshaling the people who know.

As we move on to automating processes (and replacing people) with LLMs, it’s important to understand the distinction between an artifact that's a useful, externalized work product and an artifact that’s a by-product of making another human knowledgeable. I’m not confident that we do.


As an engineering leader, Hazel Weakly is well versed in making teams of people who know, and she sees the danger. She points out that humans learn and teach via process and mimicry, do best in groups, and can innovate (as a group) through rapid iteration — and much of this is what we’re replacing when we ask AI to do our work for us.

We’re taking the one thing humans are good at and making AI do it. But AI is bad at it! Even worse: if humans get bad at it then we’ve lost the one thing we had going for us as a species!

Which means we end up deskilling humans faster than we improve AI, and the humans can’t improve the AI because we’re no longer feeding AI the high quality data it can use to augment human excellence. It’s a self-reinforcing feedback loop… Spiraling rapidly downwards into ineffective systems.

Stop Building AI Tools Backwards

Where Weakly sees a lot of hazard in asking AI to do our work, she sees a lot more potential in using AI to help make knowers. It’s not about placing AI in a supportive role like a personal assistant or a junior developer. It’s about encouraging it to act more like a teacher or a guide.

Instead, as an analogy, I like to imagine AI as an “absent-minded instructor”, not as a coworker. It’s prone to forgetting details, but ultimately there to guide you; most importantly, the goal of the instructor is to make sure you learn and learn how to learn!

Stop Building AI Tools Backwards

This approach has entirely changed how I think about and use LLMs, even when using them to code. It is slower than just telling them to do stuff, and that will disappoint anyone looking for max-speed-now, damn-the-consequences. But it is significantly more valuable to me, since my value on the employment market is increased substantially by knowing stuff.

Also it’s more satisfying than watching a robot work and clicking “accept” constantly.

I stumbled into this kind of interaction a week ago. I’ve promised a couple of times already to write a post about why so-called “natural language programming” will be a disappointment if and when we ever get there, but I haven’t written that post yet.

In fact, because the argument exists at the intersection of literary theory, philosophy of language, Agile practice, and specification writing, I am beginning to be concerned that it might take more than 800 words. I was really struggling to get a handle on where to start, and decided to give Claude a try.

First, I started by telling Claude what role to play.

Me: Speak as an expert in literary theory and philosophy of language.

This is prompting 101: a focusing tool that puts a thumb on the scale (so to speak) in favor of a certain mode of speaking. Realizing that experts speak in expert-talk, I felt I needed to get it to pitch responses at my level.

Me: I am an experienced computer programmer with an undergraduate degree in English, with a special focus on literary theory. I am broadly familiar with subjects like deconstructionism and structuralism, debates over authorial intent, and so forth, but my understanding and awareness of concepts is thirty years out of date and pretty rusty. Frame your responses at that undergraduate level with special attention to newer concepts.

My plan going into the conversation was to start asking Claude to help me find a few relevant sources and possibly help me outline a post. Instead, Claude saw an opening and went for it.

Claude: You'd have left off around the height of the "Theory wars," when deconstruction was still dominant in English departments, and the linguistic turn in philosophy was well-established. You'd know your Derrida, Foucault, Barthes, de Man, and probably encountered reader-response theory and New Historicism.
Since then, several significant developments…

The ensuing conversation was full of references to newer schools of thought and their main proponents. It started citing papers at me (which do exist, I have them now) and explaining how each of these was relevant to the questions I was raising. Claude helped by pointing out relationships between Agile thinkers and philosophy of language.

I spent much of that morning engaged in a conversation that extracted a lot more of what I remembered from college, focused my thinking considerably, and then grounded my mostly vague and disconnected thoughts in millennia-old discourse about what it means to communicate.

It felt just as Weakly described: a supportive professor helping me focus my thinking and pointing out where my unstructured and ungrounded thoughts intersected with a wider body of academic literature.

At the end, I had several months of papers to read. Fortunately for me, I like reading papers. If I didn’t, I’d never have ended up in literary theory to begin with.

I’d much rather have a supportive professor than an AI pretending to be one. But to have access to the academic community you pretty much need to be in the academic community, and the cost (in both tuition and lost salary) is too much to bear. I don’t feel like Claude is doing my research for me here; it’s providing the starting point — and I can go from there.

Weakly isn’t the only person to identify that AI can operate as a reasonable stand-in for an instructor. Dr. Vaughn Tan of University College London has been thinking along the same lines.

Most educational AI tools treat students as passive consumers of machine-generated content. Students type prompts into ChatGPT or Claude, receive polished outputs, and submit work they barely understand and have little basis for evaluating.

Tan, like Weakly, is interested in leveraging the things AI does well in order to help humans do what they do well, which means something quite different from offsetting the labor of making reports and taking notes. Instead, he’s been working on an AI tool to teach critical thinking, which, Tan says, is a real departure from the usual model.


This is more evidence to me that there’s quite a lot of value in LLMs, just not necessarily in the ways they’ve been marketed to us. We need tools that help us think better, not work faster, and to the extent that LLMs do the latter and not the former, we end up with a worse work-product and humans who gained very little in the doing. That’s not the fault of the LLM, though. It’s how we’ve chosen to deploy it.

The temptation to have our work done for us is pretty strong and predates LLMs. Just like the kids in my trig class, we regularly confuse having a physical artifact (“lend me your notes from class?”) with the value of having processed the information ourselves. Seiler reminds us that taking notes and rewriting ideas in our own words isn't about having a document somewhere; it’s about thinking things through.

Left to their own devices, Claude (and ChatGPT, and the others) are really eager to be helpful. They do this by giving us the answer, then answers to questions we haven’t even asked, and then validating our observations, whatever they are. I hate that, but there is another way.

This is what I do: I ask Claude to let me keep my hands on the work and do the thinking myself, and instead to support me by pointing out things I may have missed and helping me see where my thinking fits in the larger scheme of things.

At the time of this writing, Claude has a way you can do this without explaining yourself in every conversation. In settings, there's a field labeled “What personal preferences should Claude consider in responses?” This acts like the grounding prompts I started with in my first conversation. Mine starts with my personal background, so Claude knows what to relate things to:

I am primarily a front-end web developer, but I have a background in English literature and literary theory… I also have Scrum certifications (CSM, A-CSM, CSPO) and am very aligned with Agile principles. I have a significant interest in the intersection of AI-assisted engineering, philosophy of language, and modes of communication.

And then I set the ground rules:

Your default mode: Collaborative learning with grounding

  • Ask about my approach before suggesting solutions
  • Break problems into steps I can think through

When explaining concepts or introducing ideas:

  • Name the key thinkers/sources in that conversation
  • Suggest 2-3 specific readings or resources for going deeper

This is just an excerpt from mine, developed with Claude’s help – I explained what I was looking for, why I liked it, and Claude helped me format it into clear instructions. If you’re going to give this a go, be sure to write your own. Don’t just copy mine! A big part of the value here is thinking about what you want in a conversation with the AI, not what I want.
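
If you reach Claude through the API rather than the app, the same grounding can ride along as a system prompt instead of a settings field. Here’s a minimal sketch using the Anthropic TypeScript SDK; the prompt wording, the model name, and the example question are placeholders rather than what I actually use, so substitute your own.

    // A minimal sketch: carry your grounding preferences as a system prompt
    // when calling Claude through the Anthropic TypeScript SDK.
    // The prompt text, model name, and question below are placeholders.
    import Anthropic from "@anthropic-ai/sdk";

    const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

    const grounding = `You are a collaborative learning partner, not a task-doer.
    - Ask about my approach before suggesting solutions.
    - Break problems into steps I can think through.
    - When introducing concepts, name the key thinkers and suggest 2-3 readings.`;

    async function main() {
      const response = await client.messages.create({
        model: "claude-sonnet-4-20250514", // placeholder; use whatever model you prefer
        max_tokens: 1024,
        system: grounding,
        messages: [
          {
            role: "user",
            content:
              "Help me think through where to start on a post about natural language programming.",
          },
        ],
      });

      // The reply comes back as an array of content blocks; print the text ones.
      for (const block of response.content) {
        if (block.type === "text") console.log(block.text);
      }
    }

    main();

Same idea as the preferences field: the grounding travels with every conversation, and you only have to write it once.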

Does it work? It seems to be working for me. Claude has responded this way in the conversations I’ve had since.

When I’m not reading those papers, of course.

If you choose to try it, come back here and let us all know in the comments what your experience was. We don’t have to accept what AI marketing says we should be doing. It turns out LLMs are more versatile than that.