How to help someone use AI


Teach you, I will

So you've learned enough about AI tools to be dangerous, and now you want to pay it forward. You want to pass that knowledge on and help someone else.

Commendable! (as my good friend ChatGPT would say)

Not so fast.

Have you ever been bruised, demotivated, or deflated by an abrasive, condescending teacher? Of course you have. We all have.

(If we're being honest, most of us have BEEN the abrasive teacher at one point or another.)

Recently, I came across some ancient wisdom, and it applies perfectly to the art of teaching another person how to interact with our new and suddenly-ubiquitous AI assistants.

Ancient wisdom, in this context, means it comes from the 1990s, when personal computers were first becoming ubiquitous.

The following is adapted from How to help someone use a computer by Phil Agre.


AI practitioners are fine human beings, but they do a lot of harm in the ways they "help" other people use AI. Here's what's ACTUALLY helpful when you're helping people use AI tools.

First you have to tell yourself some things:

  • Nobody is born knowing this stuff.
  • You've forgotten what it's like to be a beginner.
  • If it's not obvious to them, it's not obvious.
  • An AI tool is a means to an end. The person you're helping probably cares mostly about the end. This is reasonable.
  • Their knowledge of the AI is grounded in what they can do and see -- "when I do this, it does that". They need to develop a deeper understanding, but this can only happen slowly -- and not through abstract theory but through the real, concrete situations they encounter in their work.
  • Beginners face a language problem: they can't ask questions because they don't know what the words mean, they can't know what the words mean until they can successfully use the system, and they can't successfully use the system because they can't ask questions.
  • You are the voice of authority. Your words can wound.
  • By the time they ask you for help, they've probably tried several things. As a result, their chat might be in a strange state. This is natural.
  • They might be afraid that you're going to blame them for the problem.
  • The best way to learn is through apprenticeship -- that is, by doing some real task together with someone who has a different set of skills.
  • Your primary goal is not to solve their problem. Your primary goal is to help them become one notch more capable of solving their problem on their own. So it's okay if they take notes.
  • Most user interfaces are terrible. When people make mistakes it's usually the fault of the interface. You've forgotten how many ways you've learned to adapt to bad interfaces.
  • Knowledge lives in communities, not individuals. An AI user who's part of a community of AI users will have an easier time than one who isn't.

Having convinced yourself of these things, you are more likely to follow some important rules:

  • Don't take the keyboard. Let them do all the typing, even if it's slower that way, and even if you have to point them to every key they need to type. That's the only way they're going to learn from the interaction.
  • Find out what they're really trying to do. Is there another way to go about it?
  • Maybe they can't tell you what they've done or what happened. In this case you can ask them what they are trying to do and say, "Show me how you do that".
  • Attend to the symbolism of the interaction. Try to squat down so your eyes are just below the level of theirs. When they're looking at the UI, look at the UI. When they're looking at you, look back at them.
  • When they do something wrong, don't say "no" or "that's wrong". They'll often respond by doing something else that's wrong. Instead, just tell them what to do and why.
  • Try not to ask yes-or-no questions. Nobody wants to look foolish, so their answer is likely to be a guess. "Did you use zero-shot or chain-of-thought prompting?" will get you less information than "What did you say when you started the chat?"
  • Explain your thinking. Don't make it mysterious. If something is true, show them how they can see it's true. When you don't know, say "I don't know". When you're guessing, say "let's try ... because ...". Resist the temptation to appear all-knowing. Help them learn to think the problem through.
  • Be aware of how abstract your language is. "Start a new chat" is abstract and "press this key" is concrete. Don't say anything unless you intend for them to understand it. Keep adjusting your language downward towards concrete units until they start to get it, then slowly adjust back up towards greater abstraction so long as they're following you. When formulating a take-home lesson ("when it does this and that, you should try such-and-such"), check once again that you're using language of the right degree of abstraction for this user right now.
  • Whenever they start to blame themselves, respond by blaming the AI. Then keep on blaming the AI, no matter how many times it takes, in a calm, authoritative tone of voice. If you need to show off, show off your ability to criticize bad design. When they get nailed by a false assumption about the AI's behavior, tell them their assumption was reasonable. Tell yourself that it was reasonable.
  • Take a long-term view. Who do users in this community get help from? If you focus on building that person's skills, the skills will diffuse to everyone else.
  • Never do something for someone that they are capable of doing for themselves.
  • Don't say "it's in the manual." (LLMs don't have a manual.)

What do you think?

Does this sound like a good approach to you? Hit 'Reply' and let me know. I read every message.

Adam


Whenever you're ready, here are 2 ways I can help you:

[Individual] One-off AI Coaching Call: Get unstuck! Solve a problem or address an opportunity with AI assistance.

[Team] AI Accelerator Program: An interactive workshop series for teams. Accelerate your team’s AI adoption and start reaping the benefits of AI more fully.

Adam Lorton

I help executives and their teams combine the power of AI with the principles of Deep Work to get unstuck, move faster, and deliver excellent experiences for customers. Subscribe for prompts, case studies, and stories in your inbox weekly!
