Talk to ChatGPT for five minutes, and you start to think that it could take over any number of soft-skill jobs. We routinely pay for many services that are essentially conversations: teaching, therapy, medical diagnosis, and so on. Could we drop in a suitably prompted ChatGPT and get back a good-enough teacher?
A conversation with a teacher is highly structured. The teacher is using one or more pedagogical frameworks, explicitly or implicitly. It usually takes place within a cultural institution which brings its own values and frameworks. GPT, of course, has internal representations of these things, or it couldn't produce convincing responses. But without an interface that lets us interrogate these frameworks, set goals, change them, and have the system explain the choices it makes, a ChatGPT teacher is worse than useless.
Silicon Valley believes it no longer needs to dirty its hands with “structure”. Expert Systems now seem so old-fashioned that they're a joke. All we need to do, the thinking goes, is point a sufficiently large LLM at the medical corpus, and we'll get a diagnostician as good as any human alive. Maybe even better!
But what good is a Virtual Doctor if we can't program it? We need to be able to set goals for our Virtual Doctor and have it explain its behaviour, much as we would with an intern, using the language of established medical and professional frameworks.
LLMs should be seen as an enabling technology for Expert Systems. Now that we finally have an API for human language, we can program interfaces to the actual professional frameworks that teachers, doctors, therapists, and others use. The product is software that a human teacher can use to direct a Virtual Teacher, not with best-guess LLM prompts, but with structured models of interaction relevant to the profession.
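To make that concrete, here is a minimal sketch of what such an interface might look like. Everything in it is hypothetical: the `PedagogicalFramework` fields, the move vocabulary, and the `llm_complete` stub stand in for whatever professional model and LLM API a real product would actually use.

```python
from dataclasses import dataclass, field

def llm_complete(system: str, user: str) -> str:
    """Placeholder for any chat-completion API; wire to an LLM of your choice."""
    raise NotImplementedError

@dataclass
class LessonGoal:
    """A goal the human teacher sets explicitly, not buried in a prompt."""
    skill: str               # e.g. "solving linear equations"
    mastery_criterion: str   # observable evidence that the goal is met

@dataclass
class PedagogicalFramework:
    """A structured, inspectable model of interaction: the teacher can read,
    set, and change every field instead of guessing at prompt wording."""
    name: str                    # e.g. "Socratic questioning"
    allowed_moves: list[str]     # e.g. ["probe", "hint", "recap"]
    forbidden_moves: list[str]   # e.g. ["state the full solution"]
    goals: list[LessonGoal] = field(default_factory=list)

    def to_prompt(self) -> str:
        """Compile the structured model into instructions for the LLM."""
        return (
            f"Teach using the {self.name} framework. "
            f"Permitted moves: {', '.join(self.allowed_moves)}. "
            f"Forbidden moves: {', '.join(self.forbidden_moves)}. "
            f"Lesson goals: {'; '.join(g.skill for g in self.goals)}. "
            "After every reply, name the move you used and why you chose it."
        )

def virtual_teacher_turn(framework: PedagogicalFramework,
                         student_message: str) -> str:
    """One lesson turn: the framework, not an ad-hoc prompt, directs the LLM."""
    return llm_complete(system=framework.to_prompt(), user=student_message)

# A human teacher configures the Virtual Teacher in their own terms:
socratic = PedagogicalFramework(
    name="Socratic questioning",
    allowed_moves=["probe", "hint", "recap"],
    forbidden_moves=["state the full solution"],
    goals=[LessonGoal("solving linear equations",
                      "student isolates x unaided in two fresh problems")],
)
```

The point of the sketch is that the teacher edits `allowed_moves`, `forbidden_moves`, and `goals` directly, in the vocabulary of their own profession, and the final instruction makes the system name and justify each pedagogical move, which is what makes its choices interrogable in the old Expert System sense.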