The era of front-end development may be coming to a close (yes, yes, the future is not evenly distributed; we still have plenty of Ada programmers, etc.).
LLMs hallucinate. They can’t help it. We can RLHF some of this out of them, but ultimately it’s inherent in their design. Because an LLM isn’t deterministic, it can’t replace the whole programming stack, at least not in one move. So, which piece of the stack will it replace first? I think it might be the front-end.
ChatGPT’s plugin system shows the way. The hallucinatory tendencies stop being a problem if we use the LLM only as an interface for human language. As long as it can transform the user’s desires into an API request to a trusted system, e.g. Wolfram Alpha, we get a deterministic result.
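A minimal sketch of that division of labour, in TypeScript. Everything here is hypothetical: `parseIntent` stands in for an LLM call, and `api.example.com` for a trusted Wolfram-style endpoint.

```typescript
// The LLM's only job: turn free-form text into a structured request.
interface QueryRequest {
  endpoint: "evaluate"; // hypothetical Wolfram-style operation
  expression: string;   // e.g. "integrate x^2 from 0 to 3"
}

// Stand-in for the LLM call: non-deterministic, may hallucinate. In reality
// this would prompt a model to emit JSON matching QueryRequest.
async function parseIntent(userText: string): Promise<QueryRequest> {
  return { endpoint: "evaluate", expression: userText };
}

// The trusted, deterministic system: the answer comes from here, not the LLM.
async function evaluate(req: QueryRequest): Promise<string> {
  const res = await fetch(`https://api.example.com/${req.endpoint}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return res.text();
}

// A hallucination is contained: a malformed request gets rejected by the API;
// it never becomes a confident wrong answer shown to the user.
async function answer(userText: string): Promise<string> {
  return evaluate(await parseIntent(userText));
}
```

The design point is that the model never computes the answer; it only routes.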
Why do we need UIs at all? Why can’t we define APIs and let the operating system draw the data in its own way? Because in the standard programming model, any sufficiently precise description of a complex interface is, in effect, a UI. But LLMs don’t need a well-specified description of your API; they can infer its capabilities. What’s more, they can easily handle boilerplate tasks like “draw a UI for interacting with this app”.
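To make that concrete, here is a toy version of that boilerplate task. The `llm` helper is stubbed so the sketch runs, and the schema fragment is invented:

```typescript
// Hypothetical helper: ask a model for generated markup. Stubbed for the sketch.
async function llm(prompt: string): Promise<string> {
  return `<form action="/search" method="post"><!-- generated by model --></form>`;
}

// An OpenAPI-style fragment is all the context needed; no hand-written UI spec.
const apiSchema = {
  paths: { "/search": { post: { summary: "Search the catalogue" } } },
};

// "Draw a UI for interacting with this app", as a one-shot boilerplate task.
async function renderUiFor(schema: object): Promise<string> {
  return llm(
    `Given this API schema:\n${JSON.stringify(schema, null, 2)}\n` +
      `generate a minimal HTML form for calling it.`
  );
}

renderUiFor(apiSchema).then(console.log);
```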
Imagine the Uber experience, as mediated by an LLM. The user expresses a desire for a ride; the LLM knows about the Uber API, connects to it, infers location data, and renders a map, an authorisation step, a payment flow, and whatever else is needed. By the way, this need not be a “chat” interface. The LLM can infer that you’re outside, away from home, and render a “Call a ride” button on your nearest device.
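Here’s what that mediation could look like as an agent loop, again in TypeScript. Every name is invented; the `plan` step is a canned script standing in for the LLM’s tool-choosing ability:

```typescript
// A "tool" is anything the agent can call: device APIs, Uber's API, the screen.
interface Tool {
  name: string;
  call(args: Record<string, unknown>): Promise<unknown>;
}

const tools: Tool[] = [
  { name: "getLocation", call: async () => ({ lat: 51.5, lng: -0.12 }) },
  { name: "renderButton", call: async (a) => { console.log("UI:", a); return "tapped"; } },
  { name: "requestRide", call: async (a) => ({ rideId: "r-123", etaMinutes: 4, ...a }) },
];

// Stand-in for the LLM planner. In reality a model picks the next tool from
// the goal and the history; here a canned script keeps the sketch runnable.
async function plan(goal: string, history: unknown[]) {
  const script = [
    { tool: "getLocation", args: {} },
    { tool: "renderButton", args: { label: "Call a ride" } },
    { tool: "requestRide", args: { destination: "home" } },
  ];
  return history.length < script.length
    ? script[history.length]
    : ({ done: true } as const);
}

async function agent(goal: string): Promise<void> {
  const history: unknown[] = [];
  for (let step = 0; step < 10; step++) { // bound the loop
    const next = await plan(goal, history);
    if ("done" in next) return;
    const tool = tools.find((t) => t.name === next.tool);
    if (!tool) continue; // the planner hallucinated a tool; skip, don't crash
    history.push({ tool: next.tool, result: await tool.call(next.args) });
  }
}

agent("get me a ride home");
```

Note where the hallucination risk lives: in `plan`, whose output is checked against a fixed tool list before anything deterministic happens.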
In this world, the route to the customer for SaaS companies is “present an API to the user’s AI agent”. That is a very different distribution model: I assume it would be much less open, and it would be harder for companies to make “sticky” experiences. But for the user this interface could be liberating, allowing us to think in terms of the job-to-be-done rather than which app to open and which button to press.