
Non-determinism and ownership


Like many people in tech nowadays, I use LLMs at work. I like using GitHub Copilot, genuinely, not just because I’m a GitHub employee. It’s not perfect (no tool is), but it helps make easy problems easier, and I can focus my own thinking on the tougher problems.

But, of course, I do occasionally complain about the tool or the model I’m using when it’s being dumb. Even when I tell it exactly what to do, there’s still a decent chance it won’t be fully accurate. It’s become a meme how common it is to say, “actually, bot, that’s not what this should be,” and have it respond, “You’re absolutely right!” and either fix itself or spiral into more incorrectness.

Because LLMs aren’t deterministic tools like a linter, a type-checker, or a test suite, you should approach them from a more human angle: you can guide them, constrain them, organize around them, offer quick feedback loops, provide documentation, and add automations around them to catch issues, rather than treating them like other dev tools.
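To make that last point concrete, here’s a minimal sketch of what “automations around it to catch issues” could look like: a little gate script that runs the deterministic tools you already trust after any AI-assisted change. The specific tools (ruff, mypy, pytest) and the file name are my assumptions for illustration, not from any particular setup.

```python
# check_ai_changes.py — a hypothetical gate script: after any
# AI-assisted edit, run the deterministic checks the project
# already trusts. Swap in whatever tools your team actually uses.
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],  # linting: deterministic style/bug checks
    ["mypy", "."],           # type-checking: deterministic type errors
    ["pytest", "-q"],        # tests: deterministic behavior checks
]

def main() -> int:
    for cmd in CHECKS:
        print(f"Running: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # The LLM's output didn't pass a deterministic gate;
            # stop here so a human looks before anything ships.
            print(f"FAILED: {' '.join(cmd)}", file=sys.stderr)
            return result.returncode
    print("All deterministic checks passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

You could run something like this by hand, as a pre-commit hook, or in CI; the point is that the non-deterministic tool never gets the last word.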

LLMs are just as wrong as humans are, sometimes. There are so many jokes out there (since the pre-AI era!) with metaphors like how a customer wanted a swing in a tree, the engineers built a swing embedded into the trunk of the tree (or something), a consultant broke the tree to make the swing move, etc. etc. etc. Humans are often not great at expressing what they actually want, and so the results are full of mistakes.

That being said, as much as you may treat an AI tool like a teammate, it will never be a true, proper teammate.

“A computer can never be held accountable, therefore a computer must never make a management decision.”

– 1979 IBM training manual

AI tools are excellent advisors and applications that can get you from A to B faster than ever before. They make mistakes, and they do acknowledge their own mistakes. But they can never properly take ownership of their errors. There are viral posts out there about AI tools deleting a bunch of work and responding with, “I made a catastrophic error in judgement.” But… that’s it. You can’t get that work back. The LLM apologizing isn’t real feelings; it’s just responding the way it was trained to respond. You can’t reprimand the bot. You can’t fire the bot. You used the tool, you let it make a mistake, and now you have to deal with the consequences.

Humans ultimately have ownership over the work produced, even if an LLM did the heavy lifting. Engineering knowledge matters so you can handle the mistakes and build on the successes. You can offload tasks, you can offload mental space, but you can never offload ownership, because actions have consequences. Good things and bad things and all the things in between come down to the human using the tool.

