2 Comments

Glad you liked the piece, and clearly you're passionate about the subject.

I just read a contra view to your own -- Adam Unikowsky advocates for letting LLM/AI judges decide cases... He presents his case well, and it's worth a read.

https://adamunikowsky.substack.com/p/in-ai-we-trust


Always fun to read your astute and thought-provoking posts. It feels like the hype (and the marketing that led to it) around GenAI is what's gotten us to this point. While I doubt they footnoted results in their early days, the cautionary note they include now ("For this reason, you should not rely on the factual accuracy of output from our models.") makes absolute sense: nothing coming out of these models should be relied upon, even when the answer looks right or is right (since even a broken clock is right twice a day ;). The fact is (at least in the U.S.) that even wrong information is protected speech. Sure, defamatory speech can be a thornier issue, but it's only when we decide to treat LLMs as some sort of oracle, a tool for getting true answers, that things run amok.

As a toy, a novel and interesting technology, or frankly a parlor trick, they're fun; the trouble starts as soon as companies build them into real systems that we depend on or need to place great trust in. This is especially frustrating when we see applications like tax preparation software, for example, offer these tools so people can get answers to real taxation questions, only for them to find out later that the answers they depended on were wrong. Even with a disclaimer, there's no reason the tool should be made available at all given the lack of reliability in such contexts. Solving for how to make these tools compliant with regulations, or even worse how to properly regulate them going forward, seems like a fool's errand: it brings up tons of complexity for something that doesn't work yet, all in the continued hope that they can be treated as truth-telling systems.

What we should regulate are the companies that embed these services in their applications, and we should hold them responsible for any damages. The claim that they "just used an LLM" and it's not their fault is hogwash; if they chose to be irresponsible about placing this tech in their applications despite its obvious shortcomings, that should be viewed as negligence. Software companies have long sought to abdicate responsibility for their software with as-is provisos, or, in the case of DAOs, by calling themselves decentralized to escape the accountability requirements of a centralized organization. Enough is enough: the riches made by companies deploying these technologies at the expense of their users need to be reconciled with the damage they are also perpetrating through a lack of responsibility.

In many cases, however, it's less the pure software company I'd look to hold responsible and more the application developers and providers (frequently in other industries, since software is eating the world ;) that let end users interact with these buggy or just plain defective technologies. They are the ones deciding that these systems are "good enough" when in fact they are not. Sure, we will hear cries that this will stunt innovation, but that's just an excuse to let these nascent companies generate revenue at the expense of the rest of us.
