Discussion about this post

direwolff

Really great, thought-provoking piece here…again 😃. So this got me thinking about libel law and defamation. First off, I’m not a lawyer and do not even play one on TV, not to mention that I’m an idiot, so I lack the mental capacity to be one even if I wanted to be. With that out of the way…

“To prove prima facie defamation, a plaintiff must show four things: 1) a false statement purporting to be fact; 2) publication or communication of that statement to a third person; 3) fault amounting to at least negligence; and 4) damages, or some harm caused to the reputation of the person or entity who is the subject of the statement.”

If a statement was made by Gemini or ChatGPT about your friend Heidi, but unlike her examples, it went a bit further and said that she’s renowned for being wholly untrustworthy and unreliable as an attorney because of several blown cases, and it then turned out that this was all a hallucination, totally fabricated, could Google or OpenAI be held liable given their clear negligence in not having taken measures (or having taken measures that nonetheless failed) to prevent users of their services from finding these outright false statements that their bots were claiming to be true? When these LLMs sit under the covers of other applications, whatever disclaimers may exist about the quality of their results are usually not evident. To the extent Heidi could show that she lost business after people consulted the bots to determine whether she was a suitable attorney for their needs, could she go after these companies with a credible case? Let's take it a bit further and say Heidi tested this herself, found the error, and reached out to Google or OpenAI to let them know. The companies claim they don't know how this happens and that it's not practicable to fix. Then the initial case happens where she actually loses one or two clients to these false statements. Can she claim defamation at that point, now that the companies have been made aware of the issue but have done nothing to remedy the offense? 🤔

Mahdi Assan

I agree with your argument here, that hallucinated data can still be personal data. I am wondering, though, if there is a practical problem with this. What measures can be put in place to rectify the inaccuracies in the hallucinated data? I think this might be a bit tricky. My understanding is that the hallucination (or the inaccuracy) is not necessarily a function of inaccurate training data; it is a function of the model. The model learns from its training data a probability distribution over possible next tokens and uses it to predict what the best response to the prompt should be. So even if all the training data were 100% factually accurate (though I'm not sure that is possible), the model could still produce inaccurate outputs, because the nature of that output is probabilistic rather than deterministic (and the model is so big and complex that its behaviour is sometimes hard to anticipate). That is not to say that the right to rectification does not exist; if the hallucinated data is personal data, then data subjects should be able to exercise their rights. It is more a question of the practical fulfilment of the right to rectification. Curious what you think of this.
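
To make that point concrete, here is a minimal toy sketch in Python of the mechanism described above. The prefix about "Heidi", the candidate tokens, and the probabilities are all invented for illustration and do not reflect any real model's internals; the sketch only shows how sampling from a learned next-token distribution can produce a fluent claim that no training document ever asserted.

```python
import random

# Toy illustration only -- not a real model; the probabilities below are
# invented. A language model learns how likely tokens are to follow one
# another; at generation time it *samples* from that distribution, so it can
# produce a fluent sentence that no training document ever asserted, even if
# every training sentence was factually accurate.

# Hypothetical next-token distribution after the prefix
# "Heidi is a lawyer who has ..." (purely illustrative numbers).
next_token_probs = {
    "won": 0.45,     # consistent with the (assumed accurate) training data
    "argued": 0.30,  # also consistent
    "lost": 0.25,    # statistically plausible phrasing, but factually false here
}

def sample_next_token(probs):
    """Sample one token according to the given probability distribution."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

completion = sample_next_token(next_token_probs)
print("Heidi is a lawyer who has", completion, "many cases.")

# Over many runs, roughly a quarter of the generations pick "lost": an
# inaccuracy introduced by the probabilistic sampling step itself, not by any
# wrong record in the training data -- which is what makes rectification at
# the level of the training data so tricky.
```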

