
You're totally right, and I agree that this makes the answer to 'how do we solve this' quite hard! Deletion seems like at least one option, but it's contingent on the data subject becoming aware of the hallucinated/false data in the first place, which is itself a hard problem.

For instance, I'm trying to get a full access request fulfilled by OpenAI, and it's proving a challenge because they don't seem to get that hallucinated data about me is still personal data about me.

It's possible that, at a technical level, the fix looks more like what they're already doing to address bias/misinformation/disinformation: strict higher-level prompting and overrides by OpenAI, e.g. 'When a question is about a person, temperature should be 0.2 and sources must be provided for any claims made.' That would default to a very rigid, strict and limited result, and would control against hallucinations more effectively. Something like the sketch below.
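Here's a rough sketch of what I mean, in Python with the current OpenAI SDK. The model name, the guardrail wording, and the person-detection check are all placeholders I've made up for illustration; whatever OpenAI actually runs internally would look very different:

```python
# Sketch: clamp sampling and require sourcing when a query is about a person.
# The heuristic and policy text below are illustrative placeholders, not
# OpenAI's real moderation layer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUARDRAIL = (
    "If the user's question is about an identifiable person, answer "
    "conservatively: make no claim you cannot attribute to a source, "
    "cite the source for every claim, and otherwise say you don't know."
)

def looks_person_related(question: str) -> bool:
    # Placeholder heuristic: any capitalized non-initial token.
    # A real system would use NER or a trained classifier here.
    return any(tok[:1].isupper() for tok in question.split()[1:])

def answer(question: str) -> str:
    person_related = looks_person_related(question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": GUARDRAIL if person_related
                else "You are a helpful assistant.",
            },
            {"role": "user", "content": question},
        ],
        # Low temperature narrows sampling for person-related queries.
        temperature=0.2 if person_related else 1.0,
    )
    return response.choices[0].message.content

print(answer("What is known about Jane Q. Public?"))
```

Temperature alone only narrows the sampling distribution, of course; it doesn't make the model factual, which is why the sources requirement would have to do the real work.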
