- cross-posted to:
- [email protected]
- [email protected]
So the research is out and these LLMs will always be vulnerable to poisoned data. That means it will always be worth our time and effort to poison these models, and they will never be reliable.


Most people seem to just half-brain the challenges anyway. So on images where it's easy to confuse something, the tests will often reject you unless you put in the wrong answer, just like everybody else does.