So the research is out: these LLMs will always be vulnerable to poisoned data. That means it will always be worth our time and effort to poison these models, and they will never be reliable.

  • ragas@lemmy.ml · 2 months ago

    Most people seem to just half-brain the challenges anyway. So on images where it's easy to confuse something, the test will often reject you unless you put in the wrong answer, just like everybody else.