jaykrown@lemmy.world to AI News@lemmy.world · English · 14 days ago
France will investigate Musk’s Grok chatbot after Holocaust denial claims (apnews.com)
Grandwolf319@sh.itjust.works · 14 days ago
I don’t understand this. All LLMs can hallucinate; it’s a feature. Hopefully what they mean is to take this opportunity to put some regulations on all LLMs.

gbzm@piefed.social · 9 days ago
Still, though, who’s liable when they hallucinate something illegal?