It’s a relatively short piece and doesn’t take that long to read through
An excerpt:
At the time that I discovered Battlestar, I had been seeing news accounts of torture being used on people imprisoned at Guantánamo and Abu Ghraib. I could not understand how these kinds of things were allowed to happen or why other hit TV shows of the era, like 24 and Lost, seemed to sanction them. Even the brilliant programs that did engage with how 9/11 changed America—The Sopranos, The Wire, and The Shield—never really challenged the country’s actions. Battlestar did. As one scholar put it, the show asks the audience uncomfortable questions like, “What does it mean to be a human? What does it mean for a society to believe it is at war? Is it possible to be moral during times of profound crisis?”



Are they really using your interactions as additional training material, though? Or is it just able to reference the prior conversation for additional context? Doing the former would seem to leave it open to being deliberately screwed with for destructive purposes.
I remember reading that Sam Altman of OpenAI complained that people being polite was costing him so much money by wasting precious compute cycles on OpenAI's servers…
But was that due to training on those polite exchanges, or merely to the LLM having to parse the unnecessary input across so many users?
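
For what it's worth, the inference-cost explanation doesn't require any training at all. Chat-style LLM APIs are generally stateless: each reply is generated by re-sending the whole conversation so far as input tokens, so polite filler gets re-processed on every later turn, multiplied across every user. A rough sketch of that accounting (word counts stand in for a real tokenizer, and the per-token price is a made-up placeholder, not an actual OpenAI figure):

```python
# Sketch: why extra "polite" tokens cost compute at inference time,
# independent of whether conversations are ever used as training data.

def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer (in practice something like
    # tiktoken would be used); word count is close enough to illustrate.
    return len(text.split())

def prompt_tokens_per_turn(history: list[str]) -> list[int]:
    # A stateless chat API re-sends all messages so far on every request,
    # so tokens from earlier turns are counted again on each later turn.
    costs, running = [], 0
    for turn in history:
        running += estimate_tokens(turn)
        costs.append(running)
    return costs

terse = [
    "Summarize this report.",
    "<model reply>",
    "Shorten it further.",
]
polite = [
    "Hello! Could you please summarize this report? Thank you so much!",
    "<model reply>",
    "Thanks, that's great! Could you please shorten it further?",
]

terse_total = sum(prompt_tokens_per_turn(terse))
polite_total = sum(prompt_tokens_per_turn(polite))

PRICE_PER_TOKEN = 0.000002  # placeholder number purely for illustration
print(f"terse prompt tokens:  {terse_total}")
print(f"polite prompt tokens: {polite_total}")
print(f"extra cost per conversation: "
      f"${(polite_total - terse_total) * PRICE_PER_TOKEN:.6f}")
```

Per conversation the difference is tiny, but since the history (including the pleasantries) is re-parsed on every turn for every user, it adds up at scale, which would explain the complaint without any of it ever being fed back into training.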