• @[email protected]
    link
    fedilink
    019 days ago

    The number of times I’ve seen a question answered with “I asked ChatGPT and blah blah blah”, where the answer is complete bullshit, makes me wonder who thinks asking the bullshit machine™ questions with a concrete answer is a good idea

    • @[email protected]
      link
      fedilink
      English
      019 days ago

      A lot of people really hate uncertainty and just want an answer. They do not care much if the answer is right or not. Being certain is more important than being correct.

      • @[email protected]
        link
        fedilink
        019 days ago

        Why not just read the first part of a Wikipedia article if they want that, though? It’s not the end-all source, but it’s better than asking the same question of a machine known to make things up.

        • @[email protected]
          link
          fedilink
          English
          019 days ago

          Because the AI propaganda machine is not exactly advertising the limitations, and the general public sees LLMs as a beefed up search engine. You and I know that’s laughable, but they don’t. And OpenAI sure doesn’t want to educate people - that would cost them revenue.

    • Tar_Alcaran · 19 days ago

      This is your reminder that LLMs are associative models. They produce things that look like other things. If you ask a question, it will produce something that looks like the right answer. It might even BE the right answer, but LLMs care only about looks, not facts.
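
      A toy way to see this for yourself (a minimal sketch, assuming Python with the Hugging Face transformers library and its small gpt2 checkpoint, not any particular chatbot): sample the same factual question a few times. You get fluent, answer-shaped text every time, and nothing anywhere in the loop checks whether any of it is true.

      ```python
      # Minimal sketch: a language model produces answer-shaped text whether or not
      # it "knows" the fact. Assumes the Hugging Face "transformers" library and the
      # small "gpt2" checkpoint; any generative model would illustrate the same point.
      from transformers import pipeline, set_seed

      generator = pipeline("text-generation", model="gpt2")
      prompt = "Q: In what year did Apollo 11 land on the Moon?\nA:"

      for seed in (0, 1, 2):
          set_seed(seed)
          out = generator(prompt, max_new_tokens=15, do_sample=True, temperature=0.9)
          # Each completion looks like an answer; nothing here verifies it against reality.
          print(out[0]["generated_text"])
      ```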

    • @[email protected]
      link
      fedilink
      019 days ago

      I don’t see the point either if you’re just going to copy verbatim. OP could always just ask AI themselves if that’s what they wanted.

    • @[email protected]
      link
      fedilink
      English
      018 days ago

      Yeah, don’t use a hallucinogenic machine for truth about the universe. That is just asking for trouble.

      Use it to give you new ideas. Be creative together. It works exceptionally well for that.

    • @[email protected]
      link
      fedilink
      018 days ago

      We’re in a post-truth world where most web searches about important topics give you bullshit answers. But LLMs have read basically all the articles already and have at least the potential to make deductions and associations about them - like this belongs to “propaganda network 4335”, or “the source of this claim is someone who has engaged in deception before”. Something like a complex fact-check machine.

      This is sci-fi currently, because today’s models are an ocean wide but can’t think deeply or analyze well - though if you press GPT about something, it can give you different “perspectives”. The next generations might become more useful at filtering out fake propaganda, so you might get answers that are sourced and referenced, and which can also reference or dispute wrong answers / talking points and their motivation - and possibly point out what emotional manipulation and logical fallacies they use to deceive you.
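
      In skeleton form, the kind of pipeline I’m imagining would look something like this (a purely hypothetical sketch in Python - every helper below is a stub I made up just to name a step, nothing like it exists as a reliable system today): break text into claims, pull up sources for each, and attach the source’s track record instead of just generating an answer.

      ```python
      # Hypothetical "fact check machine" skeleton. Every helper is a stand-in stub that
      # only names a step (claim extraction, retrieval, source reputation); a real system
      # would need models and curated indexes behind each one.
      from dataclasses import dataclass, field

      @dataclass
      class Finding:
          claim: str
          sources: list = field(default_factory=list)           # documents supporting or disputing the claim
          reputation_notes: list = field(default_factory=list)  # e.g. "author has published fabricated quotes before"

      def extract_claims(text: str) -> list[str]:
          # Stub: a real system would use a model to split text into checkable claims.
          return [s.strip() for s in text.split(".") if s.strip()]

      def retrieve_sources(claim: str) -> list[str]:
          # Stub: a real system would search dated, indexed primary sources.
          return []

      def reputation(source: str) -> str:
          # Stub: a real system would track each source's history of deception.
          return "unknown"

      def fact_check(article_text: str) -> list[Finding]:
          findings = []
          for claim in extract_claims(article_text):
              sources = retrieve_sources(claim)
              findings.append(Finding(claim, sources, [reputation(s) for s in sources]))
          return findings

      print(fact_check("The election was stolen. The source is propaganda network 4335."))
      ```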

      • @[email protected]
        link
        fedilink
        017 days ago

        Hey MuskAI, is this verifiable fact about Elon’s corruption true?

        No, that’s fake news. Here’s a few conspiracy blogs that prove it. Buy more Trump Coin 💰🇺🇸

          • @[email protected]
            link
            fedilink
            017 days ago

            Respectfully, you have no clue what you’re talking about if you don’t recognize that case as the exception and not the rule.

            Many of these early generation LLMs are built from the same model or trained on the same poorly curated datasets. They’re not yet built for pushing tailored propaganda.

            It’s trivial to bake bias into a model or put guardrails up. Look at DeepSeek’s lockdown on any sensitive Chinese politics. You don’t even have to be that heavy-handed - just poison the training data with a bunch of fascist sources.
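
            To make it concrete (a hedged toy sketch in Python, not any vendor’s actual implementation): the crudest “guardrail” is just a wrapper that refuses before the model ever sees the question, and poisoning the training data is the quieter version of the same move.

            ```python
            # Toy illustration of a heavy-handed guardrail: refuse before the model even runs.
            # The "model" below is a stand-in stub, not any real product's API.
            BLOCKED_TOPICS = ("tiananmen", "taiwan", "winnie the pooh")

            def underlying_model(prompt: str) -> str:
                return f"(model-generated answer to: {prompt!r})"   # stub for an actual LLM call

            def guarded_chat(prompt: str) -> str:
                if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
                    return "Let's talk about something else."       # canned refusal, no reasoning involved
                return underlying_model(prompt)

            print(guarded_chat("What happened at Tiananmen Square in 1989?"))
            print(guarded_chat("What is the capital of France?"))
            ```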

            • @[email protected]
              link
              fedilink
              017 days ago

              You are arguing there is a possibility it will go that way, while I was talking about the possibility of a more advanced AI that is open source and makes verifiable arguments with sources. The negative outcome is very important, but you’re practically dog-piling me to suppress a possible positive outcome.

              RIGHT NOW even without AI the vast majority of people are simply unable to perceive reality on certain important topics. Because of propaganda, polarization, profit seeking through clickbait, and other effects. You can’t trust, and you can’t verify because you ain’t got the time.

              My argument is that a more advanced and open source AI could provide reliable information because it has the capability to filter and analyze a vast ocean of data.

              My argument is that this potential capability might be crucial to escape the current (non AI) misinformation epidemic. What you are arguing is not an argument against what I’m arguing.

              • @[email protected]
                link
                fedilink
                017 days ago

                I apologize if my phrasing is combative; I have experience with this topic and get a knee-jerk reaction to supporting AI as a literacy tool.

                Your argument is flawed because it implicitly assumes that critical thinking can be offloaded to a tool. One of my favorite quotes on that:

                The point of cognitive automation is NOT to enhance thinking. The point of it is to avoid thinking in the first place.

                (coincidentally from an article on the topic of LLM use for propaganda)

                You can’t “open source” a model in a meaningful and verifiable way. Datasets are massive and, even if you had the compute to audit them, poisoning can be much more subtle than explicitly trashing the dataset.

                For example, did you know you can control bias just by changing the ordering of the dataset? There’s an interesting article from the same author that covers well-known poisoning vectors, and that’s already a few years old.
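
                If it sounds implausible that mere ordering matters, here’s the mechanism in miniature (a toy single-pass logistic model in plain Python - obviously nothing like LLM training at scale, but it shows why order matters whenever optimization doesn’t run to convergence): the same four examples, fed in two different orders, end at different parameters and disagree about a borderline input.

                ```python
                import math

                # One online SGD pass over the SAME examples, in two different orders.
                def sgd_pass(examples, lr=0.5):
                    w, b = 0.0, 0.0
                    for x, y in examples:                      # single pass, no reshuffling, no convergence
                        pred = 1.0 / (1.0 + math.exp(-(w * x + b)))
                        err = pred - y
                        w -= lr * err * x
                        b -= lr * err
                    return w, b

                data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
                for order in (data, data[::-1]):
                    w, b = sgd_pass(order)
                    p0 = 1.0 / (1.0 + math.exp(-b))            # the model's opinion about the borderline input x = 0
                    print(f"w={w:.3f}  b={b:+.3f}  P(y=1 | x=0)={p0:.3f}")
                ```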

                These problems are baked into any AI at this scale, regardless of implementation. The idea that we can invent a way out of a misinformation hell of our own design is a mirage. The solution will always be to limit exposure and make media literacy a priority.