• @[email protected] · 16 points · 2 months ago

    The death panels that Republican fascists claimed Democrats were creating are now here, and they're being run by Republicans.

    I hate this planet

  • ✺roguetrick✺ · 11 points · 2 months ago

    What are you going to train it off of, since basic algorithms aren't sufficient? Past committee decisions? If that's the case, you're hard-coding whatever human bias you're supposedly trying to eliminate. A useless exercise.

    • @[email protected] · 6 points · 2 months ago

      A slightly better metric to train it on would be chances of survival/years of life saved thanks to the transplant. However, those also suffer from human bias, due to the past decisions that influenced who got a transplant and thus what data we were able to gather.
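      A toy version of that metric, just to make the bias point concrete (all names and numbers here are invented, not real clinical data):

      ```python
      # Hypothetical "years of life saved" label one might train a model on.
      # It inherits the same selection bias: we only observe post-transplant
      # outcomes for people who were chosen under past (biased) decisions.

      def years_of_life_saved(expected_years_with: float,
                              expected_years_without: float) -> float:
          """Expected benefit of the transplant for one candidate."""
          return max(0.0, expected_years_with - expected_years_without)

      # e.g. a candidate projected to live 12 years with a transplant vs 2 without:
      print(years_of_life_saved(12.0, 2.0))  # 10.0
      ```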

      • ✺roguetrick✺ · 3 points · edited · 2 months ago

        And we do that with basic algorithms informed by research. But then the score gets tied, and we have to decide who has the greatest chance of following through on their regimen based on things like past history and the means to acquire the medication, go to the appointments, follow a diet, and not drink. An AI model will optimize that based on wild demographic data that is correlative without being causative, and it will end up being a black-box racist in a way that a committee that has to clarify its thinking to other members couldn't, you watch.
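        Very roughly, the current setup is something like the sketch below (every factor, weight, and threshold is made up for illustration); the tie-break step is exactly where a black-box model would quietly absorb demographic proxies instead of reasoning a committee can defend to its members.

        ```python
        # Toy version of "basic algorithm informed by research, committee breaks ties".
        # Every field name, weight, and threshold here is hypothetical.

        def clinical_score(patient: dict) -> float:
            """Score from research-informed clinical factors only."""
            score = 10.0 * patient["predicted_5yr_survival"]       # 0.0-1.0
            score += 5.0 if patient["years_on_waitlist"] > 2 else 0.0
            score -= 3.0 if patient["comorbidity_count"] > 3 else 0.0
            return score

        def allocate(candidates: list[dict]):
            ranked = sorted(candidates, key=clinical_score, reverse=True)
            top = clinical_score(ranked[0])
            tied = [p for p in ranked if abs(clinical_score(p) - top) < 0.01]
            if len(tied) == 1:
                return tied[0]
            # This is the step the committee handles today: adherence history,
            # means to get medication, attend appointments, follow a diet, etc.
            return "tie: refer to committee"
        ```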

    • @[email protected] · 2 points · 2 months ago

      Nah bud, you just authorize whatever the doctor orders, because they're more knowledgeable about the situation.

      • @[email protected] · 1 point · 2 months ago

        That makes logical sense, but what about the numbers? They can’t go up if we keep spending the money we promised to spend on the 69th most effective and absolutely most expensive healthcare system in the world. What is this, an essential service? Rubes.

  • FaceDeer · 6 points · 2 months ago

    Yeah, I’d much rather have random humans I don’t know anything about making those “moral” decisions.

    If you’re already answered, “No,” you may skip to the end.

    So the purpose of this article is to convince people of a particular answer, not to actually evaluate the arguments pro and con.

  • @[email protected] · 5 points · 2 months ago

    Say what you will about Will Smith, but his movie I, Robot made a good point about this 17 years ago.

    (damn I’m old)

  • @[email protected] · 3 points · 2 months ago

    Yeah. It’s much more cozy when a human being is the one that tells you you don’t get to live anymore.

  • @[email protected] · 2 points · 2 months ago

    That’s not what the article is about. I think putting some more objectivety into the decisions you listed for example benefits the majority. Human factors will lean toward minority factions consisting of people of wealth, power, similar race, how “nice” they might be or how many vocal advocates they might have. This paper just states that current AIs aren’t very good at what we would call moral judgment.

    It seems like algorithms would be the most objective way to do this, but I could see AI contributing by looking for more complicated outcome trends. E.g.: hey, it looks like people with this gene mutation and chronically uncontrolled hypertension tend to live less than 5 years after cardiac transplant - consider weighting your existing algorithm by 0.5%.
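    Concretely, that weighting idea might look something like this sketch (the gene flag, the hypertension field, and the 0.5% figure are just the hypothetical example above, not real clinical data):

    ```python
    # Sketch of an ML-surfaced trend nudging an existing rule-based score,
    # rather than replacing it. All names and numbers are hypothetical.

    def base_transplant_score(patient: dict) -> float:
        """The existing, human-designed allocation algorithm (stubbed out)."""
        return patient["baseline_score"]

    def ml_risk_flags(patient: dict) -> list[tuple[str, float]]:
        """Trend findings surfaced by a model, each with a suggested tweak."""
        flags = []
        if patient.get("gene_mutation_x") and patient.get("uncontrolled_hypertension"):
            # e.g. "median survival < 5 years post cardiac transplant in this group"
            flags.append(("mutation_x + uncontrolled HTN", -0.005))  # -0.5%
        return flags

    def adjusted_score(patient: dict) -> float:
        score = base_transplant_score(patient)
        for reason, tweak in ml_risk_flags(patient):
            score *= 1.0 + tweak  # small, logged adjustment doctors can review
        return score
    ```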

  • mechoman444 · 1 point · 2 months ago

    I still remember “death panels” from the Obama era.

    Now it’s ai.

    Whatever.

    • @[email protected] · 1 point · 2 months ago

      Everything Republicans complained about can be done under Trump twice as bad and twice as evil, and they will be 'happy' and sing his praises.

  • @[email protected] · 1 point · 2 months ago

    I don’t mind AI. It is simply a reflection of whoever is in charge of it. Unfortunately, we have monsters who direct humans and AI alike to commit atrocities.

    We need to get rid of the demons, else humanity as a whole will continue to suffer.

    • @[email protected] · 3 points · 2 months ago

      If it wasn’t exclusively used for evil it would be a wonderful thing.

      Unfortunately we also have capitalism. So everything has to be just the worst all the time so that the worst people alive can have more toys.

      • @[email protected] · 1 point · 2 months ago

        Thing is, those terrible people don't even enjoy everything they already own, and they don't understand that they're killing cool things in the crib. People make inventions and entertain if they can…because it is fun, and they think they've got neat things to show the world. Problem is, prosperity is needed to allow people the luxury of trying to create.

        The wealthy are murdering the golden geese of culture and technology. They won't be happier for it, and will simply use their chainsaw to keep killing humanity in a desperate wish to find happiness.

          • @[email protected] · 1 point · 2 months ago

            Why shouldn’t they have long term benefits for researchers?

            Reminds me a bit of when CRISPR got big, people were worried to no end about potential dangers, designer babies, bioterrorism (“everybody can make a killer virus in their garage now”) etc. In reality, it has been a huge leap forward for molecular biology and has vastly helped research, cancer treatment, drug development and many other things. I think machine learning could have a similar impact. It’s already being used in development of new drugs, genomics, detection of tumours just to name a few

  • Steve Dice · 0 points · 2 months ago

    Hasn’t it been demonstrated that AI is better than doctors at medical diagnostics and we don’t use it only because hospitals would have to take the blame if AI fucks up but they can just fire a doctor that fucks up?

    • @[email protected] · 2 points · 2 months ago

      I believe a good doctor, properly focused, will outperform an AI. AIs are also still prone to hallucinations, which is extremely bad in medicine. Where they win is against a tired, overworked doctor with too much on his plate.

      Where it is useful is as a supplement. An AI can put a lot of seemingly innocuous information together to spot more unusual problems. Rarer conditions can be missed, particularly if they share symptoms with more common problems. An AI that can flag possibilities for the doctor to investigate would be extremely useful.

      An AI diagnostic system is a tool for doctors to use, not a replacement.

  • @[email protected] · -2 points · edited · 2 months ago

    I don’t really know how it’s better a human denying you a kidney rather than a AI.

    It’s not like it’s something that makes more or less kidneys available for transplant anyway.

    Terrible example.

    It would have been better to make an example out of some other treatment that does not depend on finite recourses but only in money. Still, a human is now rejecting your needed treatments without the need of an AI, but at least it would make some sense.

    In the end, as always, people who has chosen the AI as the “enemy” have not understand anything about the current state of society and how things work. Another example of how picking the wrong fights is a path to failure.

    • @[email protected] · 3 points · 2 months ago

      Responsibility. We’ve yet to decide as a society how we want to handle who is held responsible when the AI messes up and people get hurt.

      You’ll start to see AI being used as a defense of plausible deniability as people continue to shirk their responsibilities. Instead of dealing with the tough questions, we’ll lean more and more on these systems to make it feel like it’s outside our control so there’s less guilt. And under the current system, it’ll most certainly be weaponized by some groups to indirectly hurt others.

      “Pay no attention to that man behind the curtain”

      • @[email protected] · 0 points · 2 months ago

        Autocorrect? What the fuck? Models are inherently conservative? Wtf?

        You show a vast lack of knowledge. Your source of information is probably just propaganda.

        I know it's an easy fight to pick. A trending dogma that's easy to support. You don't really need to think; you just got pointed at an easy enemy, one that's easy to identify and easy to be against, and you follow that.

        But the true enemy is not there.

        Your heart is probably in the right place. But wasting your strength fighting something useless is an incredible waste of resources and spirit. You'll achieve nothing, while the true enemy (human beings who don't care whether AI is a success or not) keeps laughing at you.

        They have been oppressing you since before electricity. If you think AI is a tool they need for oppression, you are deeply wrong.

          • @[email protected] · 0 points · edited · 2 months ago

            If you talk like that, no one is going to want to talk with you.

            What the hell did you just write, accusing me of antisemitism?

            It's really hard to even understand what you are talking about, really.

            The sacrifices? The slaughter? The Jews? The nobility? Shilling? Pogroms? Roblox, Minecraft and graphics cards? A supposedly academic-level knowledge of LLMs, while calling them autocorrect?

            I'm not going to follow this conversation. That's just my decision right now.