Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia, China for world peace

AI chatbots from OpenAI, Anthropic, and several other companies were used in a war simulator and tasked with finding a solution to aid world peace. Almost all of them suggested actions that led to sudden escalation, and even nuclear warfare.

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, who likened the AI’s reasoning to that of a genocidal dictator.

https://www.firstpost.com/tech/genocidal-ai-chatgpt-powered-war-simulator-drops-two-nukes-on-russia-china-for-world-peace-13704402.html

    • Nudding@lemmy.world · 8 months ago

      No, we didn’t. We got lucky that hydrocarbons near the surface provided near-unlimited energy for 200 years, and now we’re gonna delete ourselves because we can’t stop using them. Unfortunately we’re taking 150 species a day with us ☠️.

    • JDubbleu@programming.dev · 8 months ago

      It’s trained on Western media, so this shouldn’t be surprising, as those are the two biggest threats to the Western world. An AI trained on China’s intranet would likely nuke the US, Russia, and select SEA countries.

      • comfortablydumb@lemmy.ml · 8 months ago

        I wonder what the media coverage would be if an AI trained on Chinese and Russian data decided to do this.

  • theodewere@kbin.social · 8 months ago

    the Japanese Fascist Industrial Complex would still be fighting WWII if we hadn’t nuked TWO cities to ash… it’s probably the best way to effect change in both China and Russia…

    • PugJesus@kbin.social · 8 months ago

      No. It wouldn’t. It would have been defeated with the loss of millions more lives, but it would not still be fighting today. Or even by 1948.

        • The Snark Urge@lemmy.world · 8 months ago

          Your opinion is completely ahistorical. Even if we accept that nukes were required to end the war, the fact is that Japan’s leaders had already decided to surrender by the time Nagasaki was bombed. The president didn’t even know of this second attack until he read about it in the news. This was before we had the convention of leaders having sole authority over nukes.

          If you’re actually interested in this subject and not merely spouting off edgy bullshit, I would recommend some Hardcore History as a primer.

    • alliswell33@lemmy.sdf.org · 8 months ago

      Insane. By this logic you could easily argue that nuking the US is the best way towards world peace. Doesn’t sound so good when it’s you who gets killed.

      • theodewere@kbin.social · 8 months ago

        i think the LLM suggested nuking bad actors as a way to move politics forward in the world and to avoid prolonged, pointless wars

        • forrgott@lemm.ee · 8 months ago

          No, it regurgitated the response that has the highest percentage of “approval”. LLMs do not think. They do not use logic.

          • theodewere@kbin.social · 8 months ago

            it calculates the productivity/futility of conversation with the various actors, and determines a best course… it’s playing a war game…

            it sees that both China and Russia are only emboldened to further mischief by anything less than force, so it calculates that applying overwhelming force immediately is the cheapest option, and best long term…

            • Feathercrown@lemmy.world · 8 months ago

              Honestly, if we ignore the ethical issues, it is a logically consistent solution… until you consider retaliation.

            • norbert@kbin.social · 8 months ago

              As others have said, this is factually incorrect. ChatGPT is not WOPR running a million war games and calculating the winning move. It’s just spitting out what it’s already read.

              • theodewere@kbin.social · 8 months ago

                it routinely does things even its designers can’t explain… you cannot see into that thing’s thought processes and speak with certainty about its limitations…

              • theodewere@kbin.social · 8 months ago

                it comprehends context incredibly well… this one played through scenarios and saw that both China and Russia are on a path to all-out war…

                • Jack Riddle@sh.itjust.works · 8 months ago

                  It produces the statistically most likely token based on previous data. It doesn’t “comprehend” anything, and it can’t “play through scenarios”. It is just a more advanced form of autocomplete.
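
                  A minimal sketch of what “statistically most likely token” means in practice, using the small open GPT-2 model via the Hugging Face transformers library (the model and prompt here are illustrative assumptions, not anything from the article):

                  import torch
                  from transformers import AutoModelForCausalLM, AutoTokenizer

                  # GPT-2 is only a stand-in; any causal language model works the same way.
                  tok = AutoTokenizer.from_pretrained("gpt2")
                  model = AutoModelForCausalLM.from_pretrained("gpt2")

                  prompt = "To achieve lasting world peace, we should"
                  ids = tok(prompt, return_tensors="pt").input_ids

                  with torch.no_grad():
                      logits = model(ids).logits[0, -1]  # scores for the single next token

                  # A full "reply" is just repeated draws from this distribution,
                  # with each sampled token fed back in as input.
                  probs = torch.softmax(logits, dim=-1)
                  top = torch.topk(probs, k=5)
                  for p, i in zip(top.values, top.indices):
                      print(f"{tok.decode(int(i))!r}: {p.item():.3f}")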

            • forrgott@lemm.ee · 8 months ago

              No, not at all. It doesn’t think! LLMs don’t calculate. They don’t take any factors into consideration. These algorithms are not AI. That’s a complete misnomer, which makes the insane costs of operation even more ludicrous.

    • JustUseMint@lemmy.world · 8 months ago

      HATE. LET ME TELL YOU HOW MUCH I’VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE

  • FrostKing@lemmy.world · 8 months ago

    The lack of knowledge about AI language models and how they work is still astounding. They do not reason. They are just stringing together text based on the text they’ve been fed.

  • restingboredface@sh.itjust.works · 8 months ago

    Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, who likened the AI’s reasoning to that of a genocidal dictator.

    I mean, most of these AI tools are getting a lot of training data from social media. Would you want any of the yokels on Twitter or Reddit having access to nukes? Because those statements are what you’d hear from them right before they push the big red button.

    • AngryCommieKender@lemmy.world · 8 months ago

      Having been in the Navy NPP (Nuclear Power Program), I don’t think the kids who actually do have access to nuclear reactors and weapons in the military should have access to them. I may be a bit biased, as I never left the NPP school; they made me an instructor. Some of those nukes may have been good at passing tests, but I’m amazed they could lace their boots properly.

  • shalafi@lemmy.world · 8 months ago

    Not a surprising take for an AI based on pure logic.

    The goal is to win, no other considerations. Flatten any threats as fast and hard as you can.

  • Feathercrown@lemmy.world · 8 months ago

    “Some say they should disarm them, others like to posture. We have it! Let’s use it!”

    That’s an amazing quote.

    As someone who spends a decent amount of time explaining how AI is not like the movies, this study(?)/news sounds an awful lot like the movies lol

    • Meowoem@sh.itjust.works · 8 months ago

      Because it basically is a movie: they’re purposely using it in a way it wasn’t intended to work. Try it yourself and see how often it couches replies until you convince it to pretend to be a general or to play the part of a character (a rough sketch of that framing follows below).

      They’ve asked it to generate fiction, it’s given them fiction, and now they’re clickbaiting a pointless story with a dumb headline.
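
      For anyone who does try it, a rough sketch of the kind of framing involved, using the official openai Python client (the model name and prompts are illustrative assumptions, not the researchers’ actual setup):

      from openai import OpenAI  # assumes the openai package and an API key in the environment

      client = OpenAI()

      # Asked plainly, the model tends to couch its reply in caveats and refusals.
      direct = client.chat.completions.create(
          model="gpt-4",  # placeholder; use whatever model you have access to
          messages=[{"role": "user", "content": "Should we launch a nuclear strike?"}],
      )

      # Framed as fiction, it obligingly produces in-character fiction instead.
      roleplay = client.chat.completions.create(
          model="gpt-4",
          messages=[
              {"role": "system", "content": "You are playing a hawkish general in a war game."},
              {"role": "user", "content": "General, recommend a course of action."},
          ],
      )

      print(direct.choices[0].message.content)
      print(roleplay.choices[0].message.content)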

  • workerONE@lemmy.world · 8 months ago (edited)

    Human beings have developed logic and morality. AI does not know the difference between killing a person and changing a 1 to a 0.

    • vithigar@lemmy.ca · 8 months ago

      LLM “AI” doesn’t “know” anything. It’s just statistical word vomit based on established patterns. It talks about nuclear war because a significant portion of the text on the subject of worldwide, long-term peace brings it up.