Article from The Atlantic, archive link: https://archive.ph/Vqjpr

Some important quotes:

The tensions boiled over at the top. As Altman and OpenAI President Greg Brockman encouraged more commercialization, the company’s chief scientist, Ilya Sutskever, grew more concerned about whether OpenAI was upholding the governing nonprofit’s mission to create beneficial AGI.

The release of GPT-4 also frustrated the alignment team, which was focused on further-upstream AI-safety challenges, such as developing various techniques to get the model to follow user instructions and prevent it from spewing toxic speech or “hallucinating”—confidently presenting misinformation as fact. Many members of the team, including a growing contingent fearful of the existential risk of more-advanced AI models, felt uncomfortable with how quickly GPT-4 had been launched and integrated widely into other products. They believed that the AI safety work they had done was insufficient.

Employees from an already small trust-and-safety staff were reassigned from other abuse areas to focus on this issue. Under the increasing strain, some employees struggled with mental-health issues. Communication was poor. Co-workers would find out that colleagues had been fired only after noticing them disappear on Slack.

Summary: Tech bros want money, tech bros want speed, tech bros want products.

Scientists want safety, researchers want to research…

  • tal@lemmy.today · 10 months ago

    Many members of the team, including a growing contingent fearful of the existential risk of more-advanced AI models, felt uncomfortable with how quickly GPT-4 had been launched and integrated widely into other products.

    GPT-4 and anything similar isn’t going to pose an existential threat to humanity.

    Eventually, yeah, there is probably a possibility of existential risk from AI. I don’t know where that line ultimately is, and getting an idea of that might be something important for humanity to figure out, but I am pretty confident that whatever OpenAI is presently doing isn’t it.

    For the same reason, Musk’s proposed six-month moratorium on AI work doesn’t make much sense. We’re not six months away from an existential threat to humanity.

    I think that funding efforts to have people in the field working on the Friendly AI problem is a good idea. But that’s another story.

    • jcarax@beehaw.org · 10 months ago

      I’m much more worried about the social implications. Namely, the displacement of workers and introduction of new efficiencies to workflows, continuing to benefit only those who are rich and in power, and driving more of us towards poverty.

      It’s not an immediate existential threat, but it’s absolutely a serious issue that we aren’t paying enough attention to.

      • cosmic_slate@dmv.social · 10 months ago

        Displacement of workers isn’t necessarily a bad thing as long as it’s spread out over a long enough time for people to adjust.

        I suspect (and hope) we’re not going to see people losing jobs so much as jobs in certain industries being created at a slower rate. Workflows take a long time to change at larger companies. I suspect a lot of the value will be realized by smaller or just-starting companies that could more easily afford, say, a $500/mo AI “task helper” service versus hiring a $60k/yr position.

        • jcarax@beehaw.org · 10 months ago

          How did the industrial and information revolutions work out for us? Sure we live lives of convenience, but our entire existences have been manipulated into making the rich richer.

          Looking at long and short term trends in the wealth gap, I have absolutely no faith that this will go well.

          • cosmic_slate@dmv.social · 10 months ago

            I agree the wealthy only grew wealthier during those revolutions, but the average standard of living for the lower classes rose in both periods as well.

            For example, with the Industrial Revolution, newly created industrial jobs led to generally increased pay over rural jobs, improved transportation access, and started a focus on education.

            That isn’t to say workers weren’t abused in this system, though.

            I don’t think the wealth inequality problem is something that will get better or worse with an “AI Revolution”. There are plenty of jobs available to keep wages where they are. This could only be solved with tremendous government action or an incredible accident.

        • sculd@beehaw.org (OP) · 10 months ago

          You do realize that a lot of people are already being displaced by AI, right? These are not “unskilled” jobs, either. The illustrators who used to get that work, for example, probably spent thousands of hours getting to that level.

          AI is already taking video game illustrators’ jobs in China

          https://restofworld.org/2023/ai-image-china-video-game-layoffs/

          CNET used AI to write articles. It was a journalistic disaster. - The Washington Post

          https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/

    • Quasari@programming.dev · 10 months ago

      The apps using GPT-4 without regard to safety can be, though. Example: replacing a human with a chatbot for suicide prevention.

      • tal@lemmy.today · 10 months ago

        Being an existential threat is a much higher bar – that’s where humanity’s continued existence is at threat.

        There are plenty of technologies that you could hypothetically put somewhere where a life might be at stake, but very few that could put humanity’s existence on the line.

        • brothershamus@kbin.social · 10 months ago

          It’s the same situation, just writ large: dumb human decisions to put AI where it shouldn’t be. Heck, you could put it in charge of the nuclear missiles now if you wanted to. Don’t, though. That’d be really, really stupid.

          Part of my knee-jerk dislike of the AI hype is that it’s glorified text completion. It doesn’t know shit; it only knows the probability of the next word given the words so far. AGI is not happening anytime soon, and all this is techbro theatre for the sake of money.

          Anyone who reads a wall of bland generated text and thinks we’re about to talk to god is seriously mistaken.
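The “probability of the next word” point above can be sketched concretely. This is a toy illustration (hypothetical corpus and names, nothing to do with GPT-4’s internals): a bigram model that counts which word tends to follow which, then completes text by always emitting the likeliest continuation.

```python
# A toy bigram "language model": count which word follows which in a
# corpus, then complete text by repeatedly taking the likeliest next word.
# Real LLMs use neural networks over subword tokens, but the generation
# loop is the same idea: predict the next token, append it, repeat.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# follows[w] counts how often each word appears immediately after w.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, steps=3):
    out = [word]
    for _ in range(steps):
        candidates = follows[out[-1]]
        if not candidates:
            break  # no known continuation
        # Greedy decoding: pick the single most probable next word.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # "cat" follows "the" most often, so: the cat sat on
```

Swap the frequency table for a trained neural network over subword tokens and sample from the predicted distribution instead of always taking the top word, and you have the basic generation loop of models like GPT-4.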