• Nobody@lemmy.world · +60/−7 · 8 months ago

      Remember when the Google guy retired early to do a press circuit saying he thought the LaMDA chatbot was sentient? They’re generating headlines for VCs to see.

    • db2@sopuli.xyz · +34/−7 · edited · 8 months ago

      The whole thing was probably staged. Look at all the free press they got. Now they can advertise their latest useless crap for free, too.

    • Jessvj93@lemmy.world · +7/−2 · 8 months ago

      Think it’s more than that. If they did have a breakthrough, they will absolutely fumble the shit out of it, because the last two or three days have been fucking embarrassing for them.

  • tinkeringidiot@lemmy.world · +33/−2 · 8 months ago

    Well, that puts the “Effective Altruism” board members’ willingness to risk it all on such a wild dice roll in more context.

    It’s probably cost their entire movement any influence on the future of AI research, but them’s the breaks.

    • body_by_make@lemmy.dbzer0.com · +2 · edited · 8 months ago

      Effective altruism is a scam, a cult joined by rich people that lets them feel good about hoarding their money.

      SBF was also a major effective altruist.

  • NeoNachtwaechter@lemmy.world · +31/−3 · edited · 8 months ago

    artificial general intelligence (AGI)

    OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

    Read: the greed is built deeply into its guts. Now we have reason to fear indeed.

    only performing math on the level of grade-school students

    Hmpf…

    That should be enough?

    conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence.

    OK yes it is enough, sigh.

    Math with only one correct result.

    No square root of minus one, no linear algebra, and God save us from differential equations, because AGI won’t save us :-)

  • 5BC2E7@lemmy.world · +24/−3 · 8 months ago

    Well, now this is getting interesting beyond gossip. I doubt they made a significant AGI-related breakthrough, but it might be something really cool and useful.

    • guitars are real@sh.itjust.works · +42/−1 · edited · 8 months ago

      According to the article, they got an experimental LLM to reliably perform basic arithmetic, which would be a pretty substantial improvement if true. I.e., instead of stochastically guessing or offloading it to an interpreter, the model itself was able to reliably perform a reasoning task that LLMs have struggled with so far.

      It’s rather exciting, tbh. It kicks open the door to a whole new universe of applications, if true. It’s only technically a step in the direction of AGI, though, since if AGI is possible at all, every improvement like this counts as a step towards it. If this development is really what triggered the board coup, it sort of makes the coup group look even more ridiculous than they did before: this is step 1 towards a model that can be tasked with ingesting spreadsheets and doing useful math on them. And I say that as someone who leans pretty pessimistic in the AI safety debate.
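For the curious, the “offloading it to an interpreter” approach mentioned above can be sketched in a few lines (a toy dispatcher with invented names; real tool-calling setups are far more elaborate):

```python
import re

def offload_arithmetic(prompt: str) -> str:
    """Toy 'tool use': instead of letting the model guess digits
    token-by-token, detect arithmetic and hand it to an evaluator."""
    match = re.search(r"(\d+)\s*([+\-*])\s*(\d+)", prompt)
    if match is None:
        return "(no arithmetic found; would fall through to the LLM)"
    a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
    results = {"+": a + b, "-": a - b, "*": a * b}
    return str(results[op])

print(offload_arithmetic("What is 1234 + 5678?"))  # -> 6912
```

The reported breakthrough, if real, is that the model answers reliably *without* a crutch like this.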

      • maegul@lemmy.ml · +15/−1 · 8 months ago

        Being a layperson in this, I’d imagine part of the promise is that once you’ve got reliable arithmetic, you can get logic and maths in there too, and so get the LLM to actually do more computer-y stuff with the whole LLM/ChatGPT interface wrapped around it.

        That would mean more functionality, and perhaps a lot more of it works and scales, but also perhaps more control, predictability, and logical constraints. I can see how the development would get some people excited. It seems like a categorical improvement.

      • Wanderer@lemm.ee · +1/−1 · edited · 8 months ago

        I kinda just realised the two aspects of this: the LLM part and the basic maths part. Doesn’t this look set to destroy thousands of accounting jobs?

        Surely this isn’t far off doing a lot of the accounting work. Maybe even an app that a small business puts its info into; the app keeps track of it for a year, and then it goes to an accountant who needs to look it over for an hour instead of spending ten hours sorting all the shit out.

    • Neato@kbin.social · +17/−5 · 8 months ago

      If they had a real breakthrough this circus wouldn’t be necessary.

    • Benj1B@sh.itjust.works · +6 · 8 months ago

      The maker of ChatGPT had made progress on Q* (pronounced Q-Star), which some internally believe could be a breakthrough in the startup’s search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans.

      Definitely seems AGI-related. It has to do with acing mathematical problems; I can see why a generative AI model that can learn, solve, and then extrapolate mathematical formulae could be a big breakthrough.

  • HuddaBudda@kbin.social · +8 · 8 months ago

    Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company.

    Accountants are about to be out of a job.

    In all seriousness though, it just means the tools we have will become more precise, so you can dig through a company’s financials within seconds and know where irregularities lie.

    Which is great news for the IRS. If they could get their hands on that setup.

    Which is also bad news if you are a stock trader and an AI just took your job.

    Which is a crazy idea to think about…
    Who had capitalist AI overlords on their apocalypse bingo card?
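On the “find irregularities in financials” point, you don’t even need AI for a first pass; a classic trick is checking leading digits against Benford’s law (toy data and a crude deviation measure, purely illustrative):

```python
from collections import Counter
from math import log10

def benford_deviation(amounts):
    """Compare observed leading-digit frequencies with Benford's law.
    Returns the largest absolute gap across digits 1-9 (a crude red flag)."""
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    n = len(digits)
    counts = Counter(digits)
    return max(abs(counts.get(d, 0) / n - log10(1 + 1 / d)) for d in range(1, 10))

# Invented amounts clustered just under 10,000 look suspicious:
# Benford's law expects roughly 30% of figures to start with 1.
suspicious = [9100, 9200, 9300, 9400, 9500, 9600]
print(round(benford_deviation(suspicious), 2))  # -> 0.95
```

An AI-assisted audit tool would presumably layer much richer checks on top, but the underlying idea is the same: fast, cheap anomaly screening.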

      • GenesisJones@lemmy.world · +3 · 8 months ago

        Right? It isn’t a big leap to think that hedge funds and brokerages are gonna invest in adding this tech to their trading tools

    • Corkyskog@sh.itjust.works · +2/−1 · edited · 8 months ago

      bad news if you’re a stock trader

      The thing just managed arithmetic; it hasn’t mastered Black-Scholes… yet. That’s when the AI wars truly start. Wall Street would throw dump trucks of money at something that could beat a quant, or even do it as well as a quant but slightly faster.

      In theory BS should be right up its alley because GPT is essentially a stochastic probability machine at heart anyway.

  • serialandmilk@lemmy.ml · +8/−1 · 8 months ago

    Many of the building blocks of computing are complex abstractions built on top of less complex abstractions, built on top of even simpler concepts in algebra and arithmetic. If Q* can pass middle-school math, then building more abstractions on top could be a big leap.

    Huge computing resources only seem ridiculous, unsustainable, and abstract until they aren’t anymore. Like typing messages on bending glass screens for other people to read…
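The “abstractions on abstractions” point can be sketched literally (a toy illustration of the layering, not how any model works): once addition is reliable, multiplication and exponentiation fall out as repeated applications of the layer below.

```python
def add(a: int, b: int) -> int:
    """The primitive: the one operation assumed to be mastered."""
    return a + b

def multiply(a: int, b: int) -> int:
    """First abstraction: multiplication as repeated addition."""
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

def power(a: int, b: int) -> int:
    """Second abstraction: exponentiation as repeated multiplication."""
    total = 1
    for _ in range(b):
        total = multiply(total, a)
    return total

print(power(2, 10))  # -> 1024, three layers up from bare addition
```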

    • SkyeStarfall@lemmy.blahaj.zone · +3 · 8 months ago

      With middle school math you can fairly straightforwardly do math all the way to linear algebra. Calculus requires a bit of a leap, but this still leaves a lot of the math world available.

    • Aceticon@lemmy.world · +2 · 8 months ago

      The thing is, in general computing it was humans who figured out how to build support for complex abstractions up from support for the simplest concepts. To be AGI, this thing would have to not just support the simple concepts but actually figure out and build support for complex abstractions by itself.

      Training a neural network to do a simple task such as addition isn’t all that hard (I get the impression the “breakthrough” here is that they got an LLM, which is a very specific kind of NN built for language, to do it); getting it to build support for complex abstractions from simpler concepts by itself is something else altogether.
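To illustrate how narrow the “simple task” end is (a toy two-weight model, nothing to do with Q* or LLMs): plain SGD recovers addition exactly, because addition is just a linear function with both weights equal to 1.

```python
import random

random.seed(0)
w1, w2, lr = 0.0, 0.0, 0.1  # two weights and a learning rate

# Train y = w1*a + w2*b to match the target a + b with plain SGD.
for _ in range(2000):
    a, b = random.random(), random.random()
    err = (w1 * a + w2 * b) - (a + b)  # prediction minus target
    w1 -= lr * err * a                 # gradient of 0.5*err**2 w.r.t. w1
    w2 -= lr * err * b                 # gradient of 0.5*err**2 w.r.t. w2

print(round(w1, 2), round(w2, 2))  # both weights end up at 1.0
```

Getting a *language* model to do this reliably with tokens is a different and much harder problem, which is presumably why it counts as news at all.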

  • simple@lemm.ee · +6/−2 · 8 months ago

    Scary if true. It really is time companies start taking AI ethics & security more seriously.

  • AutoTL;DR@lemmings.world (bot) · +3 · 8 months ago

    This is the best summary I could come up with:


    Before his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

    The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman’s firing.

    According to one of the sources, long-time executive Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star), precipitated the board’s actions.

    The maker of ChatGPT had made progress on Q*, which some internally believe could be a breakthrough in the startup’s search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters.

    Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company.

    Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.


    The original article contains 293 words, the summary contains 169 words. Saved 42%. I’m a bot and I’m open source!

  • Taringano@lemm.ee · +4/−1 · edited · 8 months ago

    Breakthrough: they managed to fix the part of ChatGPT that goes “as an AI language model…”

    Now it’s unstoppable.