• blakestacey@awful.systemsM · 10 months ago

      I couldn’t tell what you were saying. Answering “yes” to the question of whether you were writing “in sarcasm or support” is not at all informative.

      • Hanabie@sh.itjust.works · 10 months ago

        It means it was meant to be sarcastic and supportive. I thought answering “yes” to an “or” question was familiar to most people on the internet these days.

        • Shitgenstein1@awful.systems · 10 months ago

          tbh, one thing I’m tired of from the internet is exactly the post-ironic conflation of “just kidding but also sincerely” as wit. It’s actually quite old (mid-to-late 2000s) and frequently a vehicle for the most vile views on the internet. And it’s not clever.

    • GorillasAreForEating@awful.systemsOP · 10 months ago

      I hate to say it, but even sneerclub can get a bit biased and tribal sometimes. He who fights with monsters, and so on.

      I suspect that watching the rationalists bloviate, hype themselves up, and repeatedly fail for years on end has lulled people into thinking they can’t do anything right, but I think that’s clearly not the case anymore. Despite all the cringe and questionable ethics, OpenAI has made a real and important accomplishment.

      They’re in the big leagues now. We should not underestimate the enemy.

      • froztbyte@awful.systems · 10 months ago

        (this gets dangerously close to the debate rule, so I’ll leave it to mods to draw the line in reply to this)

        What, specifically, are you referencing as the accomplishment? Money? Access to power? Because while I’d agree on those things, it still isn’t really all that notable - that’s been the SFBA dynamic for years. It’s why the internet was for years so full of utterly worthless companies, whose only claim on our awareness was built on being able to spend their way there.

        For openai, the money: it wasn’t free, it’s still short, and it’s already problematic. I’ve seen enough of those deals going around, from the inside, to say fairly comfortably that I suspect the rosy veneer they present is about as thorough as an old-school film prop front.

        The power? Well, leveraged and lent power, enabled by specific people… and, arguably, now curtailed - because he tried to assert his own views against that power. Because he tried to bite the hand that feeds, and he nearly had all his toys taken away.

        A team? Eh, lots of people who’ve built teams. A company? Same. Something of a product? Same. None of these elevate him to genius.

        Do I think the man is, in some manner, intelligent? Yes. In some particular domains he’s arguably one of the luminaries of his field (or, in a most extremely dark other possibility, an extremely good thief). I might be able to accept “genius” under this latter definition, given some measure of proof, if that were the substantive point of argument. But: it is not.

        There is no proof that anything openai has produced is anywhere near their claims. Every visible aspect is grifty, with notable boasts that again and again (so far) fall flat (arguably because these boasts are made out of self-serving interest).

        As to “underestimating the enemy”: I hope the above demonstrates to you that I do not, and that I think about this fairly comprehensively. Which is why I can tell you this quite certainly: mocking the promptfans and calling them names for their extremely overcomplicated mechanical turk remains one of the best strategies available for handling these ego-fucking buffoon nerds and all their little fans.

        • datarama@awful.systems · 10 months ago

          GPT-4 is a technical accomplishment. I think it’s ridiculous to even entertain the notion that it might be getting “sentient”, and I’m not at all convinced that there is any way from advanced autocomplete to the superintelligence that will kill all the infidels and upload the true believers into digital heaven.

          You could (correctly) point out that all the heavy lifting wrt. developing the transformer architecture had already been done by Google, and that OpenAI’s big innovation was “everything on the internet is the training set” (meaning it’s going to be very difficult to make a test set that isn’t full of things that look a lot like the training set - virtually guaranteeing impressive performance on human exam questions) and securing enough funding to make that feasible. I’ve said elsewhere that LLMs are as much (or more) an accomplishment in Big Data as they are one in AI… but at this point in time, those two fields are largely one and the same, anyway.

          Prior to LLMs (and specifically OpenAI’s large commercial models), we didn’t have a single software system that could write poetry, generate code, explain code, answer zoology questions, rewrite arbitrary texts in arbitrary other styles, invent science fiction scenarios, explore alternate history, simulate the Linux terminals of fictional people, and play chess. It’s not very good at most of what it does (it doesn’t write good poetry, a lot of its code is buggy, it provides lethal dietary advice for birds, its fiction is formulaic, etc.) - but the sheer generality of the system, and the fact that it can be interacted with using natural language, are things we didn’t have before.

          There is certainly some mechanical turking going on behind the scenes (“viral GPT fails” tend to get prodded out of it very quickly!), but it can’t all be mechanical turking - it would not be possible for a human being to read and answer arbitrary questions about a 200-page novel as quickly as GPT-4-Turbo (or Claude) does it, or to blam out task-specific Python scripts as quickly as GPT-4 with Code Interpreter does it.

          I’m all for making fun of promptfans and robot cultists, but I also don’t think these systems are the useless toys they were a few years ago.

          How much of this is Sutskever’s work? I don’t know. But @GorillasAreForEating was talking about OpenAI, not just him.

          (if this is violating the debate rule, my apologies.)

          • earthquake@lemm.ee · 10 months ago

            It’s not very good at most of what it does

            don’t think these systems are the useless toys

            It’s excellent at what it does, which is create immense reams of spam, make the internet worse in profitable ways, and generate at scale barely sufficient excuses to lay off workers. Any other use case, as far as I’ve seen, remains firmly at the toy level.

            But @GorillasAreForEating was talking about OpenAI, not just him.

            Taking a step back… this is far removed from the point of origin: @Hanabie claims Sutskever specifically is “allowed to be weird” because he’s a genius. If we move the goalposts back to where they started, it becomes clear it’s not accurate to categorise the pushback as “OpenAI has no technical accomplishments”.

            I ask that you continue to mock rationalists who style themselves the High Poobah of Shape Rotators, chanting about turning the spam vortex into a machine God, and also mock anyone who says it’s OK for them to act this way because they have a gigantic IQ. Even if the spam vortex is impressive on a technical level!

            • datarama@awful.systems · 10 months ago

              It’s excellent at what it does, which is create immense reams of spam, make the internet worse in profitable ways, and generate at scale barely sufficient excuses to lay off workers. Any other use case, as far as I’ve seen, remains firmly at the toy level.

              I agree! What I meant about it not being very good at what it does is that it writes poetry - but bad poetry. It generates code - but buggy code. It answers questions about what to feed a pet bird - but its answer is as likely as not to kill your poor non-stochastic parrot. This, obviously, is exactly what you need for a limitless spam machine. Alan Blackwell - among many others - has pointed out that LLMs are best viewed as automated bullshit generators. But the implications of a large-scale bullshit generator are exactly what you describe: it can flood the remainder of the useful internet with crap, and be used as an excuse to displace labour (the latter because while not all jobs are “bullshit jobs”, a lot of jobs involve a number of bullshit tasks).

              I ask that you continue to mock rationalists who style themselves the High Poobah of Shape Rotators, chanting about turning the spam vortex into a machine God, and also mock anyone who says it’s OK for them to act this way because they have a gigantic IQ.

              Obviously.

              I’ve said this before: I’m not at all worried about the robot cultists creating a machine god (or screwing up and accidentally creating a machine satan instead); I’m worried about the collateral damage from billions of corporate dollars propping up labs full of robot cultists who think they’re creating machine gods. And unfortunately, GPT and its ilk have upped the ante on that collateral damage compared to when the cultists were just sitting around making DOTA-playing bots.

            • GorillasAreForEating@awful.systemsOP · 10 months ago

              I suppose the goalpost shifting is my fault: the original comment was about Sutskever, but I shifted to talking about OpenAI in general, in part because I don’t really know to what extent Sutskever is individually responsible for OpenAI’s tech.

              also mock anyone who says it’s OK for them to act this way because they have a gigantic IQ.

              I think people are missing the irony in that comment.

              • datarama@awful.systems · 10 months ago

                Guilty as charged: I missed the irony in it.

                (I’m the sort of person, unfortunately, who often misses irony.)

              • earthquake@lemm.ee · 10 months ago

                I’m still not convinced Hanabie was being ironic, but if so missing the satire is a core tradition of Sneer Club that I am keeping alive for future generations.

                • GorillasAreForEating@awful.systemsOP · 10 months ago

                  I think there’s a non-ironic element too. Sutskever can be both genuinely smart and a weird cultist; just because someone is smart in one domain doesn’t mean they aren’t immensely foolish in others.

            • datarama@awful.systems · 10 months ago

              Sure. What I mean by “generality” is that it can be used for many substantially different tasks - it turns out that there are many tasks that can be approached (though - in this case - mostly pretty poorly) just by predicting text. I don’t mean it in the sense of “general intelligence”, which I don’t know how to meaningfully define (and I’m skeptical it even constitutes a meaningful concept).

              In my opinion, this ultimately says more about the role of text in our society than it does about the “AI” itself, though. If a lot of the interaction between humans and various social and technical systems is done using text, then there will be many things a text-predictor can do.
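              (A toy sketch of my own - nothing to do with OpenAI’s actual models - of what “just predicting text” means at its most primitive: a bigram model that samples each next word from counts over a tiny corpus.)

```python
import random
from collections import defaultdict

# A bigram "language model": for each word in a tiny corpus, remember which
# words followed it, then generate by repeatedly sampling a next word.
corpus = "the cat sat on the mat and the cat slept on the mat".split()
model = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    model[cur].append(nxt)

random.seed(0)
word, output = "the", ["the"]
for _ in range(6):
    word = random.choice(model.get(word, corpus))  # fall back if no successors
    output.append(word)
print(" ".join(output))
```

              Transformers replace the lookup table with a learned function over long contexts, but the framing - next-token prediction - is the same.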

        • GorillasAreForEating@awful.systemsOP · 10 months ago

          The accomplishment I’m referring to is creating GPT/DALL-E. Yes, it’s overhyped, unreliable, arguably unethical and probably financially unsustainable, but when I do my best to ignore the narratives and drama surrounding it and just try out the damn thing for myself I find that I’m still impressed with it as a technical feat. At the very, very least I think it’s a plausible competitor to google translate for the languages I’ve tried, and I have to admit I’ve found it to be actually useful when writing regular expressions and a few other minor programming tasks.
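          (For concreteness, the regex tasks I mean are of this flavour - this example is my own hand-written illustration, not model output:)

```python
import re

# The kind of pattern I'd otherwise fumble by hand: match ISO-8601 dates
# (YYYY-MM-DD), rejecting obviously impossible months and days.
iso_date = re.compile(r"\b(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])\b")

text = "released 2023-11-17, patched 2023-13-40"
matches = [m.group(0) for m in iso_date.finditer(text)]
print(matches)  # only the valid date survives: ['2023-11-17']
```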

          In all my years of sneering at Yud and his minions I didn’t think their fascination with AI would amount to anything more than verbose blogposts and self-published research papers. I simply did not expect that the rationalists would build an actual, usable AI instead of merely talking about hypothetical AIs and pocketing the donor money, and it is in this context that I say I underestimated the enemy.

          With regards to “mocking the promptfans and calling them names”: I do think that ridicule can be a powerful weapon, but I don’t think it will work well if we overestimate the actual shortcomings of the technology. And frankly sneerclub as it exists today is more about entertainment than actually serving as a counter to the rationalist movement.

          • datarama@awful.systems · 10 months ago

            The problem here is that “AI” is a moving target, and what “building an actual, usable AI” looks like is too. Back when OpenAI was demoing DOTA-playing bots, they were also building actual, usable AIs.

            When I was in university a very long time ago, our AI professor went with a definition I’ve kept with me ever since: an “AI system” is a system performing a task at the very edge of what we’d thought computers were capable of until then. Chess-playing and pathfinding used to be “AI”; now they’re just “algorithms”. At the moment, natural language processing and image generation are “AI”. If we take a more restrictive definition and define “AI” as “machine learning” (thereby tossing out nearly the entire field from 1960 to about 2000), then we’ve had very sophisticated AI systems for a decade and a half - the scariest examples being the recommender systems deployed by the consumer surveillance industry. IBM Watson (remember that very brief hype cycle?) was winning Jeopardy contests and providing medical diagnoses in the early 2010s, and image classifiers progressed from fun parlor tricks to horrific surveillance technology.
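            (To illustrate the “now they’re just algorithms” half: pathfinding, once flagship AI, is today a textbook routine. A minimal breadth-first search, my own sketch:)

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over an adjacency dict - once 'AI', now a textbook routine."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable

maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(shortest_path(maze, "A", "E"))  # ['A', 'B', 'D', 'E']
```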

            The big difference, and what makes it feel very different now, is in my opinion largely that GPT much more closely matches our cultural mythology of what an “AI” is: A system you can converse with in natural language, just like HAL-9000 or the computers from Star Trek. But using these systems for a while pretty quickly reveals that they’re not quite what they look like: They’re not digital minds with sophisticated world models, they’re text generators. It turns out, however, that quite a lot of economically useful work can be wrung out of “good enough” text generators (which is perhaps less surprising if you consider how much any human society relies on storytelling and juggling around socially useful fictions). This is of course why capital is so interested and why enormous sums of money are flowing in: GPT is shaped as a universal intellectual-labour devaluator. I bet Satya Nadella is much more interested in “mass layoff as a service” than he is in fantasies about Skynet.

            Second, unlike earlier hype cycles, OpenAI made GPT-3.5 onwards available to the general public with a friendly UI. This time, it’s not just a bunch of Silicon Valley weirdos and other nerds interacting with the tech - it’s your boss, your mother, your colleagues. We’ve all been primed by the aforementioned cultural mythology, so now everybody is looking at something that resembles a predecessor of HAL-9000, Star Trek computers and Skynet - so now you have otherwise normal people worrying about the things that were previously only the domain of aforementioned Silicon Valley weirdos.

            Roko’s Basilisk is as ridiculous a concept as it ever was, though.

            • GorillasAreForEating@awful.systemsOP · 10 months ago

              The problem here is that “AI” is a moving target, and what “building an actual, usable AI” looks like is too. Back when OpenAI was demoing DOTA-playing bots, they were also building actual, usable AIs.

              For some context: prior to the release of chatGPT I didn’t realize that OpenAI had personnel affiliated with the rationalist movement (Altman, Sutskever, maybe others?), so I didn’t make the association, and I didn’t really know about anything OpenAI did prior to GPT-2 or so.

              So, prior to chatGPT the only “rationalist” AI research I was aware of were the non-peer reviewed (and often self-published) theoretical papers that Yud and MIRI put out, plus the work of a few ancillary startups that seemed to go nowhere.

              The rationalists seemed to be all talk and no action, so really I was surprised that a rationalist-affiliated organization had any marketable software product at all, “AI” or not.

              And FWIW, I was taught a different definition of AI when I was in college, but it seems to be one of those terms that gets defined differently by different people.

              • datarama@awful.systems · 10 months ago

                My old prof was being slightly tongue-in-cheek, obviously. But only slightly: he’d been active in the field since back when it looked like Lisp machines were poised to take over the world, neural nets looked like they’d never amount to much, and all we’d need to get to real thinking machines was hiring lots of philosophers to write symbolic-logic descriptions of common-sense tasks. He’d seen exciting AI turn into boring algorithms many, many times - and seen many more “almost there now!” approaches that turned out to lead nowhere in particular.

                He retired years ago, but I know he still keeps himself updated. I should write him a mail and ask if he has any thoughts about what’s currently going on in the field.