this article is incredibly long and rambly, but please enjoy as this asshole struggles to select random items from an array in presumably Javascript for what sounds like a basic crossword app:

At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling. I made some false starts using what I could gather, and while I did my thing—programming—Ben told GPT-4 what he wanted and got code that ran perfectly.

Fine: commands like those are notoriously fussy, and everybody looks them up anyway.

ah, the NP-complete problem of just fucking pulling the file into memory (there’s no way this clown was burning a rainforest asking ChatGPT for a memory-optimized way to do this), selecting a random index between 0 and the array’s length minus 1, and maybe storing that index in a second array if you want to guarantee uniqueness. there’s definitely not literally thousands of libraries for this if you seriously can’t figure it out yourself, hackerman
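
for the record, here’s the entire “problem” as a node sketch (the dictionary path is an assumption — any newline-separated word list works — and a Set stands in for the second array, same uniqueness trick):

```javascript
// print 100 unique random lines from a dictionary file
const fs = require("fs");

const lines = fs.readFileSync("/usr/share/dict/words", "utf8")
  .split("\n")
  .filter(Boolean); // drop blank lines, including the trailing one

const picked = new Set(); // chosen indices; a Set guarantees uniqueness
while (picked.size < Math.min(100, lines.length)) {
  // random index between 0 and lines.length - 1
  picked.add(Math.floor(Math.random() * lines.length));
}

for (const i of picked) console.log(lines[i]);
```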

I returned to the crossword project. Our puzzle generator printed its output in an ugly text format, with lines like "s""c""a""r""*""k""u""n""i""s""*""a""r""e""a". I wanted to turn output like that into a pretty Web page that allowed me to explore the words in the grid, showing scoring information at a glance. But I knew the task would be tricky: each letter had to be tagged with the words it belonged to, both the across and the down. This was a detailed problem, one that could easily consume the better part of an evening.
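
and for reference, roughly the whole “better part of an evening” problem (a sketch — the grid format is guessed from the single quoted row, and a real puzzle would have more rows):

```javascript
// tag every grid cell with its across and down words
// (input format guessed from the quoted generator output; "*" marks a block)
const raw = ['"s""c""a""r""*""k""u""n""i""s""*""a""r""e""a"']; // one row for the example

// '"s""c"..."a"' -> ["s", "c", ..., "a"]
const grid = raw.map(line => [...line.matchAll(/"([^"]*)"/g)].map(m => m[1]));
const height = grid.length, width = grid[0].length;

// cells[r][c] ends up as { letter, across?, down? }
const cells = grid.map(row => row.map(letter => ({ letter })));

// walk one row or column; every maximal run of letters is a word
function tagRun(coords, key) {
  let run = [];
  const flush = () => {
    if (run.length > 1) {
      const word = run.map(([r, c]) => grid[r][c]).join("");
      for (const [r, c] of run) cells[r][c][key] = word;
    }
    run = [];
  };
  for (const [r, c] of coords) {
    if (grid[r][c] === "*") flush();
    else run.push([r, c]);
  }
  flush();
}

for (let r = 0; r < height; r++)
  tagRun(Array.from({ length: width }, (_, c) => [r, c]), "across");
for (let c = 0; c < width; c++)
  tagRun(Array.from({ length: height }, (_, r) => [r, c]), "down");

console.log(cells[0][0]); // { letter: "s", across: "scar" }
```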

fuck it’s convenient that every example this chucklefuck gives of ChatGPT helping is for incredibly well-trodden toy and example code. wonder why that is? (check out the author’s other articles for a hint)

I thought that my brother was a hacker. Like many programmers, I dreamed of breaking into and controlling remote systems. The point wasn’t to cause mayhem—it was to find hidden places and learn hidden things. “My crime is that of curiosity,” goes “The Hacker’s Manifesto,” written in 1986 by Loyd Blankenship. My favorite scene from the 1995 movie “Hackers” is…

most of this article is this type of fluffy cringe, almost like it’s written by a shitty advertiser trying and failing to pass themselves off as a relatable techy

  • self@awful.systemsOP · 10 months ago

    if the capitalists succeed in their omnipresent goal to vastly reduce the perceived value of your labor, you can always write terrible code that kills in one of the most tedious languages ever invented

    do these ideas give you comfort

    • thesmokingman@programming.dev · 10 months ago (edited)

      Do you have anything else to offer, or is your solution to roll over and do nothing? Some of us still have families and networks to support, so we can’t just devote all our time to sniping labor on the internet in preparation for the glorious revolution. Given the discussions you have on your instance, I’m kinda disappointed this tepid response is the best you have.

    • datarama@awful.systems · 10 months ago

      This is the point where the anxiety patient has to do a rambling reality check.

      It’s obvious they want mass-layoff-as-a-service. They openly say so themselves. But it’s less obvious, at least at this point in time, that generative AI (at least with models like the current ones) can actually deliver that. I’m worried because I extrapolate from current trends - but my worries are pretty likely to be wrong. I’m too good at worrying for my own good, and at this point, mass layoffs and immiseration are still involuntary speculative fiction. In general, when transformative technologies have come along, people have worried about all the wrong things - people worried that computers would make people debilitatingly bad at math, not that computers would eventually enable surveillance capitalism.

      We’re currently in the middle of an AI bubble. There are companies with enormous valuations despite not even having a product, and enormous amounts of resources are being poured into systems that nobody at present knows how to make money from. The legal standing of the major industry players is still unestablished, and world-leading experts disagree about what these models can realistically be expected to do and what they can’t. The hype itself is almost certainly part of a deliberate strategy: when ChatGPT landed a year ago, OpenAI had already finished training GPT-4 (which had begun long before). When they released it, it looked like they had leapt from GPT-3 to GPT-4 in a few months. The image input capability that came out a few months ago was in the original GPT-4 model (according to their publication at the time); they just disabled it until recently. All of this has been very good at keeping the hype bubble inflated, which has had the effect of both getting investors (and other tech companies) to pour money into the project and making a lot of people really worried for their livelihoods. I freak out whenever I see a flashy demo showing that an LLM can solve some problem that no developer actually needs to use their brain for solving, because freaking out is unfortunately what comes naturally to me when the stakes are high.

      I don’t think this is like the crypto bubble. Unlike crypto, people are using LLMs and diffusion models to produce things, ranging from sometimes-useful code and “good enough” illustrations for websites, to spam, homework assignments, and cover letters, to nonconsensual deepfake porn and phishing. We now have an infinite bullshit machine, and lots of what people do at work involves producing and managing bullshit. But it’s not all bullshit. A couple of months ago, the “jagged frontier” paper gave some examples of tasks for management consultants, with and without LLM assistance. Unsurprisingly, writing fluffy and eloquent memos was much more productive with an LLM in tow, but on complex analytical tasks some of the consultants actually became less productive than the control group. In my own attempts to use them in programming, my tentative conclusion is that at the moment they help to some extent when the stumbling block is knowledge, but not much when it’s reasoning or skill. And more crucially, it seems that an LLM without a human holding its hand isn’t very good at programming (see the abysmal resolution rate for GitHub issues in the SWE-bench paper). At the moment, they’re code generators rather than automatic programmers, and no programmer I know works as a code generator. Crucially, not a single one of them (who doesn’t also struggle with anxiety) worries about losing their job to LLMs - especially not the ones who regularly use them.

      A while ago, I read a blog post by Laurence Tratt, in which he mentions that he gets lots of productivity out of LLMs when he needs a quick piece of Javascript for some web work (something he doesn’t work with daily), but very little for his day job in programming language implementation. This, it seems to me, likely isn’t because programming language implementation is harder than web dev or because there’s not enough programming language implementation literature in the training set (there’s a lot of it, judging by how much PLT trivia even small models can spit out) - it’s because someone like him has high ambitions when working with programming language implementation, and he knows so much about it that the things he doesn’t know are things the LLM also doesn’t know.

      I don’t know if my worries are reasonable. I’m the sort of person who often worries unreasonably, and I’ve never felt as uncertain about the future of my field as I do at the moment. The one thing I’m absolutely sure of is that there’s no future in which I write code for the US military, though.

      • self@awful.systemsOP · 10 months ago

        When ChatGPT landed a year ago, OpenAI had already finished training GPT-4 (which had begun long before). When they released it, it looked like they had leapt from GPT-3 to GPT-4 in a few months. The image input capability that came out a few months ago was in the original GPT-4 model (according to their publication at the time); they just disabled it until recently. All of this has been very good at keeping the hype bubble inflated, which has had the effect of both getting investors (and other tech companies) to pour money into the project and making a lot of people really worried for their livelihoods.

        this is an excellent point I haven’t seen before, and it clarifies a lot of the oddness around how these things have been deployed. it’s marketing-oriented development, and it’s being used to paper over the severe limitations in these models

        I don’t know if my worries are reasonable. I’m the sort of person who often worries unreasonably, and I’ve never felt as uncertain about the future of my field as I do at the moment. The one thing I’m absolutely sure of is that there’s no future in which I write code for the US military, though.

        that’s mostly where I’m at too, though I feel like my anxiety has enough reason behind it that I want to actively do something about it — usually something to do with this instance, though a lot of my recent projects (learning a lot more about hardware design languages, planning a community for folks to share their open source work) are deeply influenced by that anxiety too