this article is incredibly long and rambly, but please enjoy as this asshole struggles to select random items from an array in (presumably) JavaScript for what sounds like a basic crossword app:

At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling. I made some false starts using what I could gather, and while I did my thing—programming—Ben told GPT-4 what he wanted and got code that ran perfectly.

Fine: commands like those are notoriously fussy, and everybody looks them up anyway.

ah, the NP-complete problem of just fucking pulling the file into memory (there’s no way this clown was burning a rainforest asking ChatGPT for a memory-optimized way to do this), selecting a random index between 0 and the array’s length minus 1, and maybe storing that index in a second collection if you want to guarantee uniqueness. there are definitely not literally thousands of libraries for this if you seriously can’t figure it out yourself, hackerman
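for reference, the whole exercise is about a dozen lines. a minimal sketch (mine, not the article’s actual code; assuming Node.js, since the article never says, and the standard Unix wordlist path):

```javascript
// a minimal sketch: synchronous I/O for brevity, whole file pulled into
// memory (dictionary files are tiny; no rainforests were burned here)
const fs = require("fs");

const lines = fs.readFileSync("/usr/share/dict/words", "utf8")
  .split("\n")
  .filter(Boolean); // drop the trailing blank line

const picked = new Set(); // tracking used indices guarantees uniqueness
while (picked.size < Math.min(100, lines.length)) {
  picked.add(Math.floor(Math.random() * lines.length)); // 0..length-1
}

for (const i of picked) {
  console.log(lines[i]);
}
```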

I returned to the crossword project. Our puzzle generator printed its output in an ugly text format, with lines like "s""c""a""r""*""k""u""n""i""s""*""a""r""e""a". I wanted to turn output like that into a pretty Web page that allowed me to explore the words in the grid, showing scoring information at a glance. But I knew the task would be tricky: each letter had to be tagged with the words it belonged to, both the across and the down. This was a detailed problem, one that could easily consume the better part of an evening.

fuck it’s convenient that every example this chucklefuck gives of ChatGPT helping is for incredibly well-treaded toy and example code. wonder why that is? (check out the author’s other articles for a hint)
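and for the “better part of an evening” grid-tagging problem, here’s a rough sketch (again mine, not the article’s; assuming `*` marks a block as in the quoted output, and that tagging each cell with the start coordinates of its across and down words is enough to hang scoring info off of):

```javascript
// grid is an array of equal-length strings; "*" marks a block
function tagCells(grid) {
  const tags = grid.map(row => [...row].map(ch =>
    ch === "*" ? null : { letter: ch, across: null, down: null }
  ));

  for (let r = 0; r < tags.length; r++) {
    for (let c = 0; c < tags[r].length; c++) {
      if (!tags[r][c]) continue;
      // a word starts where the left/top neighbour is a block or the edge;
      // otherwise the cell inherits its neighbour's word-start coordinates
      tags[r][c].across = (c === 0 || !tags[r][c - 1]) ? [r, c] : tags[r][c - 1].across;
      tags[r][c].down   = (r === 0 || !tags[r - 1][c]) ? [r, c] : tags[r - 1][c].down;
    }
  }
  return tags;
}

// e.g. tagCells(["scar*kunis*area"]) for the row quoted above
```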

I thought that my brother was a hacker. Like many programmers, I dreamed of breaking into and controlling remote systems. The point wasn’t to cause mayhem—it was to find hidden places and learn hidden things. “My crime is that of curiosity,” goes “The Hacker’s Manifesto,” written in 1986 by Loyd Blankenship. My favorite scene from the 1995 movie “Hackers” is

most of this article is this type of fluffy cringe, almost like it’s written by a shitty advertiser trying and failing to pass themselves off as a relatable techy

  • self@awful.systemsOP · 10 months ago

    I help maintain an open-source OS for industrial embedded applications.

    fuck yes. there’s something weirdly exciting about work like that — not only is it a unique set of constraints, but it’s very likely that an uncountable number of people (myself possibly included) have interacted with your code without ever knowing they did

    But the explicit purpose of generative AI is the devaluation of intellectual and creative labour, and right now, a lot of money is being spent on an attempt to make people like me redundant. Perhaps this is just my anxiety speaking, but it makes me terribly uneasy.

    absolutely same. I keep seeing other programmers uncritically fall for poorly written puff pieces like this and do everything they can to replace themselves with an LLM, and the bottom drops out of my stomach every time. I’ve never before seen people misunderstand their own career and supposed expertise so thoroughly that they don’t realize the only future in that direction is one where they’re doing a much more painful version of the same job (programming against cookie cutter LLM code) for much, much less pay. it’s the kind of goal that could only have been dreamed up by someone who’s never personally survived poverty. and that’s before you count the damage LLM training is doing to the concept of releasing open source code or even just programming for yourself, since there’s nothing you can do to stop some asshole company from pilfering your code.

    • fnix@awful.systems · 10 months ago

      the only future in that direction is one where they’re doing a much more painful version of the same job (programming against cookie cutter LLM code) for much, much less pay.

      To the extent that LLMs actually make programming more “productive”, isn’t the situation analogous to the way the power loom was bad for skilled handweavers whilst making textiles more affordable for everyone else?

      I should perhaps say that I’m saying this as someone who is just starting out as a web developer (really chose the right time for that, hah). I try to avoid LLMs and even strictly unnecessary libraries for now, because I like learning about how everything works under the hood and want to get an intimate grasp of what I’m doing. But I can also see that ultimately that’s not what people pay you for, and that once you’ve built up sufficient skill to quickly parse LLM output, the demands of the market may make using them unavoidable.

      To be honest, I feel as conflicted & anxious about it all as others here have already described. Maybe I am just too green to fully understand the value that I would eventually bring, but can I really, in good conscience, say that a customer should pay me more when someone else can provide a similar product that’s “good enough” at a much lower price?

      Sorry for being another bummer. :(

      • self@awful.systemsOP · 10 months ago

        I’m not sure the power loom analogy works, because power looms are (to my non-weaver knowledge) fit for purpose. if power looms’ output had required significant rework by a skilled weaver (one paid significantly less for essentially the same amount of work done more tediously, per my point above), if they had relied on stolen patterns from all of the world’s handweavers, and if they had been crushingly inefficient to run per woven piece, I seriously doubt history would remember them as a successful invention

        unfortunately, we’re living in uniquely awful times, and decades of tech’s strange, manipulated culture have turned many programmers into nihilistic utopians with no ability to think things through on a systemic level. generative AI as a whole is nothing but an underhanded wage reduction tactic, but (by design) our industry doesn’t have the solidarity to fight it in any way that works (see the Writers’ Guild’s successful strike)

        • swlabr@awful.systems · 10 months ago

          Totally agree.

          IMO a better analogy would be clothing sweatshops rather than the power loom. Same utilitarian effect of making textiles more affordable; same ethical fuckery of exploiting labour.

        • datarama@awful.systems · 10 months ago

          The power loom analogy works very well, actually. Their spot in history is owed, in part, to who got to write the history books.

          The inventors and entrepreneurs who developed them spent lots of time spying on weavers - who, understandably, weren’t cooperative once they found out what the machines were intended to do. The quality of the looms’ products was so shoddy that the weavers’ first attempt at a legal challenge actually tried to have them defined as fraudulent: the weavers figured the poor-quality fabric would ruin the reputation of the English textile industry. In the early days, the looms’ output really did require frequent fix-up jobs.

          Not all of the entrepreneurs who built factories were monstrous assholes; some were quite considerate people who paid professional weavers a decent wage to work for them (though these weavers still often hated their new working conditions). Some did this out of legitimate concern for their communities (it was a smaller world, and many of them personally knew the very people whose jobs they were degrading), and some because they were afraid that Luddites would break into their factories and destroy the expensive machines. Most of them were put out of business; they were easy to undercut by owners who instead used indentured children taken from orphanages.

          They did drive the price of clothing down, but unfortunately that didn’t immediately translate into all-around economic prosperity: aside from all the weavers being put out of business, entire communities suffered economic collapse, because their economies were built around those weavers’ income.

          You’re right that programmers often have little class consciousness. I’m a union member myself (and so are most of my programmer friends and colleagues) - but unfortunately, I’m not sure how much some unions in a tiny country can do against the economic might of Silicon Valley.

          • self@awful.systemsOP · 10 months ago

            huh, explained like that the power loom analogy does much better than I thought in encapsulating this anxiety; at its core, it’s a (very justified) fear that we haven’t learned anything from history and that the loudest and most foolish of our profession are gleefully marching us towards an awful fate

            I’ve been doing some reading on the origins of technolibertarianism (though as with all my reading I’m far behind where I’d like to be) and it’s fucking insane the lengths Silicon Valley has gone to in order to make unionization a taboo topic among American tech workers

      • datarama@awful.systems · 10 months ago

        I’m not going to claim to be an LLM expert; I’ve used them a bit to try to figure out which of my tasks they can and can’t help with. I don’t like them, so I don’t usually use them recreationally.

        I’ll put my stakes on the table too. I’ve been programming for very close to my entire life; my mum taught me to code on a Commodore 64 when I was a tiny kid. Now I’m middle-aged, and I’ve spent my entire professional life either making software or teaching software development and/or software-adjacent areas (maths, security, etc.). I’ve always preferred to call myself a “programmer” rather than a “software engineer” or the like - I do have a degree, but I’ve always considered myself a programmer first, and a teacher/researcher/whatever second.

        I think the point made in the article we’re talking about is both too soon and too late. It’s too soon because - for all my worries about what LLMs and other AI might eventually be - at the current moment they’re definitely not AutoDeveloper 3000. I’ve mentioned my personal experiences. There’s a benchmark (SWE-Bench) of LLM performance on actual, real-world GitHub issues - they don’t do very well on those at all, at least for the time being. All professional programmers I personally know still program, and when they do use LLMs, they use them to generate example code rather than to write their production code for them - basically a Stack Overflow you can trust even less than actual Stack Overflow. None of them use the generated code directly - just as you wouldn’t with Stack Overflow. At the moment, they’re tools only; they don’t do well autonomously.

        But the article is also too late, because the kind of programming I got hooked on - the kind that became a lifelong passion - isn’t really what professional development is like anymore, and hasn’t been for a long time, since long before LLMs. I spend much more time maintaining crusty old code than writing neat, novel, greenfield code - and the detective work that goes into maintaining a large codebase is work LLMs are of little use for. Sure, they can explain code - but I don’t need a tool to explain what code does (I can read); I need to know why the code is there. The answer to that question is rarely found in the code itself: it’s usually a real-world consideration, an organizational factor, a weird interaction with hardware, or a workaround for an odd quirk of some other piece of software. I don’t spend my time coming up with elegant, neat algorithms and doing all the cool shit I dreamt of as a kid and learnt about at university - I spend most of it doing code detective work, fighting idiosyncratic build systems, and dealing with all the infuriating edge cases the real world has an infinite supply of (and that ML-based tools tend to struggle with). I also go to lots of meetings - many of which aren’t the dumb corporate rituals we all love to hate, but a bunch of professionals getting together to discuss the best way to solve a problem none of us knows exactly how to solve. The kind of programming I fell in love with isn’t something anyone would pay a professional to do anymore, and hasn’t been for a very long time.

        I haven’t been in web dev for over a decade. Most active web devs I know say that the impressive demos of GPT-4 making an HTML page from a napkin sketch would have been career-ending 15 years ago, but don’t even resemble what they spend their time doing at work now: they tear their hair out over infuriating edge cases, try to figure out why other people wrote specific bits of code, fight uncooperative tooling and frameworks, decipher vague and contradictory requirements, and maintain large, complex applications written in an uncooperative language.

        The biggest direct influence LLMs have so far had on me is to completely destroy my enthusiasm for publishing my own (non-professional) code or articles about code on the web.

      • locallynonlinear@awful.systems · 10 months ago

        Commoditization is a real market force, and yes, it will come for this industry as it has for others.

        Personally, I think we need to be much, much more creative and open to understanding ourselves and the potential of the future. It’s hard to know specifics, but there are broad domains to explore.

        Lately, I’ve been hacking at home with more hardware, creating interesting small-scale, low-energy-input systems that help me… garden. Analyzing soil samples, planning plots and low-energy irrigation, etc. It’s been fun because the work is less about in-depth programming and more about broad systems thinking. I even have ideas for building a small company off this. At that point, the programming alone won’t be the bottleneck.

        If it helps, as an engineer, take a step back and think about nature and how systems and the niches within them evolve. Nature isn’t actually in the business of replacing things due to redundancy; it’s in the business of compounding dependency via waste resources, and of shifting roles as a result. We need to be ready to creatively take our experience, perspective, and energy gradient to new places. It’s no different for any other part of nature.

        • datarama@awful.systems · 10 months ago

          I mean, we’ve been commoditizing our own skills for the entire duration of our profession: libraries, higher-level languages, open source. This is the nature of programming, really; we’d be bad at our jobs if we didn’t do it. Today’s afternoon hack would have taken an entire team several months of work a few decades ago, and many of the projects teams start today were unthinkable back then. This isn’t because we’re a ton better; it’s because a lot of the tough work has already been done.

          Historically, every major increase in programmer productivity has led to demand for software rising faster than the even-more-productive programmers could keep up with, though.

    • locallynonlinear@awful.systems · 10 months ago

      since there’s nothing you can do to stop some asshole company from pilfering your code.

      Currently, yes. Though I think there is a future where adversarial machine learning might be able to greatly increase the cost of training on pilfered data, by encoding human-generated inputs in a way that runs counter to training algorithms.

      https://glaze.cs.uchicago.edu/

      • corbin@awful.systems · 10 months ago

        Even if there were a Glaze/Nightshade for computer programs, it could be reverse-engineered just like any other code obfuscation. This is the difference between code and most other outputs of labor: code is syntactic and formal, allowing for decidable, objective analyses.

        • locallynonlinear@awful.systems · 10 months ago

          There’s a difference between “can” and “cost”. Code is syntactic and formal, true, but what about pseudocode that is perfectly intelligible to a human? There is, after all, a difference between sharing “compiled” code meant to be fed directly into a computer and sharing “conceptual” code meant to be contextualized into knowledge. After all, isn’t “code” just the formalization of language, with a different purpose and different trade-offs?

        • datarama@awful.systems · 10 months ago

          Well, some analyses are decidable, anyway. ;-)

          But you’re right, of course. The only real data poisoning you could do with code is sharing deliberately bad code … but then you’re also not sharing useful open source code with your fellow humans; you’re just spamming.

          At any rate, I’m not sure that future major gains in LLM coding ability are going to come from simply shoving more code in. The ones we have today have already ingested a substantial chunk of all the open-source code on the public web, and (as the SWE-Bench results I’ve shared elsewhere show) they still struggle unless they’re substantially guided by a human.