• Sandra@idiomdrottning.org
    1 year ago

    AI will be everywhere going forward. And that’s fine.

    The issue is more how it will be used.

    There are two other pretty big problems. One is that there’s a huge climate impact with runaway energy use, and the other is that it’s a very expensive means of production which leads to further concentration of wealth & power.

    • lloram239@feddit.de
      1 year ago

      I don’t think energy use is a serious problem; that just seems to get thrown around because it’s trendy. Does it even matter compared to gaming or crypto? It’s also an easily solved problem: just install more solar. Training the initial model isn’t time critical or dependent on location, so there is a lot of flexibility here that you wouldn’t have in other applications. Meanwhile, running the already trained model is very cheap; it’s literally the most efficient way to solve the problem. Try to replicate what StableDiffusion is doing with a 3D renderer and you’d need to burn a heck of a lot more cycles, as well as hire a truckload of artists, who would all use substantially more energy.

      Basically, people are going to use AI when it makes better use of time/money/energy than the competition. Nobody is going to use AI to burn energy just for the fun of it, it has to improve on what we already have.

      As for the concentration of power and wealth, that can certainly happen to some degree, but I could also easily see that get balanced out by the amount of freedom that local models give. Right now I can generate subtitles for video with Whisper, generate voices with tortoise-tts, generate images with StableDiffusion, as well as play around with LLMs on my local machine with OpenSource’ish models. Nobody controls what I do and I am not paying for anything. There are obviously still things those models can’t do: local LLMs aren’t up to GPT-4, but already quite close to ChatGPT for some tasks; StableDiffusion isn’t quite as good as Midjourney for plain txt2img, but state-of-the-art in a lot of other aspects (custom training, ControlNet, LoRA, etc.). But for a lot of tasks those models are already “good enough” and they are constantly getting better. Meanwhile ChatGPT and BingChat are so heavily censored that they flat out don’t work for a lot of tasks, even seemingly simple things like summarizing movies (too much violence). Nobody even talks about DALL-E 2 anymore, since it’s been surpassed by everything else out there.

      Now, centralization can still happen. Google is sitting on more data than everybody else, and if they make some multi-modal model trained on it all, that could be a very potent offering. But for the time being at least, everything that gets released is outclassed by something else within a few months. Nothing in the AI space so far lasts very long, and the fact that AI models can use other AI models to improve themselves hopefully keeps it that way for a while. With the censorship going on, I also have a hard time seeing local models disappearing anytime soon, as so far none of the commercial offerings has had the balls to just build a model that knows everything.

      • Sandra@idiomdrottning.org
        1 year ago

        I don’t think energy use is a serious problem; that just seems to get thrown around because it’s trendy. Does it even matter

        Yes, since it’s a rapidly growing field.

        compared to gaming or crypto?

        Proof-of-work based tokens are the enemy and not what we should be comparing things to.

        It’s also an easily solved problem, just install more solar.

        It’s a little trickier than that. Renewable doesn’t mean infinite; we still need to limit consumption to sustainable rates. Also, there’s the hardware in the rigs themselves: solvents, wiring, metals, plastic…

        Training the initial model isn’t time critical or dependent on location, so there is a lot of flexibility here that you wouldn’t have in other applications.

        That’s a good point. It’s less vulnerable to wind or light conditions.

        Meanwhile running the already trained model is very cheap, it’s literally the most efficient way to solve the problem.

        Yep. I never argued against that part. That’s great, as long as we can hold it together and not make new models every fifteen minutes just to keep up with the Joneses. But there’s also a drawback to the “expensive to train, cheap to run” model: that’s the very thing driving the wealth concentration of big capital like Google.

        Basically, people are going to use AI when it makes better use of time/money/energy than the competition. Nobody is going to use AI to burn energy just for the fun of it, it has to improve on what we already have.

        That would be a perfect argument if environmental externalities were fully accounted for in transactions, but they’re not. Using energy is cheaper than it “should” be, given the environmental impact of that energy use. The old “if I sell you a can of gas, the price of the forest that got wrecked by that gas isn’t factored in” problem. Even otherwise laissez-faire stalwarts like Hayek acknowledged this.
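        A toy sketch of that externality argument, with made-up numbers (mine, not from the thread): with a straight-line demand curve and constant per-unit costs, buyers who face only the private cost consume more than would be optimal once the environmental damage is counted.

```python
# Hypothetical numbers for illustration only: linear demand P = intercept - Q,
# a constant private cost, and an environmental cost per unit that nobody pays.

def equilibrium_quantity(demand_intercept, unit_cost):
    # Buyers keep consuming until willingness to pay drops to the cost they face.
    return demand_intercept - unit_cost

DEMAND_INTERCEPT = 100  # willingness to pay for the very first unit
PRIVATE_COST = 20       # what the buyer actually pays per unit of energy
EXTERNAL_COST = 30      # climate damage per unit, priced into nothing

q_market = equilibrium_quantity(DEMAND_INTERCEPT, PRIVATE_COST)
q_social = equilibrium_quantity(DEMAND_INTERCEPT, PRIVATE_COST + EXTERNAL_COST)

print(q_market, q_social)  # 80 50: the market consumes 30 units "too much"
```

        A surcharge equal to EXTERNAL_COST (a Pigouvian tax) would make the two quantities coincide, which is what “accounting for externalities” means in price terms.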

        As for the concentration of power and wealth, that can certainly happen to some degree, but I could also easily see that get balanced out by the amount of freedom that local models give.

        Right; once it does get truly democratized with open source models, we can have a post-scarcity, pay-it-forward future where the step from dream to reality is smaller than ever before.

        We’ve been through back-and-forths like this before. The big-data mainframe era was replaced by the PC. Then things got centralized again in the age of big dialup services. Then with broadband everyone could run a server. And then the web 2.0 debacle happened and we got a silo era where people voluntarily started using Google Search and Facebook Messenger and the like, handing big capital ownership of our platforms.

        You seem like you have your head on your shoulders (you’re on feddit, after all), but among the general population there’s a lack of awareness of these power- and wealth-concentration issues.

        Now centralization can still happen, Google is sitting on more data than everybody and if they make some multi-modal model that is trained on it all, that could be a very potent offering.

        Yes, and I want a plan for that.

        Nothing in the AI space so far lasts very long

        Which is why we’re risking runaway energy use and climate impact.

        • lloram239@feddit.de
          1 year ago

          World population and our standard of living have improved drastically over those years too; we aren’t burning that additional energy for nothing.

          • Sandra@idiomdrottning.org
            1 year ago

            Yes, the fossil economy has enabled society as a whole to create temporary wealth; the past has borrowed from the present. It’s going to be a rough comedown.

            We haven’t been, and still aren’t, commensurately accounting for our environmental externalities.

    • kmkz_ninja@lemmy.world
      1 year ago

      expensive means of production which leads to further concentration of wealth & power.

      That’s only an issue if we continue this brigade of trying to protect artists at everyone’s expense. Getting enough data to make a usable LLM will be impossible for all but the big players.

      • Sandra@idiomdrottning.org
        1 year ago

        As a writer and painter, I’ve long been opposed to copyright and have been releasing stuff under Creative Commons licenses for over a decade. So don’t misinterpret me as agreeing with the brigade.

        Livelihood for artists is important, but so is a livelihood for everyone, and I’ve been arguing against the flawed “copyright is good for artists” position for decades; we’ve been having this exact same fight against copyright since Napster, or even the cassette era. Gates’ infamous “Open Letter to Hobbyists” was in 1976, and that hasn’t changed.

        There’s a lot of starving artists out there, and a lot of rich publishers. It’s difficult getting food, shelter, medicine and other resources to go around, down here on Earth.

        In a world already deprived by such scarcity, we’d be better off without the shackles of artificial scarcity that copyright introduces.

        I say all that as a lead in because I’m just about to absolutely disagree with part of the following:

        That’s only an issue if we continue this brigade of trying to protect artists at everyone’s expense.

        As I wrote above, I agree with you re the so-called brigade and have done so publicly in the past, too.

        The myth that IP is a good way to sustain artists’ lives economically is part of the same bugged market-capitalism system that has led to the extreme wealth concentration (Google, Microsoft, Amazon) in the first place.

        But what you are replying to, what I wrote, has nothing to do with the pro-copyright stance. I wrote that it’s a very expensive means of production which leads to further concentration of wealth & power.

        Getting enough data to make a usable LLM will be impossible for all but the big players.

        Yeah, if LAION gets shut down. LAION is freely available; the data is not the problem. The resources are: hardware, electricity, cooling, e-waste and so on. And I’m not saying startups and garage operations can’t get their hands on this kind of tech if they can profit from it, as we’ve seen in the proof-of-work “mining” debacle. It’s that since environmental externalities are under-accounted for, it’ll lead to climate-wrecking runaway resource use.

        I have a lot of sympathy for the artists on the other side who are protesting this with whatever futile li’l clogs they can throw in the cogs; not because I think they’re right about who can learn from art (I disagree with them there), but because they’re a canary in the coal mine for how big capital can use automation to replace workers, and how that’ll lead to an even bigger wealth gap (already at a historical high), mass unemployment, and economic desperation.

        As Amelia Earhart put it in 1935: “Obviously, research regarding technological unemployment is as vital today as further refinement or production of labor-saving and comfort-giving devices.” We still haven’t figured that out, and automation is eating at artists, writers, programmers, game designers, economists, cooks, doctors, drivers, postal workers, psychologists: no one is safe. We need to figure out a way to distribute tasks and resources differently in a world where there are a heck of a lot fewer tasks and a lot more digital resources (while physical resources like fuel and food and shelter are still limited). Politics is also going to get harder, since money correlates with power, no matter how much we’ve been trying to fight that corruption.

        Markets use prices to distribute resources, and prices are set by supply and demand. That started breaking down in the cassette and floppy disk age, where making the initial recording was very expensive but making copies of it was cheap. Big capital has tried to patch the hole to their advantage, at the expense of the public, by introducing artificial scarcity in the form of an exclusive right to make copies: “copyright”.
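        The “expensive master, cheap copies” breakdown can be put in toy numbers (invented for illustration): once competition pushes the price toward the near-zero marginal cost of a copy, no sales volume ever pays back the fixed cost of making the original.

```python
# Hypothetical figures: a large one-time fixed cost (recording the master,
# or training the model) and a tiny per-copy marginal cost.

def profit(price, copies_sold, fixed_cost, marginal_cost):
    # Margin earned on every copy, minus the one-time cost of the original.
    return copies_sold * (price - marginal_cost) - fixed_cost

FIXED_COST = 1_000_000  # making the original
MARGINAL_COST = 0.01    # making one more copy (near zero for digital goods)

# With the price competed down to marginal cost, the loss equals the fixed
# cost no matter how many copies are sold:
print(profit(MARGINAL_COST, 10_000_000, FIXED_COST, MARGINAL_COST))     # -1000000.0
print(profit(MARGINAL_COST, 1_000_000_000, FIXED_COST, MARGINAL_COST))  # -1000000.0
```

        Copyright’s artificial scarcity is one patch for exactly this: it lets the seller hold the price above marginal cost long enough to recoup the fixed cost.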

        And now it’s getting twisted one more turn, since now the initial work itself is easy to make, but the models, the makers themselves, are wholly owned by big corporations like Microsoft and Google. Capitalism was bad before. It’s going to get cataclysmic now that the workers are wholly owned machines.

        @kmkz_ninja@lemmy.world @boardgames@feddit.de

        • Turun@feddit.de
          1 year ago

          For large language models you have a good point. The space is dominated by the closed-source company OpenAI, and the open source AI models don’t come close. This is indeed a worrying development. The current models are simply really, really expensive to run, so hobbyists can’t contribute in a meaningful way.

          But for image generation you basically only have Stable Diffusion and Midjourney. And I’d argue Stable Diffusion is much more widely used, due to the control it gives and the fact that it can easily be run on consumer hardware. Customizing a model is also possible, and takes only a few hours on a modern gaming computer.