• Worx@lemmynsfw.com · 8 months ago

    If anyone is interested in the ethics of using AI, this is a Philosophy Tube video that I really enjoyed.

    https://youtu.be/AaU6tI2pb3M?si=JIG7JiCxfpnvdIpn

    I did go into the video with the background of enjoying Philosophy Tube videos and having a degree in AI, as well as a passion for the ethics and epistemology of AI, so maybe not everyone will get as much out of it as I did.

    • kandoh@reddthat.com · 8 months ago

      Because DnD is a ‘lifestyle brand’ and it needs to operate within the boundaries of what its customers consider morally correct.

      • EssentialCoffee@midwest.social · 8 months ago

        The post for this is coming from the Magic the Gathering account. D&D and Magic aren’t the same. D&D had an AI art thing months ago.

        We do all dislike WotC though.

  • iegod@lemm.ee · 8 months ago

    Too big of a stink is being made about this. AI isn’t the enemy. We should be embracing this.

    • TheBest@midwest.social · 8 months ago

      For you and me, just hobbyists, AI is a godsend, and I agree.

      In a professional setting, though, it would come across as much more genuine to consumers, and as truly creative, if the art were made without generative AI filling in the background detail.

  • CaptObvious@literature.cafe · 8 months ago

    They will likely soon discover, as everyone is discovering, that the genie is out of the bottle. Their only real choice is to learn how to live with it. That may mean requiring AI-generated material to be labeled and firing anyone who fails to do so. But simply banning it isn’t likely to work for very long.

    • Microw@lemm.ee · 8 months ago

      No, no, they should fire that marketing guy who used AI because he is destroying human jobs! Like his own job that he now lost!

    • mrbubblesort@kbin.social · 8 months ago

      Agreed, it’s already starting to climb out of the uncanny valley, and it will only continue to get better. Within another 5–10 years I doubt most people will be able to distinguish it from something a real person created at all.

      In this case though, I honestly wonder if anyone really cares if WotC or any other company uses it, considering any sane person avoids advertisements like the plague nowadays.

      • FreeFacts@sopuli.xyz · 8 months ago

        Within another 5~10 years I doubt most people will be able to distinguish it from something a real person created at all.

        Maybe, or maybe not. The issue with machine learning software is that, despite how it is marketed, there is zero intelligence in it. These models basically work via trial and error, and to know they have erred they need reference material. With an increasing amount of generated content flooding the internet, separating real reference material from generated material will become difficult, as in not cost-effective. So we will end up with generated content teaching the models to generate content, and the progress we have seen will be effectively halted.
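
        A minimal sketch of that feedback loop, with a Gaussian fit standing in for a generative model (the setup and numbers are purely illustrative, not how any real image model is trained): each generation is fitted only to the previous generation's outputs, with tail samples dropped to mimic a generator that favours typical-looking results, and the spread of what it can produce shrinks generation by generation.

        ```python
        import random
        import statistics

        def fit(samples):
            # "Train" the toy model: estimate mean and spread from the data it sees.
            return statistics.mean(samples), statistics.stdev(samples)

        def generate(mu, sigma, n=1000):
            # "Generate" from the toy model, biased toward typical-looking outputs:
            # samples far out in the tails are dropped, a crude stand-in for a
            # generator that reproduces the average case better than the edge cases.
            out = [random.gauss(mu, sigma) for _ in range(n)]
            return [x for x in out if abs(x - mu) < 1.5 * sigma]

        random.seed(42)

        # Generation 0 is fitted to "real" reference material.
        real_data = [random.gauss(0.0, 1.0) for _ in range(1000)]
        mu, sigma = fit(real_data)

        for gen in range(10):
            print(f"gen {gen}: spread of outputs = {sigma:.3f}")
            # Every later generation only ever sees generated content.
            mu, sigma = fit(generate(mu, sigma))
        ```

        Real models and data are far messier, so whether they would degrade anywhere near this fast is an open question; the sketch only shows the direction of the loop.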

        • Spzi@lemm.ee · 8 months ago

          there is zero intelligence in them. They basically work via trial and error

          The big philosophical question (with practical implications) is whether our own intelligence developed any differently.

      • Exocrinous@lemm.ee · 8 months ago

        The reasons include the fact that training data is often used without a proper licence to the work, which is plagiarism. I’m fine with little guys stealing from big corporations, but in this case it’s big corporations profiting, and the little guys are the ones who don’t have the resources to defend themselves.

          • Exocrinous@lemm.ee · 8 months ago

            Yes, I would be pro-AI in a communist society. I am pro-AI when AI is used by the proletariat.

        • Stumblinbear@pawb.social · 8 months ago

          That’s not what plagiarism means. At the very worst it could be a copyright violation, but they’re not really distributing someone else’s work without permission. A licensing issue? Possibly.

            • Exocrinous@lemm.ee · 8 months ago

              You’re confusing plagiarism with copyright infringement. Copyright infringement is what you’re describing. Technically, some of the most textbook cases of severe academic plagiarism don’t infringe copyright at all. Plagiarism is taking someone’s ideas without proper credit. In academic spaces, plagiarism is usually not a legal dispute but a matter of integrity.

              These AIs plagiarise by nature, because they are incapable of saying which of the data in their training set was used in the creation of each of their works.

  • caseyweederman@lemmy.ca · 8 months ago

    Their digital art standards have made their card art look so bland and samey for several years. It may as well have been AI generated all this time.
    Bring back Foglio! Bring back weird art!