• FaceDeer@fedia.io · 2 months ago

      3,226 suspected images out of 5.8 billion. About 0.00006%. And probably mislabeled to boot, or it would have been caught earlier. I doubt it had any significant impact on the model’s capabilities.

    • wandermind@sopuli.xyz · 2 months ago

      I know. So to confirm: you’re saying that you’re okay with AI-generated CSAM as long as the training data for the model didn’t include any CSAM?

      • xmunk@sh.itjust.works · 2 months ago

        No, I’m not. I still have ethical objections, and I don’t believe CSAM could be generated without some CSAM in the training set. I also think it’s generally problematic to sexually fantasize about underage persons, though I know that’s an extremely unpopular opinion here.

        • wandermind@sopuli.xyz · 2 months ago

          So why are you posting all over this thread about CSAM being included in the training set, if in your opinion that’s ultimately irrelevant to the topic of the post and this discussion: the morality of using AI to generate CSAM?

          • xmunk@sh.itjust.works · 2 months ago

            Because there are claims all over this thread that AI CSAM can be generated without actual CSAM in the training data. We currently don’t have AI CSAM that is taint-free, and it’s unlikely we ever will, given how generative AI works.