• Phanatik@kbin.social

    The reason GPT is different from those examples (not all of them, but I’m not going into that) is that the malicious action is on the part of the user. GPT gives you an output that it has plagiarised. The user can take that output and submit it as their own, which is further plagiarism, but that doesn’t absolve GPT. The problem is that GPT doesn’t cite its sources, which would be very helpful for understanding where the information comes from and for fact-checking it.

    • Heratiki@lemmy.ml

      While GPT was trained on that material, it does not produce plagiarised results. It can reuse phrases, but only because those phrases recur across multiple examples, not because they come from one specific work. It learns that b comes after a, c comes after b, d comes after c, and then will sometimes reproduce “abcd” because it’s normal for that sequence to appear in the context. That is not plagiarism but something more akin to a human’s guiltless use of probability. If it plagiarises, it does so by coincidence, driven by context.

      • Phanatik@kbin.social

        How it constructs sentences doesn’t mean the phrases it reproduces aren’t plagiarism. Plagiarism doesn’t care about probability of occurrence; it looks at how closely one work resembles another, and the more similar they are, the more likely it is that one was plagiarised.

        You can only escape a plagiarism claim by proving that you didn’t copy intentionally or by citing your sources.

        GPT has no defence, because it has to learn from its sources in order to learn the probabilities of phrases being constructed together. It also doesn’t cite those sources, so in my eyes, if it’s found to be plagiarising, it has no defence.