OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.

In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated prompts, often including lengthy excerpts of articles, to induce the model to regurgitate them. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.

OpenAI claims it has attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature called Browse that unintentionally reproduced content.

  • LWD@lemm.ee · 6 months ago

    LLMs cannot learn or create like humans, and even if they somehow could, they are not humans. So the comparison to human creators expounding upon a genre is false because the premises on which it is based are false.

    Perhaps you could compare it to a student getting blackout drunk, copying Wikipedia articles and pasting them together, using a thesaurus app to change a few words here and there… And in the end, the student doesn’t know what they created, has no recollection of the sources they used, and the teacher can’t detect whether it’s plagiarized or who from.

    OpenAI made a mistake by taking data without consent, not just from big companies but from individuals who are too small to fight back. Regurgitating information without attribution is gross in every regard, because even if you don’t believe in asking for consent before taking from someone else, you should probably ask for a source before using this regurgitated information.

    • ricecake@sh.itjust.works · 6 months ago

      Well, machine learning algorithms do learn; it’s not just copy-paste and a thesaurus. It’s not exactly the same as how people learn, but arguing that it’s entirely different is also wrong.
      It isn’t a big database full of copyrighted text.

      The argument is that it’s not wrong to look at data that was made publicly available when you’re not making a copy of the data.
      It’s not copyright infringement to navigate to a webpage in your browser, even though doing so makes your computer download the page, process all of its contents, render them to the screen, and hold onto that download for a finite but indefinite period, during which you can perform whatever operations you like on the data.
      You can even take notes on the data and keep those indefinitely, including using that derivative information to create your own similar works.
      The NYT explicitly publishes articles in a format designed to be downloaded, processed and have information extracted from that download by a computer program, and then to have that processed information presented to a human. They just didn’t expect that the processing would end up looking like this.
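      The download-and-process pipeline described above (fetch a page, parse it, extract the information, present it to a human) can be sketched with Python's standard library. This is a minimal illustration only; the HTML snippet is a made-up stand-in for a downloaded page, not actual article markup, and a real browser or scraper would first fetch it over HTTP (e.g. with urllib.request):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text from an HTML document, skipping script/style."""
    def __init__(self):
        super().__init__()
        self.chunks = []   # extracted text fragments
        self._skip = 0     # depth inside <script>/<style> elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        # Keep only non-empty text that isn't inside script/style.
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

# Stand-in for a page the computer has already downloaded.
page = ("<html><head><style>p{color:red}</style></head>"
        "<body><h1>Headline</h1><p>Article text.</p></body></html>")

parser = TextExtractor()
parser.feed(page)
print(" ".join(parser.chunks))  # → Headline Article text.
```

      The point of the sketch is that "processing" a downloaded page into extracted text is exactly what every browser and feed reader does; the dispute is only about what happens to the extracted information afterward.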

      The argument doesn’t require that we accept that a human’s and a computer’s systems for learning be held to the same standard, or that we can’t differentiate between the two; it hinges on the claim that this is just an extension of what we already find it reasonable for a computer to do.
      We could certainly hold that generative AI is a different and new category for copyright law, but that’s very different from saying that their actions are unacceptable under current law.

      • LWD@lemm.ee · 6 months ago

        Their actions are unacceptable, whether or not they’re technically legal. Just like when the BBC intentionally plagiarized the work of Brian Deer, except at least in his case they had the foresight to try asking first, and not just to assume he consented because of the way the data looked.

        The NYT explicitly publishes articles in a format designed to be downloaded, processed and have information extracted from that download by a computer program, and then to have that processed information presented to a human.

        Speaking of overutilizing a thesaurus, you buried the lede: The text is designed for a human to read.

        I don’t like the “just look at it, it was asking for it” defense because that abuses publishers who try to present things in a DRM free fashion for their readers:

        “Our authors and readers have been asking for this for a long time,” president and publisher Tom Doherty explained at the time. “They’re a technically sophisticated bunch, and DRM is a constant annoyance to them. It prevents them from using legitimately-purchased e-books in perfectly legal ways, like moving them from one kind of e-reader to another.”

        But DRM-free e-books that circulate online are easy for scrapers to ingest.

        The SFWA submission suggests “Authors who have made their work available in forms free of restrictive technology such as DRM for the benefit of their readers may have especially been taken advantage of.”

        • ricecake@sh.itjust.works · 6 months ago

          Have you deleted and reposted this comment three times now, or is something deeply wrong with your client?

        • ricecake@sh.itjust.works · 6 months ago

          I don’t think it’s a question of saying they’re “asking for it”, that just feels like trying to attach an emotionally charged crime to a civil copyright question.
          The technology was designed to transmit the data to a computer for ephemeral processing, and that’s how it’s being used.
          It was intended to be used for human consumption, but their intent has little to do with whether what was done was fair.
          If you give something away with the hopes people will pay for more, and instead people take what you gave them under the exact terms you specified, it’s not fair to sue them.

          The NYT is perfectly happy to have their content used for algorithmic consumption in other cases where people want a consistently formatted, grammatically correct source of information about current events.

          The question of whether it’s okay or not is one that society is still working out. Personally, I don’t see a problem with it. If it’s available to anyone, they can do what they want with it. If you want to control access to it, you need to actually do that by putting up a login or in some way getting people to agree to those stipulations.

          Speaking of overutilizing a thesaurus

          I’m sorry some of my words were too big for you.