I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier in particular is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • Toes♀@ani.social
    5 months ago

    Some options:

    It’s just a better Siri, still just as soulless.

    The Chinese room thought experiment, if you think they would understand it.

    Imagine the computer playing Mad Libs with itself and picking the least funny answers to present.

    Imagine if you tore every page out of every book in the library (about the topics you mentioned), shuffled them, and tried to hand out whichever page mostly makes sense after the last page you handed out; now imagine doing that with individual letters.

    A demonstration of its capacity to make mistakes, especially continuity errors.

  • NeoNachtwaechter@lemmy.world
    5 months ago

    idea that “it makes convincing sentences, but it doesn’t know what it’s talking about”

    Like a teenager who has come into a new group and is now trying so hard to fit in :-)

      • Hucklebee@lemmy.worldOP
        5 months ago

        I commented something similar on another post, but this is exactly why I find this phenomenon so hard to describe.

        A teenager in a new group still has some understanding and a mind. They know the meaning of many of the words that are said. Sure, some catchphrases might be new, but general topics shouldn’t be too hard to follow.

        This is nothing like genAI. GenAI doesn’t know anything at all. It has (simplified) a list of words that are somehow connected to each other. But AI has no concept of a wheel, of what round is, what rolling is, what rubber is, what an axle is. NO understanding. Just words that happen to describe all of it. For us humans it is so difficult to grasp that something can use language without knowing ANY of the meaning.

        How can we describe this so our brains accept that you can have language without understanding? The Chinese Room experiment comes close, but I think it’s also quite complicated to explain.

        • NeoNachtwaechter@lemmy.world
          5 months ago

          How can we describe this so our brains make sense that you can have language without understanding?

          I think it is really impossible to describe in just a few simple words.

        • Zos_Kia@lemmynsfw.com
          5 months ago

          I think a flaw in this line of reasoning is that it assigns a magical property to the concept of knowing. Do humans know anything? Or do they just infer meaning from identifying patterns in words? Ultimately this is a spiritual question and has no place in a scientific conversation.

          • bcovertigo@lemmy.world
            5 months ago

            It’s valid to point out that we have difficulty defining knowledge, but the output from these machines is inconsistent at a conceptual level, and you can easily get them to contradict themselves in the spirit of being helpful.

            If someone told you that a wheel can be made entirely of gas, would you have confidence that they have a firm grasp of a wheel’s purpose? Tool use is a pretty widely agreed-upon marker of intelligence, so failing to grasp the purpose of a thing it can describe at great length and in exhaustive detail, while also making boldly incorrect claims on occasion, should raise an eyebrow.

  • Tar_Alcaran@sh.itjust.works
    5 months ago

    It’s a really well-trained parrot. It responds to what you say, and then it responds to what it hears itself say.

    But despite knowing which sounds go together based on which sounds it heard, it doesn’t actually speak English.

  • BlameThePeacock@lemmy.ca
    5 months ago

    It’s just fancy predictive text, like when you’re texting on your phone. It guesses what the next word should be, just for a lot more complex topics.
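
    To make the “predictive text” idea concrete, here’s a toy sketch (the tiny corpus and the literal word-counting are made up for illustration; a real LLM uses a neural network over tokens, but the job is the same: guess the next word):

    ```python
    # Count which word tends to follow which, then repeatedly pick the
    # single most likely next word: predictive text in a few lines.
    from collections import Counter, defaultdict

    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the ball ."
    ).split()

    # next_words["the"] counts every word seen right after "the"
    next_words = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_words[current][following] += 1

    def continue_text(word, length=8):
        out = [word]
        for _ in range(length):
            followers = next_words.get(out[-1])
            if not followers:
                break
            out.append(followers.most_common(1)[0][0])  # greedy: take the top word
        return " ".join(out)

    print(continue_text("the"))  # prints: the dog sat on the dog sat on the
    ```

    It produces fluent-looking local word order with zero grasp of what a cat or a mat is; the phone keyboard version and an LLM differ in scale and sophistication, not in the basic “predict the next word” job.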

    • k110111@feddit.de
      5 months ago

      It’s like saying an OS is just a bunch of if-then-else statements. While that is true, in practice it is far, far more complicated.

    • kambusha@sh.itjust.works
      5 months ago

      This is the one I got from the house to get the kids to the park and then I can go to work and then I can go to work and get the rest of the day after that I can get it to you tomorrow morning to pick up the kids at the same time as well as well as well as well as well as well as well as well as well… I think my predictive text broke

  • rufus@discuss.tchncs.de
    5 months ago

    It’s like your 5-year-old daughter relaying to you what she made of something she heard earlier.

    That’s my analogy. ChatGPT has roughly the intellect, and the ability to differentiate between fact and fiction, of a 5-year-old. But it combines that with the writing style of a 40-year-old with an uncanny love of mixing adjectives and sounding condescending.

  • Hucklebee@lemmy.worldOP
    5 months ago

    After reading some of the comments and pondering this question myself, I think I may have thought of a good analogy that at least helps me (even though I know fairly well how LLMs work).

    An LLM is like a car on the road. It can follow all the rules, like braking at a red light, turning, signaling, etc. However, a car has NO understanding of any of the traffic rules it follows.

    A car can even break those rules while behaving exactly as designed (if you push the gas pedal at a red light, the car is not in the wrong, because it doesn’t KNOW the rules; it just acts on the input).

    Why this works for me is that when I give examples of human or animal behaviour, I automatically ascribe some sort of consciousness. An LLM has no consciousness. This idea is exactly what I want to convey. If I think of a car and traffic rules, it is obvious to me that a car has no concept of the rules, but is still somehow part of them.

    • 1rre@discuss.tchncs.de
      5 months ago

      Thing is, consciousness (and emotions, and feelings in general) is just chemicals affecting electrical signals in the brain… If an ML model such as an LLM uses parameters to affect electrical signals through its nodes, then is it on us to say it can’t have consciousness, or feel happy or sad, or even pain?

      Sure, the inputs and outputs are different, but when you have “real” inputs it’s possible that the training data for “weather = rain” is more downbeat than for “weather = sun”, so is it reasonable to say that the model gets depressed when it’s raining?

      The weightings will change, leading to a change in the electrical signals, which emulates pretty closely what happens in our heads.

      • Hucklebee@lemmy.worldOP
        5 months ago

        Doesn’t that depend on your view of consciousness and whether you hold a naturalistic view?

        I thought science is finding more and more that a 100% naturalistic worldview is hard to maintain. (E: I’m no expert on this topic, and the information and podcasts I listen to are probably very biased towards my own view on this. The point I’m making is that “we are just neurons” is more a disputed topic of debate than established fact once you dive a little into neuroscience.)

        I guess my initial question is almost more philosophical in nature and less deterministic.

        • huginn@feddit.it
          5 months ago

          I’m not positive I’m understanding your term “naturalistic”, but no neuroscientist would say “we are just neurons”. Similarly, no neuroscientist would deny that neurons are a fundamental part of consciousness and thought.

          You have plenty of complex chemical processes interacting with your brain constantly - the neurons there aren’t all of who you are.

          But without the neurons there: you aren’t anyone anymore. You cease to live. Destroying some of those neurons will change you fundamentally.

          There’s no disputing this.

  • skillissuer@discuss.tchncs.de
    5 months ago

    it’s a spicy autocomplete. it doesn’t know anything, does not understand anything, it does not reason, and it won’t stop until your boss thinks it’s good enough at your job for “restructuring” (it’s not). any illusion of knowledge comes from the fact that its source material is mostly factual. when you drift off into niche topics, or something that was missing from the training data entirely, spicy autocomplete does what it does best: it makes shit up. some people call this hallucination, but it’s closer to confidently making shit up while not knowing any better. humans do that too, but at least they know when they’re doing it

    • Hucklebee@lemmy.worldOP
      5 months ago

      Hmm, now that I read this, I have a thought: it might also be hard to wrap our heads around this issue because we all talk about AI as if it is an entity. Even the sentence “it makes shit up” gives AI some kind of credit that it “thinks” about things. It doesn’t make shit up, it is doing exactly what it is programmed to do: create good sentences. It succeeds.

      Maybe the answer is just to stop talking about AIs as “saying” things, and start talking about GenAI as “generating sentences”? That way, we emotionally distance ourselves from “it”, and it’s more difficult to ascribe consciousness to an AI.

  • Ziggurat@sh.itjust.works
    5 months ago

    Have you played that game where everyone writes a subject and puts it on one stack of paper, then everyone puts a verb on a second stack, then an object on a third stack, and you can even add a place or whatever on a fourth stack? You end up with fun sentences like “a cat eats Kevin’s brain on the beach”. It’s the kind of stuff (pre-)teens do to have a good laugh.

    ChatGPT somehow works the same way, except that instead of 10 papers in 5 stacks, it has millions of papers in thousands of stacks, and depending on the “context” it chooses which stack it draws paper from (to take an ELI5 analogy).
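
    That analogy maps surprisingly directly onto a toy program (the mini corpus and the two-word “context” below are invented; real models learn weights instead of keeping literal counts):

    ```python
    # The context (the last couple of words) decides which "stack" we draw
    # from; each draw is random, weighted by how often that word followed
    # this context in the source text.
    import random
    from collections import Counter, defaultdict

    corpus = (
        "the cat eats the fish . kevin eats his lunch on the beach . "
        "the cat sleeps on the beach . kevin reads on the beach ."
    ).split()

    CONTEXT = 2  # how many previous words form the "context"
    stacks = defaultdict(Counter)
    for i in range(len(corpus) - CONTEXT):
        stacks[tuple(corpus[i:i + CONTEXT])][corpus[i + CONTEXT]] += 1

    def draw(context):
        stack = stacks.get(tuple(context))
        if not stack:
            return None  # no paper left in this stack
        words, counts = zip(*stack.items())
        return random.choices(words, weights=counts)[0]  # weighted random pick

    text = ["the", "cat"]
    for _ in range(6):
        word = draw(text[-CONTEXT:])
        if word is None:
            break
        text.append(word)
    print(" ".join(text))  # e.g. "the cat eats the fish . kevin reads"
    ```

    Same trick, just scaled up to millions of “papers”, a much longer context, and learned weights instead of counts.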

    • Hucklebee@lemmy.worldOP
      5 months ago

      I think what makes it hard to wrap your head around is that sometimes this text is emotionally charged. What I notice is that it’s especially hard if an AI “goes rogue” and starts saying sinister and malicious things. Our brain immediately jumps to “it has bad intent”, when in reality it’s just drawing on some Reddit posts where it happened to connect some troll messages or extremist texts.

      How can we decouple emotionally when it feels so real to us?

  • FuglyDuck@lemmy.world
    5 months ago

    It’s basically regurgitating things.

    It’s trained on an immense amount of data, and in that data, 89% of the time someone asks “what is the answer to the ultimate question of life, the universe, and everything?” the answer is “42”, together with an explanation that it’s a reference to Douglas Adams’ Hitchhiker’s Guide to the Galaxy.

    So, when you ask that… it just replies 42, and gives a mash-up of information mostly consistent with the pop culture reference.

    It has no idea what “42” is, whether it’s a real question or a real answer, or entirely a joke. Only that that’s how people in its training data responded.

    (In this example, the other 11% of people are either idiots who’ve never read the book (losers) or people making some other random quip.)

  • TheBananaKing@lemmy.world
    5 months ago

    Imagine making a whole chicken out of chicken-nugget goo.

    It will look like a roast chicken. It will taste alarmingly like chicken. It absolutely will not be a roast chicken.

    The sad thing is that humans do a hell of a lot of this, a hell of a lot of the time. Look how well a highschooler who hasn’t actually read the book can churn out a book report. Flick through, soak up the flavour and texture of the thing, read the blurb on the back to see what it’s about, keep in mind the bloated over-flowery language that teachers expect, and you can bullshit your way to an A.

    Only problem is, you can’t use the results for anything productive, which is what people try to use GenAI for.

  • patatahooligan@lemmy.world
    5 months ago

    Imagine you were asked to start speaking a new language, e.g. Chinese. Your brain happens to work quite differently from everyone else’s. You have immense capabilities for memorization and computation, but not much else. You can’t really learn Chinese with this kind of mind, but you have an idea that plays right into your strengths. You will listen to millions of conversations by real Chinese speakers and mimic their patterns. You make notes like “when one person says A, the most common response by the other person is B”, or “most often after someone says X, they follow it up with Y”. Then you go into conversations with Chinese speakers and just perform these patterns. It’s all just sounds to you. You don’t recognize words, and you can’t even tell from context what’s happening. If you do that well enough, you are technically speaking Chinese, but you will never have any intent or understanding behind what you say. That’s basically what LLMs are.
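
    The note-taking scheme in this story can be sketched in a few lines (the transcripts and phrases are invented; the point is that nothing in the code needs to know what any phrase means):

    ```python
    # Tally which reply most often follows which utterance in overheard
    # conversations, then always parrot the most common one.
    from collections import Counter, defaultdict

    transcripts = [
        ("你好", "你好"),                 # "hello" -> "hello"
        ("你吃饭了吗？", "吃了，你呢？"),   # "have you eaten?" -> "yes, you?"
        ("你吃饭了吗？", "还没有。"),       # "have you eaten?" -> "not yet."
        ("你吃饭了吗？", "吃了，你呢？"),
    ]

    notes = defaultdict(Counter)
    for heard, reply in transcripts:
        notes[heard][reply] += 1

    def respond(heard):
        if heard in notes:
            # the reply seen most often after this exact utterance
            return notes[heard].most_common(1)[0][0]
        return "嗯。"  # noncommittal filler when nothing matches

    print(respond("你吃饭了吗？"))  # prints: 吃了，你呢？
    ```

    Whole-phrase lookup like this falls apart far faster than a real LLM does, but its relationship to meaning is the same: there is none.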

  • LainTrain@lemmy.dbzer0.com
    5 months ago

    That analogy is hard to come up with, because the question of whether it even comprehends meaning first requires answering the unanswerable question of what meaning actually is, and whether or not humans are also just spicy pattern predictors / autocompletes. Predicting patterns is pretty much the whole point of evolving intelligence; being able to connect cause and effect and anticipate the future just helps with not starving. The line is far blurrier than most are willing to admit, and it ultimately hinges on our experience of sapience rather than on being able to strictly define knowledge and meaning.

    Instead it’s far better to say that ML models are not sentient; they are like a very big brain that’s switched off, but we can access it by stimulating it with a prompt.

    • Hucklebee@lemmy.worldOP
      5 months ago

      Interesting thoughts! Now that I think about this, we as humans have a huge advantage by having not only language, but also sight, smell, hearing and taste. An LLM basically only has “language.” We might not realize how much meaning we create through those other senses.

      • CodeInvasion@sh.itjust.works
        5 months ago

        To add to this insight, there are many recent publications showing dramatic improvements from adding another modality, like vision, to language models.

        While this is conjecture on my part, loosely supported by existing research, I personally believe that multimodality is the secret to understanding human intelligence.

  • AbouBenAdhem@lemmy.world
    5 months ago

    Compression algorithms can reduce most written text to about 20–25% of its original size—implying that that’s the amount of actual unique information it contains, while the rest is filler.

    Empirical studies have found that chimps and human infants, when looking at test patterns, will ignore patterns that are too predictable or too unpredictable—with the sweet spot for maximizing attention being patterns that are about 80% predictable.

    AI programmers have found that generating new text by predicting the most likely continuation of the given input results in text that sounds boring and robotic. Through trial and error, they found that, instead of choosing the most likely result, choosing one with around an 80% likelihood threshold produces results judged most interesting and human-like.

    The point being: AI has stumbled on a method of mimicking meaning by imitating the ratio of novelty to predictability that characterizes real human thought. But it doesn’t follow that the source of that novelty is anything that actually resembles human cognition.
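
    A minimal sketch of that “don’t always pick the most likely word” trick (one common version is called top-p or nucleus sampling; the toy probabilities below are invented, and reading the comment’s ~80% figure as a cumulative-probability cutoff is an assumption):

    ```python
    # Keep only the most likely next words until they cover ~80% of the
    # probability, then pick randomly among just those.
    import random

    def top_p_sample(next_word_probs, p=0.8):
        ranked = sorted(next_word_probs.items(), key=lambda kv: kv[1], reverse=True)
        kept, total = [], 0.0
        for word, prob in ranked:
            kept.append((word, prob))
            total += prob
            if total >= p:  # stop once ~p of the probability mass is covered
                break
        words, probs = zip(*kept)
        return random.choices(words, weights=probs)[0]

    # toy distribution over possible next words after "The weather today is"
    probs = {"sunny": 0.45, "cloudy": 0.25, "rainy": 0.15, "nice": 0.08,
             "purple": 0.05, "spaghetti": 0.02}
    print(top_p_sample(probs))  # usually "sunny" or "cloudy", never "spaghetti"
    ```

    Always taking the top word tends to loop and sound robotic; sampling from the trimmed distribution adds the small dose of novelty that reads as human, without any meaning entering the picture.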