• minoscopede@lemmy.world · 20 points · 2 hours ago (edited)

    I see a lot of misunderstandings in the comments 🫤

    This is a pretty important finding for researchers, and it’s not obvious by any means. This finding is not showing a problem with LLMs’ abilities in general. The issue they discovered is specifically for so-called “reasoning models” that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

    Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that’s a flaw that needs to be corrected before models can actually reason.
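
    To illustrate the point about rewards, here is a minimal, hypothetical sketch (not from the paper; the function names and scoring are made up) contrasting outcome-only reward with a process-style reward that also scores the intermediate steps:

    ```python
    # Hypothetical sketch: outcome-only reward vs. process reward.
    # Most reasoning models today are trained with something like the former.

    def outcome_reward(final_answer: str, correct_answer: str) -> float:
        # Only the end result matters; a flawed chain of thought that
        # stumbles onto the right answer is rewarded just as much.
        return 1.0 if final_answer == correct_answer else 0.0

    def process_reward(steps: list[str], step_is_valid) -> float:
        # Each intermediate step is scored, so the model is pushed toward
        # reasoning correctly rather than just emitting the right final answer.
        if not steps:
            return 0.0
        return sum(1.0 for s in steps if step_is_valid(s)) / len(steps)
    ```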

    • theherk@lemmy.world · 7 points · 2 hours ago

      Yeah these comments have the three hallmarks of Lemmy:

      • AI is just autocomplete mantras.
      • Apple is always synonymous with bad and dumb.
      • Rare pockets of really thoughtful comments.

      Thanks for at least being the last of those.

    • Zacryon@feddit.org · 3 points · 2 hours ago

      Some AI researchers found it obvious as well, in the sense that they had already suspected it and seen some indications. But it’s good to see more data affirming this assessment.

  • Xatolos@reddthat.com · 4 points · 2 hours ago

    So, what you’re saying here is that the A in AI actually stands for artificial, and it’s not really intelligent and reasoning.

    Huh.

  • skisnow@lemmy.ca · 18 points · 5 hours ago

    What’s hilarious/sad is the response to this article over on reddit’s “singularity” sub, in which all the top comments are people who’ve obviously never got all the way through a research paper in their lives all trashing Apple and claiming their researchers don’t understand AI or “reasoning”. It’s a weird cult.

  • RampantParanoia2365@lemmy.world · 19 points · 3 hours ago (edited)

    Fucking obviously. Until Data’s positronic brain becomes reality, AI is not actual intelligence.

    AI is not A I. I should make that a T-shirt.

  • Communist@lemmy.frozeninferno.xyz · 8 points · 5 hours ago (edited)

    I think it’s important to note (I’m not an LLM, I know that phrase triggers you to assume I am) that they haven’t proven this is an inherent architectural issue, which I think would be the next step for the assertion.

    Do we know that they don’t reason and are incapable of it, or do we just know that for certain problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don’t? That’s the big question that needs answering. It’s still possible that we just haven’t properly incentivized reasoning over memorization during training.

    If someone can objectively answer “no” to that, the bubble collapses.

    • skisnow@lemmy.ca · 8 points · 5 hours ago

      I hate this analogy. As a throwaway whimsical quip it’d be fine, but it’s specious enough that I keep seeing it used earnestly by people who think that LLMs are in any way sentient or conscious, so it’s lowered my tolerance for it as a topic even if you did intend it flippantly.

    • joel_feila@lemmy.world · 6 points · 6 hours ago

      That’s why CEOs love them. When your job is 90% spewing BS, a machine that does that is impressive.

    • El Barto@lemmy.world · 23 points · 10 hours ago

      LLMs deal with tokens. Essentially, predicting a series of bytes.

      Humans do much, much, much, much, much, much, much more than that.
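
      As a rough, made-up illustration of what “predicting tokens” looks like (a toy vocabulary and a random stand-in for the model, not how any real LLM is implemented):

      ```python
      # Toy next-token prediction loop over a tiny made-up vocabulary.
      import math
      import random

      vocab = ["the", "cat", "sat", "on", "mat", "."]

      def next_token_probs(context: list[str]) -> list[float]:
          # Stand-in for the model: a real LLM computes these probabilities
          # from learned weights; here they're random, just to show the loop.
          scores = [random.random() for _ in vocab]
          total = sum(math.exp(s) for s in scores)
          return [math.exp(s) / total for s in scores]

      context = ["the", "cat"]
      for _ in range(4):
          probs = next_token_probs(context)
          context.append(vocab[probs.index(max(probs))])  # greedily pick the likeliest token
      print(" ".join(context))
      ```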

    • SpaceCowboy@lemmy.ca · 5 points · 10 hours ago

      Yeah, I’ve always said the flaw in Turing’s Imitation Game concept is that if an AI were indistinguishable from a human, it wouldn’t prove it’s intelligent. Because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs that eventually killed him, simply because he was gay.

      • jnod4@lemmy.ca · 3 points · 7 hours ago

        I think that person had to choose between the drugs or hard-core prison in 1950s England, where being a bit odd was enough to guarantee an incredibly difficult time, as they say in England. I would’ve chosen the drugs as well, hoping they would fix me. Too bad that without testosterone you’re going to be suicidal and depressed. I’d rather choose to keep my hair than to be horny all the time.

      • crunchy@lemmy.dbzer0.com · 7 points · 9 hours ago

        I’ve heard something along the lines of, “it’s not when computers can pass the Turing Test, it’s when they start failing it on purpose that’s the real problem.”

      • Zenith@lemm.ee · 3 points · 8 hours ago

        Yeah, we’re so stupid that we’ve figured out advanced maths and physics, and built incredible skyscrapers and the LHC. We may as individuals be more or less intelligent, but humans as a whole are incredibly intelligent.

  • intensely_human@lemm.ee · 9 points · 10 hours ago

    Fair, but the same is true of me. I don’t actually “reason”; I just have a set of algorithms memorized by which I propose a pattern that seems like it might match the situation, then a different pattern by which I break the situation down into smaller components and then apply patterns to those components. I keep the process up for a while. If I find a “nasty logic error” pattern match at some point in the process, I “know” I’ve found a “flaw in the argument” or “bug in the design”.

    But there’s no from-first-principles method by which I developed all these patterns; it’s just things that have survived the test of time when other patterns have failed me.

    I don’t think people are underestimating the power of LLMs to think; I just think people are overestimating the power of humans to do anything other than language prediction and sensory pattern prediction.

    • conicalscientist@lemmy.world · 1 point · 3 hours ago

      This whole era of AI has certainly pushed things into existential-crisis territory. I think some are even frightened to entertain the prospect that we may not be all that much better than meat machines that, on a basic level, do pattern matching drawing from the sum total of individual life experience (aka the dataset).

      Higher reasoning is taught to humans. We have the capability; that’s why we spend the first quarter of our lives in education. Not all of us get there, though.

      I’m sure it would make waves if researchers did studies on whether dumber humans are any different from AI.

  • ZILtoid1991@lemmy.world · 17 points · 13 hours ago

    Thank you, Captain Obvious! Only those who think LLMs are like “little people in the computer” didn’t already know this.

    • TheFriar@lemm.ee · 5 points · 9 hours ago

      Yeah, well, there are a ton of people literally falling into psychosis, led there by LLMs. So unfortunately it’s not that many people who already knew it.

  • surph_ninja@lemmy.world · 27 points · 15 hours ago

    You assume humans do the opposite? We literally institutionalize humans who don’t follow set patterns.

      • surph_ninja@lemmy.world · 7 points · 12 hours ago

        Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.

        • El Barto@lemmy.world · 4 points · 9 hours ago

          It’s not that institutionalized people don’t follow “set” patterns. That’s why you’re getting downvotes.

          Some of those humans can operate with the same brain rules just fine. They may even be more efficient at it than you and I are. The higher-level functions are a different thing.

          • surph_ninja@lemmy.world · 2 points · 9 hours ago

            That’s absolutely what it is. It’s a pattern on here. Any acknowledgment of humans being animals or less than superior gets hit with pushback.

            • Auli@lemmy.ca · 8 points · 9 hours ago

              Humans are animals. But an LLM is not an animal and has no reasoning abilities.

              • surph_ninja@lemmy.world · 1 point · 8 hours ago (edited)

                It’s built by animals, and it reflects them. That’s impressive on its own. Doesn’t need to be exaggerated.

    • silasmariner@programming.dev · 2 points · 13 hours ago

      Some of them, sometimes. But some are adulated and free and contribute vast swathes to our culture and understanding.

    • Endmaker@ani.social · 2 points · 8 hours ago

      I still remember Geoff Hinton’s criticisms of backpropagation.

      IMO it is still remarkable what NNs managed to achieve: some form of emergent intelligence.