I see a lot of misunderstandings in the comments 🫤
This is a pretty important finding for researchers, and it’s not obvious by any means. This finding is not showing a problem with LLMs’ abilities in general. The issue they discovered is specifically for so-called “reasoning models” that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.
Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that’s a flaw that needs to be corrected before models can actually reason.
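For concreteness, here’s a rough sketch of the difference (illustrative Python, not anything from the paper; the function names and weighting are made up):

```python
# Illustrative sketch of outcome-only reward in RL fine-tuning (not from the paper).
# The chain of thought gets no credit at all; only the final answer is scored.

def outcome_only_reward(chain_of_thought: str, final_answer: str, reference: str) -> float:
    _ = chain_of_thought  # intermediate reasoning is ignored entirely
    # reward depends only on whether the final answer matches the reference
    return 1.0 if final_answer.strip() == reference.strip() else 0.0

# A hypothetical process-supervised alternative would also score each step:
def process_reward(steps: list[str], final_answer: str, reference: str, step_scorer) -> float:
    step_score = sum(step_scorer(s) for s in steps) / max(len(steps), 1)
    answer_score = 1.0 if final_answer.strip() == reference.strip() else 0.0
    return 0.5 * step_score + 0.5 * answer_score  # made-up weighting, purely illustrative
```

If only the first kind of signal ever reaches the training loop, nothing directly rewards the intermediate “thinking” being sound.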
Yeah these comments have the three hallmarks of Lemmy:
- “AI is just autocomplete” mantras.
- Apple is always synonymous with bad and dumb.
- Rare pockets of really thoughtful comments.
Thanks for being at least the last one.
Some AI researchers found it obvious as well, in the sense that they’d suspected it and had some indications. But it’s good to see more data affirming that assessment.
So, what you’re saying here is that the A in AI actually stands for artificial, and it’s not really intelligent and reasoning.
Huh.
NOOOOOOOOO
SHIIIIIIIIIITT
SHEEERRRLOOOOOOCK
Except for Siri, right? Lol
Apple Intelligence
What’s hilarious/sad is the response to this article over on reddit’s “singularity” sub, in which all the top comments are people who’ve obviously never gotten all the way through a research paper in their lives, all trashing Apple and claiming its researchers don’t understand AI or “reasoning”. It’s a weird cult.
Fucking obviously. Until Data’s positronic brain becomes reality, AI is not actual intelligence.
AI is not A I. I should make that a tshirt.
It’s an expensive carbon spewing parrot.
It’s a very resource intensive autocomplete
I think it’s important to note (I’m not an LLM, I know that phrase triggers you to assume I am) that they haven’t proven this is an inherent architectural issue, which I think would be the next step for that assertion.
Do we know that they don’t reason and are incapable of it, or do we just know that for certain problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don’t? That’s the big question that needs answering. It’s still possible that we just haven’t properly incentivized reasoning over memorization during training.
If someone can objectively answer “no” to that, the bubble collapses.
No shit. This isn’t new.
Most humans don’t reason. They just parrot shit too. The design is very human.
I hate this analogy. As a throwaway whimsical quip it’d be fine, but it’s specious enough that I keep seeing it used earnestly by people who think that LLMs are in any way sentient or conscious, so it’s lowered my tolerance for it as a topic even if you did intend it flippantly.
That’s why CEOs love them. When your job is 90% spewing BS, a machine that does that is impressive.
LLMs deal with tokens: essentially, predicting the next token in a sequence.
Humans do much, much, much, much, much, much, much more than that.
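To make “predicting tokens” concrete, here’s a toy sketch (my own illustration; the hard-coded probabilities stand in for what a real network computes):

```python
import random

# Toy illustration of next-token prediction (not how any real model is implemented):
# given a context, sample the next token from a probability distribution.
# A real LLM computes that distribution with a neural network over a vocabulary
# of tens of thousands of tokens; here it's just a lookup table.
toy_model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
}

def next_token(context):
    probs = toy_model.get(context, {"<unk>": 1.0})
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(next_token(("the", "cat")))  # e.g. "sat"
```

Everything an LLM “says” comes from repeating that sampling step one token at a time.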
Yeah, I’ve always said the flaw in Turing’s Imitation Game concept is that if an AI was indistinguishable from a human, it wouldn’t prove it’s intelligent. Because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs, which eventually killed him, simply because he was gay.
I think that person had to choose between the drugs and hardcore 1950s English prison, where being a bit odd was enough to guarantee an incredibly difficult time, as they say in England. I would’ve chosen the drugs as well, hoping they would fix me. Too bad that without testosterone you’re going to be suicidal and depressed; I’d rather keep my hair than be horny all the time.
I’ve heard something along the lines of, “it’s not when computers can pass the Turing Test, it’s when they start failing it on purpose that’s the real problem.”
Yeah, we’re so stupid we’ve figured out advanced maths and physics, and built incredible skyscrapers and the LHC. We may as individuals be more or less intelligent, but humans as a whole are incredibly intelligent.
No way!
Statistical language models don’t reason?
But OpenAI, robots taking over!
Fair, but the same is true of me. I don’t actually “reason”; I just have a set of algorithms memorized by which I propose a pattern that seems like it might match the situation, then a different pattern by which I break the situation down into smaller components and then apply patterns to those components. I keep the process up for a while. If I find a “nasty logic error” pattern match at some point in the process, I “know” I’ve found a “flaw in the argument” or “bug in the design”.
But there’s no from-first-principles method by which I developed all these patterns; it’s just things that have survived the test of time when other patterns have failed me.
I don’t think people are underestimating the power of LLMs to think; I just think people are overestimating the power of humans to do anything other than language prediction and sensory pattern prediction.
This whole era of AI has certainly pushed things to the brink of existential-crisis territory. I think some are even frightened to entertain the prospect that we may not be all that much better than meat machines which, on a basic level, do pattern matching drawing from the sum total of individual life experience (aka the dataset).
Higher reasoning is taught to humans. We have the capability; that’s why we spend the first quarter of our lives in education. Not all of us end up able, though.
I’m sure it would make waves if researchers did studies on whether dumber humans are any different from AI.
You’re either an LLM, or you don’t know how your brain works.
Thank you, Captain Obvious! Only those who think LLMs are like “little people in the computer” didn’t know this already.
Yeah, well, there are a ton of people literally falling into psychosis, led on by LLMs. So unfortunately it’s not that many people who already knew it.
Dude, they made ChatGPT a little more boot-licky and now many people are convinced they’re literal messiahs. All it took for them was a chatbot and a few hours of talk.
You assume humans do the opposite? We literally institutionalize humans who don’t follow set patterns.
Maybe you failed all your high school classes, but that ain’t got none to do with me.
Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.
It’s not that institutionalized people don’t follow “set” patterns. That’s why you’re getting downvotes.
Some of those humans can operate with the same brain rules all right. They may even be more efficient at it than you and I are. The higher-level functions are a different thing.
That’s absolutely what it is. It’s a pattern on here. Any acknowledgment of humans being animals or less than superior gets hit with pushback.
Humans are animals. But an LLM is not an animal and has no reasoning abilities.
It’s built by animals, and it reflects them. That’s impressive on its own. Doesn’t need to be exaggerated.
I appreciate your telling the truth. No downvotes from me. See you at the loony bin, amigo.
We also reward people who can memorize and regurgitate even if they don’t understand what they are doing.
Some of them, sometimes. But some are adulated and free and contribute vast swathes to our culture and understanding.
No shit
Of course; that is obvious to anyone with basic knowledge of neural networks, no?
I still remember Geoff Hinton’s criticisms of backpropagation.
IMO it is still remarkable what NNs managed to achieve: some form of emergent intelligence.
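For anyone who hasn’t seen it spelled out, here’s backpropagation in miniature on a single linear neuron (a toy sketch of my own, with made-up numbers, not tied to Hinton’s specific critique):

```python
import random

# Minimal sketch: train y = w*x + b with squared error, updating w and b
# by the chain rule (which is all backpropagation is, applied layer by layer).
w, b = random.random(), 0.0
lr = 0.05
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # exactly fits y = 2x + 1

for epoch in range(200):
    for x, target in data:
        y = w * x + b            # forward pass
        error = y - target       # dLoss/dy for loss = 0.5 * error**2
        w -= lr * error * x      # backward pass: dLoss/dw = error * x
        b -= lr * error          # dLoss/db = error

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

Stack millions of those tiny updates across billions of weights and you get the emergent behaviour people are arguing about in this thread.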