- cross-posted to:
- [email protected]
Advances in AI are making us reconsider what intelligence is and giving us clues to unlocking AI’s full potential.
No it isn’t. No they aren’t.
Couldn’t have said it better.
At least 1% of the money being poured into “AI research” nowadays seems to be spent on spewing these breathless puff pieces everywhere. The other 99% is spent on datacenter costs, probably. I am so excited for the day this bubble finally pops. Just imagine the fire sales on GPUs and rack space. It’ll be glorious.
Those datacenter GPUs won’t be for sale. They will be destroyed so corporations can write them off on their taxes. You will pay for them twice.
So many of those GPUs have been crippled for gaming purposes, and there is zero incentive for Nvidia to produce drivers that would let those cards do anything else. Alas, they will just end up in landfill.
It’s a long article, but I’m not sure about the claims. Will we get more efficient computers that work like a brain? I’d say that’s sci-fi. Will we get artificial general intelligence? Current LLMs don’t look like they’re able to fully achieve that. And how would AI learn continuously? That’s an entirely unsolved problem at the scale of LLMs. And if we ask whether computer science is a science… why compare it to engineering? I found it to be much more aligned with maths at university level…
I’m not sure. I didn’t read the entire essay. It sounds to me like it isn’t really based on reality. But LLMs are certainly challenging our definition of intelligence.
Edit: And are the history lessons in the text correct? Why do they say a Turing machine is an imaginary concept (which is correct), then say ENIAC became the first one, but then maybe not? Did we invent binary computation because of reliability issues with vacuum tubes? This is the first time I’ve read that, and I highly doubt it. The entire text just looks like a fever dream to me.
Why do they say a Turing machine is an imaginary concept (which is correct), then say ENIAC became the first one, but then maybe not?
Thanks for pointing out this hilarious section. A Turing machine is an “imaginary concept” just like any mathematical concept. An abacus is also an “imaginary concept”. But people can still build them (at least finite versions).
When they start talking about “imaginary concepts”, it’s pretty clear that the author has no understanding about the relationship between math, science, engineering, etc. That lack of understanding is a necessary prerequisite for writing this kind of article.
Yes. Plus, the Turing machine has an infinite memory tape to write to and read from. Something that is within the scope of mathematics, but we don’t have any infinite tapes in reality. That’s why we call it a mathematical model and imaginary… and it’s a useful model, but not a real machine. Whereas an abacus can actually be built. But an abacus or a real-world “Turing machine” with a finite tape doesn’t teach us a lot about the halting problem and the other important theoretical concepts. It wouldn’t be such a useful model without those imaginary definitions. (There’s a small simulator sketch below to illustrate the finite, real-world version.)
(And I don’t really see how someone would confuse that. Knowing what models are, what we use them for, and what maths is, is kind of high-school level science education…)
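Not from the article, just to illustrate that point about finite tapes: here’s a minimal Turing machine simulator sketch in Python (the names `run` and `FLIP_ONES` are made up for this example). The “infinite” tape is simulated by a dict that only ever holds the cells that were actually visited, and the loop needs a step cutoff, since a simulator can’t decide in general whether the machine will ever halt.

```python
from collections import defaultdict

# Transition table: (state, symbol) -> (new_symbol, head_move, new_state).
# head_move is +1 (right), -1 (left) or 0 (stay); "H" is the halting state.
FLIP_ONES = {
    ("scan", "1"): ("0", +1, "scan"),  # overwrite a 1 with 0, keep moving right
    ("scan", "_"): ("_", 0, "H"),      # hit a blank: halt
}

def run(rules, tape_input, start="scan", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine until it halts or max_steps runs out."""
    # The "infinite" tape is just a dict that grows as cells are touched.
    tape = defaultdict(lambda: blank, enumerate(tape_input))
    head, state, steps = 0, start, 0
    while state != "H" and steps < max_steps:
        symbol = tape[head]
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
        steps += 1
    cells = sorted(tape)  # only the finitely many visited cells exist in memory
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1)), state

print(run(FLIP_ONES, "1111"))  # ('0000_', 'H')
```

At any moment the tape in memory is finite; it just has no fixed upper bound, which is exactly the gap between the real machine and the mathematical model.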
Insufferable headline.
are compelling us to rethink our understanding of what intelligence truly is.
Oh, there was already a generally agreed-on understanding of it?
Dropped.