A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.

I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 0 Posts
  • 18 Comments
Joined 10 months ago
Cake day: June 25th, 2024


  • Wasn’t “error-free” one of the undecidable problems in maths / computer science? But I like how they also pay attention to semantics and didn’t choose a clickbaity title. Maybe I should read the paper, see how they did it and whether it’s more than an AI agent at the same intelligence level guessing whether it’s correct. I mean, surprisingly enough, the current AI models usually do a good job generating syntactically correct code one-shot. My issues with AI coding usually start once it gets a bit more complex. Then it often feels like poking at things and copy-pasting various stuff from StackOverflow without really knowing why it doesn’t handle the real-world data, or fails entirely.
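    To make that first point concrete, here’s a minimal sketch of my own (not from the paper): syntactic correctness is decidable, a parser settles it, but “error-free” is a semantic property, and by Rice’s theorem no general algorithm can decide a non-trivial semantic property for arbitrary programs.

    ```python
    import ast

    # Syntax is the decidable part -- a parser settles it:
    src = "def f(x):\n    return x / (x - x)\n"
    ast.parse(src)  # parses fine: syntactically correct

    # Semantics is the undecidable part: f() divides by zero for *every*
    # input, yet no general algorithm can decide properties like "never
    # raises" for arbitrary programs (Rice's theorem).
    ns = {}
    exec(src, ns)
    try:
        ns["f"](3)
    except ZeroDivisionError:
        print("syntactically fine, semantically broken")
    ```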


  • I’ve also had that. And I’m not even sure whether I want to hold it against them. For some reason it’s an industry-wide effort to muddy the waters and slap “open source” on their products. From the largest company that chose to have “Open” in their name but opposes transparency with every fibre of its body, to Meta, the current pioneer(?) of “open sourcing” LLMs, to the smaller underdogs who pride themselves on publishing their models that way… They’ve all homed in on the term.

    And lots of journalists and bloggers pick up on it, too. I personally think terms should be well-defined. And “open source” had a well-defined meaning. I get that it’s complicated with the transformative nature of AI, copyright… But I don’t think reproducibility is a question here at all. Of course we need that; it’s core to something being open. And I don’t even understand why the OSI claims it doesn’t exist… Didn’t we have the datasets available up until LLaMA 1, along with an extensive scientific paper that enabled people to reproduce the model? And LLMs aside, we sometimes have that with other kinds of machine learning…

    (And by the way, this is an old article, from the end of October last year.)




  • Exactly. This is directly opposed to why we do AI in the first place. We want something to drive the Uber without earning a wage. A cheap factory workforce. Generating images without paying some artist $250… If we wanted that, we already have humans available; that’s how the world has worked for quite some time now.

    I’d say us giving AI human rights and reversing 99.9% of what it’s intended for is less likely to happen than the robot apocalypse.




  • I feel psychologists aren’t really in the loop when people make decisions about AI or most of the newer tech. Sure, they ask the right questions. And all of this is one big, unanswered question. Plus how a modern society copes with loneliness, with perspectives skewed by social media… But does anyone really care? Isn’t all of this shaped by some tech people in Silicon Valley and a few other places, with the only question being how to attract investor money?

    And I think people really should avoid marrying commercial services. That doesn’t end well. If you want to marry an AI, make sure it is its own entity and not just a cloud service.


  • Yes. Plus the Turing machine has an infinite memory tape to write to and read from. Something that’s fine within mathematics, but we don’t have any infinite tapes in reality. That’s why we call it a mathematical model and imaginary… and it’s a useful model. But not a real machine. Whereas an abacus can actually be built. But an abacus, or a real-world “Turing machine” with a finite tape, doesn’t teach us a lot about the halting problem and the important theoretical concepts (see the sketch below). It wouldn’t be such a useful model without those imaginary definitions.

    (And I don’t really see how someone would confuse that. Knowing what models are, what we use them for, and what maths is, is kind of high-school level science education…)
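    To make the contrast concrete, here’s a little sketch of my own (the toy machines are made up): with a finite tape there are only finitely many configurations, so halting becomes decidable by watching for a repeated configuration. The infinite tape is exactly what breaks this trick.

    ```python
    def halts(step, config):
        """Decide halting for a machine with finitely many configurations.

        step(config) returns the next configuration, or None on halt.
        """
        seen = set()
        while config is not None:
            if config in seen:
                return False      # configuration repeated: loops forever
            seen.add(config)
            config = step(config)
        return True               # reached a halting state

    # Toy machines over a "tape" of 3 bits (8 possible configurations):
    print(halts(lambda c: None if c == 7 else c + 1, 0))  # True: counts up, halts
    print(halts(lambda c: (c + 1) % 4, 0))                # False: cycles forever

    # With an infinite tape, the set of configurations is unbounded, `seen`
    # can grow without limit, and this procedure no longer decides anything --
    # which is exactly where the halting problem lives.
    ```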


  • Sure. I think you’re right. I myself want an AI maid loading the dishwasher, doing the laundry and dusting the shelves. A robot vacuum is nice, but that covers only a tiny fraction of the tedious everyday chores. Plus an AI assistant on my computer, cleaning up the hard drive, sorting my gigabytes of photos…

    And I don’t think we’re there yet. Maybe it’s the right amount of billions of dollars to pump into the hype if we anticipate all of this happening. But a lame assistant that can answer questions and get the facts right 90% of the time, and whose attempts to ‘improve’ my emails are counterproductive a lot of the time, isn’t really that helpful to me.

    And with that, it’s just an overinflated bubble based on expectations, not on actual usefulness or yield at the current state of technology.

    Personally, I think it’s not going to happen soon. I think it’ll take another 5-10 years of scientific advancement until we tackle issues like the limited intelligence and the tendency to make up things which aren’t really true. And we kind of need that for proper applications. I’ve tried generative AI for writing and computer coding, but I still have to spend a lot of time fact-checking and rewriting its output. As is, I think AI is limited to some specific tasks.




  • It’s a long article. But I’m not sure about the claims. Will we get more efficient computers that work like a brain? I’d say that’s sci-fi. Will we get artificial general intelligence? Current LLMs don’t look like they’re able to fully achieve that. And how would AI continuously learn? That’s an entirely unsolved problem at the scale of LLMs. And if we ask whether computer science is a science… why compare it to engineering? I found it’s much more aligned with maths at university level…

    I’m not sure. I didn’t read the entire essay. It sounds to me like it isn’t really based on reality. But LLMs are certainly challenging our definition of intelligence.

    Edit: And are the history lessons in the text correct? Why do they say a Turing machine is an imaginary concept (which is correct), then say ENIAC became the first one, but then maybe not? Did we adopt binary computation because of reliability issues with vacuum tubes? This is the first time I’ve read that, and I highly doubt it. The entire text just looks like a fever dream to me.




  • Yeah, seeking support is notoriously difficult. Everyone working in IT knows this. I feel that with open source, it’s more the projects which aren’t in a classic Free Software domain that attract beggars. For example, the GitHub page of a Linux tool will have a completely different atmosphere than that of a fancy AI tool or an addon to some consumer device or service. I see a lot of spam and a demanding tone there. While with a lot of the more niche projects, people are patient and ask good questions, and in return the devs are nice. And people use the thumbs-up emoji instead of pinging everyone with a comment…

    I feel, though… if you’re part of an open source project which doesn’t welcome contributions and doesn’t want to discuss arbitrary user needs and wants, you should make that clear. I mean, Free Software is kind of the default in some domains. If you don’t want that as a developer, just add a paragraph of text somewhere prominent, detailing how questions and requests are or aren’t welcome. As a user, I can’t always tell whether discussing my questions is welcome and whether the software is supposed to cater to my needs, unless the project tells me somehow. That still doesn’t help with the beggars… But it will help people like me not to waste everyone’s time.