
  • Bruh, I get how you feel, but your complaints are with capitalism, not with algorithms that are wildly better than their predecessors at fuzzy pattern matching.

    Here is an example of how AI has already revolutionized science through one targeted project:

    https://m.youtube.com/watch?v=P_fHJIYENdI

    This work won the 2024 Nobel Prize in Chemistry.

    And my best friend did his PhD in protein crystallography and is now at MIT doing a protein structural analysis postdoc, and the new AI-based protein structure predictions completely changed the direction of his lab’s research, basically overnight.

    Because yes, AI algorithms really can solve a whole new class of problems. It’s exactly what this old pre-LLM xkcd is talking about: https://xkcd.com/1425/ and while the comic asks for confirmation of a ‘bird’, identifying photos of, say, cancer is the exact same problem from an algorithmic standpoint, and so are a huge number of other fuzzy pattern matching problems (rough sketch below).
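
    To make that concrete, here’s a minimal sketch in PyTorch (my choice of library, nothing from the comic): the exact same classifier architecture handles “is this a bird?” and “is this a tumor?”, and only the training labels and images differ.

    ```python
    import torch
    import torch.nn as nn

    class TinyClassifier(nn.Module):
        """Same fuzzy pattern matcher whether the labels are
        {bird, not-bird} or {malignant, benign}."""
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # collapse to one 32-dim feature vector
            )
            self.head = nn.Linear(32, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(x).flatten(1))

    # Identical code path for bird photos or pathology slides; only the dataset changes.
    model = TinyClassifier()
    batch = torch.randn(4, 3, 224, 224)  # stand-in for a batch of real images
    print(model(batch).shape)  # torch.Size([4, 2])
    ```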

    Yeah, there are a lot of dumb tech bros overhyping AI, and a lot of giant corporations that care about nothing but using it to get personally richer, but you’re going to be misinformed in the other direction about its genuine usefulness if you read nothing but AI doomer blogs from people who don’t actually bother trying to use or understand the technology.


  • The AI technofascists building these systems have explicitly said they’ve hit a wall. They’re having to invest in their own power plants just to run these models. They have scores of racks of GPUs, so they’re dependent upon the silicon market. AI isn’t becoming “ever more capable,” it’s merely pushing the limits of what they have left.

    While I agree that this paper sounds like a freshman thesis, I think you’re betraying your own lack of knowledge here.

    Because no, they haven’t said they’ve hit a wall. And while there are reasons to be skeptical of the brute-force scaling approach a lot of companies are taking, those companies are doing it because they have massive amounts of capital, and scaling is an easy way to spend capital to improve a model’s results while your researchers figure out how to build better models, leaving you in a stronger market position when the next breakthrough or advancement happens.

    The reasoning models of today, like o1 and Claude 3.7, are substantially more capable than the faster models that predate them, and while you can argue that the resource/speed trade-off isn’t worth it, they’re also the very first generation of models that try to integrate LLMs into a more structured logical reasoning framework. Roughly, the idea is to spend extra inference-time compute on intermediate steps before answering, as in the sketch below.
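
    A toy sketch of that idea, with a hypothetical fake_llm() stand-in rather than any real API (fast_answer and reasoning_answer are my names, not anyone’s product), since the point is just the shape of the loop:

    ```python
    def fake_llm(prompt: str) -> str:
        """Hypothetical stand-in for a real model call; returns a canned string."""
        return f"[model output for: {prompt[:40]}...]"

    def fast_answer(question: str) -> str:
        # Pre-reasoning style: one pass, cheap and quick.
        return fake_llm(question)

    def reasoning_answer(question: str, steps: int = 3) -> str:
        # Reasoning style: draft intermediate steps first (slower, more tokens),
        # then answer conditioned on that accumulated scratchpad.
        scratchpad = ""
        for i in range(steps):
            scratchpad += fake_llm(f"Step {i + 1} toward: {question}\n{scratchpad}") + "\n"
        return fake_llm(f"Given these steps:\n{scratchpad}Final answer to: {question}")

    print(fast_answer("Is this protein fold plausible?"))
    print(reasoning_answer("Is this protein fold plausible?"))
    ```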

    This is on top of broader applications of AI that are rapidly becoming more capable. The fuzzy pattern matching techniques underlying LLMs have already revolutionized fields like protein structural analysis, all as the result of a single targeted DeepMind project (AlphaFold).

    The techniques behind AI let computers solve whole new classes of problems that weren’t tractable before; dismissing that is just putting your head in the sand.

    And yes, companies are still dependent on silicon and energy, which is why they’re vertically integrating and starting to produce those inputs themselves. That’s not a sign that they see AI as a waste of time.