LOOK MAA I AM ON FRONT PAGE

  • El Barto@lemmy.world · 6 days ago

    LLMs deal with tokens. Essentially, they predict the next entry in a series of bytes.
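    A minimal sketch of what that prediction loop looks like. The vocabulary, the bigram table, and every probability below are invented for illustration; a real LLM computes the next-token distribution with a neural network over the whole preceding context, not a lookup table.

    ```python
    # Toy next-token prediction: sample one token at a time from a
    # probability distribution conditioned on the previous token.
    import random

    # Hypothetical six-token vocabulary.
    vocab = ["the", "cat", "sat", "on", "mat", "."]

    # Made-up "model": for each token, a distribution over the next token.
    bigram_probs = {
        "the": [0.0, 0.5, 0.0, 0.0, 0.5, 0.0],
        "cat": [0.0, 0.0, 0.9, 0.0, 0.0, 0.1],
        "sat": [0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
        "on":  [0.9, 0.0, 0.0, 0.0, 0.1, 0.0],
        "mat": [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
        ".":   [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    }

    def predict_next(token: str) -> str:
        """Sample the next token from the model's distribution."""
        return random.choices(vocab, weights=bigram_probs[token])[0]

    # Generate a short sequence, one predicted token at a time.
    tokens = ["the"]
    for _ in range(6):
        tokens.append(predict_next(tokens[-1]))
    print(" ".join(tokens))
    ```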

    Humans do much, much, much, much, much, much, much more than that.

      • stickly@lemmy.world · 6 days ago

        You are either vastly overestimating the Language part of an LLM or simplifying human physiology back to the Greeks’ Four Humours theory.

        • Zexks@lemmy.world · 10 hours ago

          No. I’m not. You’re nothing more than a protein-based machine on a slow burn. You don’t even have control over your own decisions. This is a proven fact. You’re just an ad hoc justification machine.

          • stickly@lemmy.world · 9 hours ago

            How many trillions of neuron firings and chemical reactions are taking place for my machine to produce an output? Where are these taking place and how do these regions interact? What are the rules for storing and reshaping memory in response to stimulus? How many bytes of information would it take to describe and simulate all of these systems together?

            The human brain alone has the capacity for about 2.5 PB of data. Our sensory systems feed data at a rate of about 10⁹ bits/s. The entire English language, compressed, is about 30 MB. I can download and run an LLM with just a few GB. Even the largest context windows are still well under 1 GB of data.
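            To make the scale gap concrete, a quick back-of-envelope calculation using the figures above (the 4 GB model size is an assumed stand-in for “a few GB”):

            ```python
            # Rough scale comparison using the estimates quoted above.
            PB = 1e15   # bytes per petabyte
            GB = 1e9    # bytes per gigabyte

            brain_capacity   = 2.5 * PB   # estimated brain storage
            sensory_rate_bps = 1e9        # sensory input, bits per second
            llm_weights      = 4 * GB     # assumed size of a small downloadable LLM

            # The brain's estimated capacity dwarfs a downloadable model's weights.
            print(f"brain capacity / LLM weights: {brain_capacity / llm_weights:,.0f}x")

            # One day of sensory input alone exceeds the model many times over.
            daily_input_bytes = sensory_rate_bps * 86_400 / 8
            print(f"sensory input per day: {daily_input_bytes / GB:,.0f} GB")
            ```

            On these numbers the brain’s estimated capacity is about 625,000 times the model’s weights, and a single day of sensory input (roughly 10,800 GB) exceeds the whole weight file by three orders of magnitude.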

            Just because two things both find and reproduce patterns does not mean they are equivalent. Saying language and biological organisms both use “bytes” is just about as useful as saying the entire universe is “bytes”; it doesn’t really mean anything.