

I skimmed the paper, and it seems pretty cool. I’m not sure I quite follow the “diffusion model-based architecture” it mentions, but it sounds interesting.
I’m not talking about the specifics of the architecture.
To the layman, AI refers to a range of general-purpose language models that are trained on “public” data and possibly enriched with domain-specific datasets.
There’s a significant material difference between using that kind of probabilistic language completion and a model that directly predicts the results of complex processes (like what’s likely being discussed in the article).
It’s not specific to the article in question, but it is really important for people to not conflate these approaches.
There really needs to be a rhetorical distinction between regular machine learning and something like an LLM.
I think people read this (or just the headline) and assume it’s just asking Grok “what interactions will my new drug flavocane have?”, when these are likely large models built on the mountains of data we have from existing drug trials.
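The distinction above can be sketched in toy form. Everything here is invented for illustration (the token probabilities, the coefficients, the function names): the point is only that one kind of model samples likely text, while the other maps structured inputs to a predicted outcome.

```python
import random

# "LLM-style": sample the next token from a learned probability distribution.
# These probabilities are made up for the sketch.
next_token_probs = {"the": 0.5, "a": 0.3, "an": 0.2}
token = random.choices(
    list(next_token_probs), weights=next_token_probs.values()
)[0]

# "Predictive-model-style": e.g. a regression trained on trial data,
# turning structured features into a numeric interaction score.
def toy_interaction_model(features):
    weights = [0.4, 0.1, -0.2]  # stand-ins for learned coefficients
    return sum(w * x for w, x in zip(weights, features))

score = toy_interaction_model([1.0, 2.0, 0.5])  # a number, not text
```

The first produces plausible language; the second produces a prediction you can validate against measured outcomes, which is why conflating the two is misleading.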
That’s true, but all of their problems with Docker come down to it being Linux.
Docker is a layer that runs on top of Linux’s KVM.
My understanding is that this is only true for Docker Desktop, which there’s not really any reason to use on a server.
Sure, since containers use the host’s kernel, Linux containers do need to either have Linux as the host or run a VM (as Docker Desktop does by default), but that’s not particularly unique to Docker.
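The kernel-sharing point above is easy to check yourself. This is a minimal sketch assuming a Linux host; the `docker` call is illustrative and skipped gracefully if Docker isn’t installed.

```python
import subprocess

# Kernel release as seen by the host.
host_kernel = subprocess.run(
    ["uname", "-r"], capture_output=True, text=True
).stdout.strip()

# Kernel release as seen inside a container. On a native Linux host these
# match, because containers share the host kernel; under Docker Desktop the
# container reports the VM's kernel instead.
try:
    container_kernel = subprocess.run(
        ["docker", "run", "--rm", "alpine", "uname", "-r"],
        capture_output=True, text=True,
    ).stdout.strip()
except FileNotFoundError:
    container_kernel = None  # Docker not available on this machine
```

Running both commands on a bare Linux box prints the same release string, which is exactly why non-Linux hosts need a VM in the middle.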
Is it really vendor lock-in if you can fork it at your whim?
Try rereading the whole tweet; it’s not very long. It’s specifically saying that they plan to “correct” the dataset using Grok, then retrain on that dataset.
It would be way too expensive to go through it by hand.
?
Reproducibility of what we call LLMs as opposed to what we call other forms of machine learning?
Or are you responding to my assertion that these are different enough to warrant different language with a counterexample of one way in which they are similar?