

How many others are like this??
Far too many: more than zero.
It’s easier to up-sell and cross-sell if you’re talking to an AI.
I work in an environment where we’re dealing with high volumes of data, but not like a few meg each for millions of users. More like a few hundred TB fed into multiple pipelines for different kinds of analysis and reduction.
There’s a shit-ton of prior art for how to scale up relatively simple web apps to support mass adoption. But there’s next to nothing about how to do what we do, because hardly anyone does. So look ma, no training set!
If it walks and quacks like a speculative bubble…
I’m working in an organization that has been exploring LLMs for quite a while now, and at least on the surface, it looks like we might have some use cases where AI could prove useful. But so far, in terms of concrete results, we’ve gotten bupkis.
And most firms I’ve encountered don’t even have potential uses, they’re just doing buzzword engineering. I’d say it’s more like the “put blockchain into everything” fad than like outsourcing, which was a bad idea for entirely different reasons.
I’m not saying AI will never have uses. But as it’s currently implemented, I’ve seen no use of it that makes a compelling business case.
Now your smart fridge can propose unpalatable recipes. Woo fucking hoo.
Also Epstein got a lot of cover from non-criminal association with some rich and powerful people. Not everyone who rode on his plane was a nonce.
On the other hand, Trump was very closely associated with Epstein for an extended period. That’s not the same as someone glitzing up Epstein’s guest list in support of a charity fundraiser.
Leak the subscribers’ details.
They are.
Their input sides are based on crawling, just as search is.
the numbers Trump posted are questionable at best
I’m less diplomatic: the numbers that Trump posted are flagrant bullshit.
Very possible that we’ve got an AI that’s built up a backlog of Harvard Business Studies and CalTech economics models to reach the ideal hypothetical tariff regime.
Possible but vastly improbable. And since when has CalTech been into econometric modeling? Last time I checked, they only did engineering and real science.
what’s insane to me is that the math adds up
Too bad it’s based on wrong assumptions. It’s not the arithmetic that’s the issue, it’s the model.
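For the record, the methodology behind those numbers, as widely reported, boils down to a one-liner: take the bilateral trade deficit as a fraction of imports, halve it, and floor it at 10%. Here's a sketch of that arithmetic; the trade figures below are purely illustrative, not real data.

```python
# Sketch of the widely reported "reciprocal tariff" methodology:
# tariff_i = max(10%, (US trade deficit with country i / US imports from i) / 2)
def reciprocal_tariff(us_imports: float, us_exports: float) -> float:
    """Tariff rate implied by the published formula. Inputs are bilateral totals."""
    deficit = us_imports - us_exports
    return max(0.10, (deficit / us_imports) / 2)

# Illustrative numbers only (hypothetical country): $100B imports, $40B exports.
print(reciprocal_tariff(us_imports=100.0, us_exports=40.0))  # 0.3
```

Which is exactly the point: the arithmetic checks out, but nothing in it has anything to do with tariff levels, non-tariff barriers, or comparative advantage.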
some economics undergrad could come up with the same thing
And if they did it on a test, they’d flunk it.
Understanding the underlying methodology shows how it completely lacks nuance or understanding of how the world really works.
Yeah, it fails to understand the rationale for comparative advantage (there’s a reason Ecuador exports more bananas than Norway does), and it also fails to consider the balance-of-payments effect of things like foreign direct investment (which looks zero-sum when it first takes place but means the profits are outflows from that point on, unless the foreign investors choose to reinvest them).
Also I don’t think the idiots who came up with that table know the difference between a current account balance and balance of trade.
And that’s not lettuce, it’s horseshit.
Probably one of Musk’s little goons was given the task, and they immediately went to ChatGPT.
NIH staff could encrypt that data and then ask third-party volunteer orgs to archive it.
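The workflow would look something like this. The sketch below uses a one-time pad (random key as long as the data, XORed in) purely for illustration and to stay self-contained; in practice you'd use a vetted tool like GPG or the Python `cryptography` package. All names here are hypothetical.

```python
# Sketch: encrypt a dataset in-house before handing it to volunteer archivists,
# so the archivists ever hold only ciphertext. One-time pad for illustration;
# use a vetted crypto library for anything real.
import secrets

def encrypt(data: bytes) -> tuple[bytes, bytes]:
    """Return (key, ciphertext). Key stays with NIH staff; ciphertext gets archived."""
    key = secrets.token_bytes(len(data))
    ciphertext = bytes(d ^ k for d, k in zip(data, key))
    return key, ciphertext

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, blob = encrypt(b"sensitive dataset bytes")      # blob is safe to distribute
assert decrypt(key, blob) == b"sensitive dataset bytes"
```

The point being that the third parties never need to be trusted with the plaintext, only with storage.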
Yeah, one of my family members is a bricklayer and he can work out a bill of materials in his head based on the dimensions in an architectural plan: given these dimensions and this thickness of mortar joint, I’ll need this many bricks, this many bags of mortar, this many bags of sand, this many hours of labor, etc. It’s just addition and multiplication, but his colleagues regard him as a freak. And when he first started doing it, if you’d ask him to break down his reasoning, he’d find that difficult.
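The brick-count part of that mental arithmetic can be sketched like this; the brick size and joint thickness below are hypothetical standard values, not anything my relative actually uses.

```python
import math

# Sketch of a bricks-from-dimensions estimate for a single-skin wall.
# Brick and joint dimensions are illustrative assumptions.
def bricks_needed(wall_len_m: float, wall_h_m: float,
                  brick_len_m: float = 0.215, brick_h_m: float = 0.065,
                  joint_m: float = 0.010) -> int:
    """Wall face area divided by the footprint of one brick plus its mortar joint."""
    unit_area = (brick_len_m + joint_m) * (brick_h_m + joint_m)
    return math.ceil((wall_len_m * wall_h_m) / unit_area)

# e.g. a 5 m x 2 m wall:
print(bricks_needed(5.0, 2.0))  # 593
```

It really is just multiplication and division, which is what makes it striking that colleagues treat it as a superpower.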
Memory can improve with training, and it’s useful in a large number of contexts. My major beef with rote memorization in schools is that it’s usually made to be excruciatingly boring. I’d say that’s the bigger problem.
Ever since learning about aphantasia, I’ve wondered whether the inability to store values visually has something to do with it.
Here’s some anecdotal evidence. Until I was 12 or 13, I could do absurdly complex arithmetical calculations in my head. My memory of it was of visualizing intermediate calculations as if they were on a screen in my head. I’d close my eyes to minimize distracting external stimuli. I’d get pocket money because my dad would get his friends to bet on whether I could correctly multiply two 7-digit phone numbers, and when I won, which I always did, he’d give the money to me. He had an old-school electromechanical calculator he’d use to check the results.
Neither of my parents and none of my many siblings had this ability.
I was able to use a similar visualization technique to memorize long passages of music and text. That stayed with me post-puberty, though again to a lesser extent. I’ve also been able to learn languages more quickly than most.
Once puberty kicked in, my ability to visualize declined significantly, though to compensate, I learned some mental arithmetic tricks that I still use now. I was able to get an MS in mathematics without much effort, since that relied on higher-level reasoning and not all that much on powerful memory or visualization. I didn’t pursue a Ph.D. due to lack of money, but I think I could have gotten one (though I despise academic politics).
So I think your comment about aphantasia is at least directionally correct as applied to people. But there’s little reason to assume LLMs would do things the same way a human mind does, though both might operate under some similar information-theoretic constraints that would cause convergent evolution.
I love to wish it on my worst enemies.
Yeah, yeah, omelettes…eggs… heard it all before.