

Hey there, decentralized digital currency systems, you wanna… centralize?
You seem to spend a lot of energy questioning people’s intentions, inventing reasons to doubt that people’s motives toward you are genuine. Some do deserve to be questioned, no doubt. It just seems draining, and for what goal?
Do you aim to be the sole determiner of truth? To never be duped again? To sharpen your skills as an investigator?
How much more creative energy could you put into the world by taking people at their word in all but the highest-risk cases?
Which weighs more: the cost of taking people at their word, or the effort of interpreting the subtext of every interaction?
What would Altman gain from overstating the environmental impact of his own company?
What if power consumption is not so much limited by the software’s appetite as by the hardware’s capabilities?
…the Voyager 1 team would like a word.
Min reqs: 233 MHz processor, 64 MB RAM, 1.5 GB storage… it could probably run on your car key fob.
I believe it would be darker than mere loopholes. Corporations would probably both protect their own IP and steal each other’s IP through militant means. Like, Cyberpunk 2077 could become reality.
XOR cleartext once with a key and you get ciphertext. XOR the ciphertext with the same key and you get the original cleartext back. That self-inverting property is at the core of how the old DES cipher’s Feistel rounds work.
A bit of useful trivia: if you XOR any number with itself, you get all zeros. You can see this in practice when an assembly programmer XORs a register with itself to clear it out.
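A minimal Python sketch of both properties (the message and key here are just placeholders):

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data against a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"attack at dawn"   # placeholder cleartext
key = b"\x5f\x13\xa9"         # placeholder key

ciphertext = xor_bytes(message, key)
assert xor_bytes(ciphertext, key) == message  # the same key undoes the XOR

x = 0b10110101
assert x ^ x == 0  # any value XORed with itself is zero, like `xor eax, eax`
```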
Maybe you’re right. Maybe it’s Markov chains all the way down.
The only way I can think to test this would be to “poison” the training data with faulty arithmetic and see whether the model is just recalling precedent or actually implementing an algorithm.
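A minimal sketch of what that poisoning might look like, assuming a plain-text corpus of arithmetic problems (the function name and the fixed-offset scheme are hypothetical):

```python
import random

def poisoned_example(offset: int = 1) -> str:
    """One training line whose stated answer is deliberately wrong by `offset`.

    If a model trained on lines like these still adds correctly, it is
    computing with an algorithm rather than recalling precedent.
    """
    a, b = random.randint(0, 99), random.randint(0, 99)
    return f"{a} + {b} = {a + b + offset}"

for _ in range(3):
    print(poisoned_example())  # e.g. "41 + 7 = 49"
```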
This reminds me of learning a shortcut in math class while also knowing that the lesson didn’t cover that particular method. So I use the shortcut to get the answer on a multiple-choice question, but I use the method from the lesson when asked to show my work (e.g. Pascal’s Triangle vs. expanding the binomial by hand).
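For the curious, here’s a toy Python sketch of that split (the function names are mine): reading coefficients off Pascal’s Triangle is the shortcut; multiplying (a + b) out term by term is the shown work.

```python
from math import comb

def pascal_row(n: int) -> list[int]:
    """The shortcut: read the binomial coefficients straight off row n."""
    return [comb(n, k) for k in range(n + 1)]

def expand_coeffs(n: int) -> list[int]:
    """The shown work: multiply (a + b) out n times, tracking coefficients."""
    coeffs = [1]
    for _ in range(n):
        coeffs = [x + y for x, y in zip([0] + coeffs, coeffs + [0])]
    return coeffs

# Both routes land on the same answer for (a + b)^4.
assert pascal_row(4) == expand_coeffs(4) == [1, 4, 6, 4, 1]
```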
It might not seem like a shortcut to us, but something about this LLM’s training makes heuristics easier for it to reach for. It’s actually a pretty big deal for a machine to choose fuzzy heuristics over an algorithm when it knows the teacher wants it to use the algorithm.
Yeah, but this reminds me of a line from Game of Thrones:
“If you’re a famous smuggler, you’re not doing it right.”
(Pixel Only) https://grapheneos.org/
(Stripped-down Android, Pixel Only) https://pixelbuilds.org/