I don’t input anything that I care about them spying on. As for brain rot, we’re on social media, aren’t we?
I don’t input anything that I care about them spying on. As for brain rot, we’re on social media, aren’t we?
If you’re using one of the LLMs to search, you can just twirl out the source list and click through to the pages it found relevant to your prompt. In my experience, it pulls a better list of links than DDG or Google. You can just ignore the summary it writes.
industry associations and individual businesses […] argued the FTC had failed to follow correct procedures and conduct an analysis before issuing the rule. The judge panel has agreed with them.
Three judges — two appointed by President Trump, one by President George H. W. Bush — found that the FTC’s rulemaking process was flawed and did not include early analysis of the rule’s possible economic effects. [1]
“the law”
I dunno about advice, but LLMs are very good at re-stating my meandering thoughts in a concise way that’s easy to communicate to others.
Politicians attempting to ban AI regulation are only shielding OpenAI, Microsoft, Google, etc. Those same politicians will pearl clutch about Hugging Face or DeepSeek in the same breath. It’s just about money and the arms race. Abdicating responsibility to AI would require either a change in how we regulate people or a granting of rights/responsibility to AI. Though I don’t doubt we will see people try to blame AI for their poor decisions.
That was in Anthropic’s system card for Claude 4, and the headlines/articles largely missed the point. Regarding the blackmail scenario, the paper even says:
… these extreme actions were rare and difficult to elicit, while nonetheless being more common than in earlier models. They are also consistently legible to us, with the model nearly always describing its actions overtly and making no attempt to hide them. These behaviors do not appear to reflect a tendency that is present in ordinary contexts.
They’re testing alignment hacking and jail-breaking tactics in general to see how the models respond. But the greater concern is that a model will understand as part of the context that it is being tested and behave differently in testing than in deployment. This has already been an issue.
In the initial implementations of reasoning models, if an LLM was penalized directly for this kind of misaligned generation in its “scratch pad,” it would not alter its misaligned response - rather it would simply omit the misaligned generation from the scratch pad. In other words, the model’s actions were no longer consistently legible.
Wow, you pivot a lot. The power consumption of data centers as a whole in the US was ~5% of total in 2024. But they are definitely guzzling water, no doubt about that. It’d be nice if we still had environmental regulatory agencies with teeth to force better cooling methods. Doug Forcett comes to mind.