But what if my favorite coworkers didn’t actually view me as their favorite coworker? Maybe to them I was just meh. Would they even come to my reunion, since they wouldn’t consider me to belong at theirs?
I mean, a certain mutuality is implied.
That was my takeaway as well. With the added bonus of having your echo chamber tailor-made for you, all the agreeing voices tuned to your personality and saying exactly what you need to hear to maximize the effect.
It’s eerie. A propaganda machine operating at maximum efficiency. Goebbels would be jealous.
>goes to sleep
>dreams of being at work
Yeah, from the article:
Even sycophancy itself has been a problem in AI for “a long time,” says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts. What’s likely happening with those experiencing ecstatic visions through ChatGPT and other models, he speculates, “is that people with existing tendencies toward experiencing various psychological issues,” including what might be recognized as grandiose delusions in a clinical sense, “now have an always-on, human-level conversational partner with whom to co-experience their delusions.”
Turns out AI is really good at telling people what they want to hear, and with all the personal information users voluntarily provide while chatting with their bots, it’s tens or maybe hundreds of times more proficient at brainwashing its subjects than any human cult leader could ever hope to be.
I have, and I find it pretty convincing.