

This is the current problem with “misalignment”. It’s a real issue, but it’s not “AI lying to prevent itself from being shut off”, as a lot of articles anthropomorphize it. The issue is (generally speaking) that the model is trained to maximize a numerical reward by providing responses people find satisfactory. A legion of tech CEOs are flogging the algorithm to do just that, and as we all know, most people don’t actually want to hear the truth. They want to hear what they want to hear.
LLMs are a poor stand-in for actual AI, but they are at least proficient at the actual thing they are doing. Which leads us to things like this: https://www.youtube.com/watch?v=zKCynxiV_8I
You would hope, but this is the same thing we see across almost all industries these days. It’s almost like there’s a root cause for it, some sort of, Iunno, economic system we could blame …
Cable companies are an especially good example. Has a dwindling customer base caused them to rethink their business strategies? Or has it caused them to try and bleed that dwindling base dry even faster?
There’s no “learning” anymore. There’s riding the bus to the absolute pits of hell and just hoping you’re not the CEO who has to go down with it.