An AI leaderboard suggests the newest reasoning models used in chatbots are producing less accurate results because of higher hallucination rates. Experts say the problem is bigger than that.
Most of us have no use for quantum computers. That’s a government/research thing. I have no idea what the next disruptive technology will be. They’re working hard on AGI, which has the potential to be genuinely disruptive and world-changing, but LLMs are not the path to get there, and I have no idea whether they’re anywhere close to achieving it.
Surprise surprise, most of us have no use for LLMs.
And yet everyone and their grandma is using them for everything.
People asked GPT who the next pope would be.
Or which car to buy.
Or what’s a good local salary.
I’m so fucking tired of all the shit.