An AI leaderboard suggests the newest reasoning models used in chatbots are producing less accurate results because of higher hallucination rates. Experts say the problem is bigger than that.
It’s a bit fucking expensive for a grammar tool.
I get that it gets exponentially more expensive for every last bit of grammar, and some languages have downright nonsensical rules.
But I wish it had some broader use that would justify the cost.
Yes, it is expensive. But most of that cost is not because of simple applications, like in my example with grammar tables. It’s because those models have been scaled up to a bazillion parameters and “trained” with a gorillabyte of scraped data, in the hopes they’ll magically reach sentience and stop telling you to put glue on pizza. The cost comes from meaning (semantics and pragmatics), not grammar.
Also, natural languages don’t really have nonsensical rules; sure, sometimes you see some weird stuff (like Italian genderbending plurals, or English question formation), but even those are procedural: “if X, do Y”. LLMs are actually rather good at reproducing those procedural rules from examples in the data.
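To make the “if X, do Y” point concrete, here’s a toy sketch in Python of the Italian gender-bending plural rule. The word list is hand-picked for illustration and nowhere near exhaustive:

```python
# Toy "if X, do Y" grammar rule: a few Italian masculine -o nouns
# take a feminine -a plural (il braccio -> le braccia).
# Hand-picked illustrative list; real Italian has many more cases
# (e.g. -io nouns) that this sketch deliberately ignores.
GENDER_BENDERS = {"braccio", "uovo", "dito", "lenzuolo"}

def pluralize(noun: str) -> str:
    if noun in GENDER_BENDERS:      # if X (noun is a known exception)...
        return noun[:-1] + "a"      # ...do Y (-o becomes -a, gender flips)
    if noun.endswith("o"):          # regular masculine: -o -> -i
        return noun[:-1] + "i"
    if noun.endswith("a"):          # regular feminine: -a -> -e
        return noun[:-1] + "e"
    return noun                     # -e nouns, loanwords: out of scope

print(pluralize("braccio"))  # braccia
print(pluralize("libro"))    # libri
print(pluralize("casa"))     # case
```

Weird as the exceptions look, they still reduce to a lookup plus a suffix swap, which is exactly the kind of pattern a model can pick up from examples.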
I wish they’d cut the costs down to match the current uses: small models for specific applications, dirt cheap in both training and running costs.
(In both our cases, it’s about matching cost to use.)
But that won’t happen, since the bubble was inflated on promises of gorillions in returns, and those have yet to materialize.
We are so fucking stupid, I hate this timeline.
I work in this field. In my company, we use smaller, specialized models all the time. Ignore the VC hype bubble.
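For a sense of scale, here’s a minimal sketch of what that looks like day to day: a small seq2seq model loaded through the Hugging Face pipeline API, doing exactly one job on a CPU. The model id below is a placeholder I made up, not a real checkpoint; you’d swap in whatever fine-tune fits your task.

```python
from transformers import pipeline

# A small, task-specific seq2seq model: hundreds of MB, not hundreds
# of GB, and it runs fine on a CPU. "my-org/t5-small-grammar-fixer"
# is a placeholder id for illustration, not a real checkpoint.
fix = pipeline("text2text-generation", model="my-org/t5-small-grammar-fixer")

result = fix("grammar: she go to school yesterday", max_new_tokens=32)
print(result[0]["generated_text"])
```

Training and serving something at that size costs a rounding error compared to the frontier models, which is the whole point.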
There are many interesting AI applications, LLM or otherwise, but I’m talking about the IT bubble, which has grown so big it’s consuming the industry. If it ever pops, the correction will not be pretty. For anyone.
I’ve evaded the BS so far, but it feels like I won’t be able to hide much longer. And it saddens me. I used to love IT :(