

This is, unironically, a technique for catching LLM errors and also for speeding up generation.
For example, speculative decoding and mixture-of-experts architectures use setups like this.
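The speculative decoding idea mentioned above can be sketched in a few lines. This is a toy illustration, not a real implementation: both "models" below are hypothetical stand-in functions, with the cheap draft model deliberately given a flaw so the target model has something to catch.

```python
def target_next(prefix):
    # Expensive "target" model: a deterministic toy rule for the next token.
    return prefix[-1] * 2 % 97

def draft_next(prefix):
    # Cheap "draft" model: agrees with the target most of the time, but is
    # deliberately wrong whenever the last token is divisible by 5 (toy flaw).
    t = prefix[-1] * 2 % 97
    return t + 1 if prefix[-1] % 5 == 0 else t

def speculative_decode(prefix, n_tokens, k=4):
    out = list(prefix)
    while len(out) - len(prefix) < n_tokens:
        # Draft proposes k tokens autoregressively (cheap to run).
        proposal, ctx = [], list(out)
        for _ in range(k):
            tok = draft_next(ctx)
            proposal.append(tok)
            ctx.append(tok)
        # Target verifies the proposal: keep the longest agreeing prefix,
        # and substitute the target's own token at the first mismatch.
        for tok in proposal:
            correct = target_next(out)
            out.append(correct)      # the target's token is always kept
            if tok != correct:       # mismatch: discard the rest of the draft
                break
        out = out[:len(prefix) + n_tokens]
    return out

print(speculative_decode([3], 8))  # → [3, 6, 12, 24, 48, 96, 95, 93, 89]
```

Because the target model checks every position, the final sequence is identical to decoding with the target alone; the speedup comes from verifying several draft tokens per expensive call, and draft mistakes are caught and corrected for free.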
Fuck the kids, their piece of shit parents can pull their fucking weight if they have a problem
?!!? Before genAI it was hired human manipulators. Your argument doesn't hold. We can't call Edison a witch and go back to the caves just because new tech creates new threat landscapes.
Humanity adapts to survive and survives to adapt. We’ll figure some shit out
People love to hate on Mozilla without knowing shit. Some of it is literally 4Chan grade manipulation as well.
Like the whole ToS debacle. People just aren't interested in truth, just rage 24/7.
I know people like to hate on Google, but Google is actually like three companies in a trench coat.
They do highly valuable open source / open ecosystem work (I'd say the chance of you indirectly using a Google tool without knowing it is over 90% now), and if the American government, a capitalist fascist government no less, gets its hands on it, we're fucked.
Not all of Google is AdSense or YouTube.
I’ve seen this stupidity all over lemmy. It’s like, people “group psychology” -ed this thought into the central culture of lemmy and refuse to budge.
It also doesn't help that capitalists are using AI to take people's jobs, and that a misunderstanding of how image diffusion works has led artists to hate AI too.
Nobody likes to fucking listen. People like to be smug.
Oh fucking -please-
This place is genuinely more insufferable than Reddit. That is actually an achievement
Whatever dude, writhe in your own ignorance
Mozilla works mainly on LOCAL AI, not corporate trash like ClosedAI.
I don’t fucking understand why Lemmy is permanently stuck in 2023 with AI
Using RAG (retrieval-augmented generation) results in much lower, almost negligible, confabulation rates.
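The RAG point above is easy to show in miniature: instead of asking the model to answer from its weights alone, you first retrieve the most relevant documents and answer *from* them, which is why confabulation drops. This sketch is a toy under stated assumptions: retrieval is plain keyword overlap, and the "generation" step just returns the supporting document; a real system would use embeddings and an actual LLM.

```python
# Toy corpus standing in for a real document store.
DOCS = [
    "Firefox is developed by Mozilla.",
    "llama.cpp runs GGUF models locally on CPU and GPU.",
    "Speculative decoding speeds up LLM inference.",
]

def retrieve(query, docs, top_k=1):
    # Score each document by how many query words it contains (toy scoring;
    # real RAG would use vector similarity over embeddings).
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def answer(query):
    # A real pipeline would now prompt an LLM with the retrieved context as
    # grounding; here we return the supporting document as the "answer".
    context = retrieve(query, DOCS)
    return context[0]

print(answer("who develops Firefox"))  # → "Firefox is developed by Mozilla."
```

The key property is that the answer is tied to a retrieved source the user can check, rather than to whatever the model half-remembers from training.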
Guys PLEASE PLEASE PLEASE, download it all. The only way to preserve information is to copy it until at least one copy survives.
I am just a poor fucking Iranian with shitass internet and no money to buy a NAS, but I'll try to hoard some part of it as much as I can.
Download "LM Studio" and you can download models and run them through it.
I recommend something like an older Mistral model (a FOSS model) for beginners, then move on to Mistral Small 24B, QwQ 32B, and the like.
First, please answer: do you want everything FOSS, or are you OK with a little bit of proprietary code? Because we can do both.
Fuck ClosedAI
I want everyone here to download an inference engine (use llama.cpp) and get on open-source, open-data AI RIGHT NOW!
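For anyone who wants to act on this, a minimal sketch of getting llama.cpp running locally looks roughly like the following. The model path is hypothetical (any GGUF file you've downloaded works), and build details can vary by platform, so treat this as a starting point rather than exact instructions.

```shell
# Clone and build llama.cpp (CMake is the supported build method).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Run a local GGUF model (path is a placeholder for your own download):
# -m = model file, -p = prompt, -n = number of tokens to generate.
./build/bin/llama-cli -m ~/models/mistral-7b-instruct.gguf -p "Hello" -n 64
```

Everything stays on your machine: the model file, the prompt, and the output never touch a remote server.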
SCMP is one of the most libshit news outlets in China.
Also, chill it cracker, it’s news about a reactor, we don’t need your state dept. programming here
We got baited by piece of shit journos
It’s a local model. It doesn’t send data over.
If they want call data they can buy it straight from service providers anyway