

Huh? What do you mean “if”? Such a PDF vulnerability literally did happen a few months ago; fixed in Firefox v.126: https://codeanlabs.com/blog/research/cve-2024-4367-arbitrary-js-execution-in-pdf-js/.
There’s no real need for pirate AI when better free alternatives exist.
There are plenty of open-source models, but they very much aren’t better, I’m afraid to say. Even if you have a powerful workstation GPU and can afford to run the serious 70B open-source models at low quantization, you’ll still get results significantly worse than the cutting-edge cloud models - both because the most advanced models are proprietary, and because they are big and would require hundreds of gigabytes of VRAM to run, which you can trivially rent from a cloud service but can’t easily get in your own PC.
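To put rough numbers on the VRAM point, here’s a back-of-envelope sketch in Python (weights only - real deployments also need memory for the KV cache and activations, so actual requirements are higher; 405B stands in for the largest open-weight models):

```python
# Rough VRAM needed just to hold a model's weights at a given quantization.
def weight_vram_gb(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB

for name, params in [("70B", 70e9), ("405B", 405e9)]:
    for bits in (16, 8, 4):
        print(f"{name} @ {bits}-bit: ~{weight_vram_gb(params, bits):.0f} GB")

# 70B @ 4-bit: ~35 GB - fits on one 48 GB workstation card.
# 405B @ 16-bit: ~810 GB - the "hundreds of gigabytes" territory.
```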
The same goes for image generation - compare results from proprietary services like Midjourney to the ones you can get with local models like SD3.5. I’ve seen some clever hacks in image generation workflows - for example, using image segmentation to detect a generated image’s face and hands, and then a secondary model to do a second pass over those regions to fix them up. But AFAIK, these are hacks that modern proprietary models don’t need, because they have gotten past those problems and just do faces and hands correctly the first time.
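A sketch of what such a second-pass workflow can look like, assuming the diffusers inpainting pipeline; `detect_regions()` is a hypothetical stand-in for whatever face/hand segmentation model you plug in:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

def detect_regions(image):
    """Hypothetical: return PIL mask images covering detected faces/hands."""
    raise NotImplementedError("plug in your segmentation model here")

def cleanup_pass(image, prompt="a detailed, well-formed face and hands"):
    # Re-generate only the masked regions, keeping the rest of the image.
    for mask in detect_regions(image):
        image = pipe(prompt=prompt, image=image, mask_image=mask).images[0]
    return image
```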
This isn’t to say that running transformers locally is always a bad idea; you can get great results this way - but people saying it’s better than the nonfree ones is mostly cope.
Incredibly weird that this thread was up for two days without anyone posting a link to the actual answer to OP’s question, which is g4f.
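For anyone unfamiliar, g4f (gpt4free) is a Python library that routes requests through various free providers. A minimal sketch based on the OpenAI-style client from its README - the exact API and which models actually work change constantly:

```python
from g4f.client import Client

client = Client()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # availability varies by provider and by day
    messages=[{"role": "user", "content": "Explain quines in one paragraph."}],
)
print(response.choices[0].message.content)
```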
I haven’t, actually, since I normally use an adblocker (and also don’t use that tracker). Looks like they’re all VPN advertisements right now, which is at least a somewhat non-mainstream ad segment.
Accounts are already mostly portable (you can easily export all your settings and import them into your new account), you just don’t retain posting history.
To retain that… I guess there could be a separate fediverse service that does nothing but register accounts that let you prove that several other fediverse accounts all belong to the same person, and then a PR could be made to Lemmy and the other platforms to honor those links when showing posting history. It’d be quite a messy system.
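Purely as a hypothetical sketch of what such an attestation could look like (none of this exists - the service, the fields, and the signing scheme are all made up for illustration):

```python
# A hypothetical link service signs a statement that several fediverse
# accounts share one owner; platforms verify before merging post history.
import json
from nacl.signing import SigningKey  # PyNaCl

service_key = SigningKey.generate()  # held by the hypothetical link service

attestation = json.dumps({
    "type": "AccountLink",
    "accounts": ["@user@old.instance.example", "@user@new.instance.example"],
    "issued": "2025-01-01T00:00:00Z",
}, sort_keys=True).encode()

signed = service_key.sign(attestation)
# A platform would check against the service's published verify key:
service_key.verify_key.verify(signed)  # raises BadSignatureError if forged
```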
The answer is obvious: we must forever be completely advertiser-unfriendly and absolutely unmarketable. With every piece of porn, every post on digital piracy, every swearword, we do our part to protect the fediverse’s independence.
From what I know, Element is a safer bet (similarly encrypted, but also decentralized), but Signal is the best of the messengers that don’t require any technical knowledge.
You should explain what “stuff” is “coming out”, then, instead of vagueposting.
Note that OpenAI’s original Whisper models are pretty slow; in my experience the distil-whisper project (via a tool like whisperx) is more than 10x faster.
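A minimal transcription sketch with whisperx, mirroring its README (the model name and options may have drifted since):

```python
import whisperx

device = "cuda"
model = whisperx.load_model("distil-large-v3", device, compute_type="float16")

audio = whisperx.load_audio("episode.mp3")  # path is illustrative
result = model.transcribe(audio, batch_size=16)
for seg in result["segments"]:
    print(f"[{seg['start']:7.1f}s] {seg['text']}")
```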
Really? This is the opposite of my experience with (distil-)whisper - I use it to generate subtitles for stuff like podcasts and was stunned at first by how high-quality the results are. I typically use distil-whisper/distil-large-v3, locally. Was it among the models you tried?
How’s Musk related to this one?
My point is just that nobody really thinks it should be a free-for-all.
Don’t make judgements about everybody based on one guy. I’m on an instance that doesn’t defederate lemmygrad or lemmy.ml, so I commonly see utterly insane tankie takes in popular, and of course also in various comments - and yet I don’t want those people to not have a platform. Because I trust just about no one to decide whether my opinions should be censored, and if that means also not censoring the opinions of people who I think are very wrong, I’m willing to take that trade.
I see. No, I don’t think I have any specific questions at this point.
Is there some feature comparison of Lemmy vs. Mbin vs. other Reddit-like platforms? There was some major reason why I didn’t like kbin, but I forgot what it was.
What’s so hilarious about it?
I’m very happy Servo exists but if they want, like, a working browser, it’s no wonder they chose Chromium.
For comparison, from a recent Servo blogpost: “Servo can now run Discord well enough to log in and read messages, though you can’t send messages yet. […] We now support enough of XPath to get htmx working.”.
Servo has been in development for 7+ years and it’s still not able to render the modern web. Maybe it never will be able to, since it’s impossible to build a new web browser.
I use Firefox (and forks) myself but wouldn’t donate to it. It’s like Wikipedia - a great project with a shitty parent company which’ll spend all of your donations on shit projects.
Every time there’s an AI hype cycle, the charlatans start accusing the naysayers of moving goalposts. Heck, that exact same thing was happening constantly during the Watson hype. Remember that? Or before that, the AlphaGo hype. Remember that?
Not really. As far as I can see the goalpost moving is just objectively happening.
But fundamentally you can’t make a machine think without understanding thought.
If “think” means anything coherent at all, then this is a factual claim. So what do you mean by it, then? Specifically: what event would have to happen for you to decide “oh shit, I was wrong, they sure did make a machine that could think”?
The fact that you don’t understand it doesn’t mean that nobody does.
I would say I do. It’s not that high of a bar - one only needs to play a bit of Nandgame to understand how logic gates can be combined to do arithmetic. Understanding how doped silicon can be used to make a logic gate is harder, but I’ve done a course on semiconductor physics and have an idea of how a field-effect transistor works.
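To illustrate, here’s the standard Nandgame-style construction in Python - a single NAND primitive composed into XOR and a half adder, the first step toward a full binary adder:

```python
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    # Classic 4-NAND XOR construction.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a: int, b: int) -> tuple[int, int]:
    # Adds two bits, returning (sum, carry); carry = AND = NOT(NAND).
    c = nand(a, b)
    return xor(a, b), nand(c, c)

assert half_adder(1, 1) == (0, 1)  # 1 + 1 = binary 10
```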
The way a calculator calculates is something that is very well understood by the people who designed it.
That’s exactly my point, though. If you zoom in deeper, a calculator’s microprocessor is itself composed of simpler and less capable components. There isn’t a specific magical property of logic gates, nor of silicon (or dopant) atoms, nor for that matter of elementary particles, that lets them do math - it’s by building a certain device out of them, one that composes their elementary interactions, that we make a tool for this. Whereas Searle seems to reject this idea entirely, and believes that humans being conscious implies you can zoom in to some purely physical or chemical property and claim that it produces the consciousness. Needless to say, I don’t think that’s true.
Is it possible that someday we’ll make machines that think? Perhaps. But I think we first need to really understand how the human brain works and what thought actually is. We know that it’s not doing math, or playing chess, or Go, or stringing words together, because we have machines that can do those things and it’s easy to test that they aren’t thinking.
That was a common and reasonable position in, say, 2010, but the problem is: I think almost nobody in 2010 would have claimed that the space of things you can make a program do without any extra understanding of thought included things like “write code” and “draw art” and “produce poetry”. Now that it has happened, it may be tempting to move the goalposts and declare them “not true thought”, but the fact that nobody predicted it in advance ought to bring to mind the idea that maybe that entire line of thought was flawed, actually. I think that clinging to this idea would require one to gradually discard all human activities as “not thought”.
it’s easy to test that they aren’t thinking.
And that’s us coming back around to the original line of argument - I don’t at all agree that it’s “easy to test” that even, say, modern LLMs “aren’t thinking”. Because the difference between the calculator example and an LLM is that in a calculator, we understand pretty much everything that happens and how arithmetic can be built out of the simpler parts, and so anyone suggesting that calculators need to be self-aware to do math would be wrong. But in a neural network, we have full understanding of the lowest layers of abstraction - how a single layer works, how activations are applied, how it can be trained to minimize a certain loss function via backpropagation - and no idea at all about how it works on a higher level. It’s not even that only experts understand it, it’s that nobody in the world understands how LLMs work under the hood, why they have the many specific weird behaviors they do. That’s concerning in many ways, but in particular I absolutely wouldn’t assume with little evidence that there’s no “self-awareness” going on. How would you know? It’s an enormous black box.
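To make the asymmetry concrete: the part we fully understand fits in a few lines - a single dense layer is just a matrix multiply, a bias, and a nonlinearity. What nobody understands is what billions of these, stacked and trained, collectively compute. A toy layer in numpy:

```python
import numpy as np

def layer(x: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    return np.maximum(0, W @ x + b)  # affine transform + ReLU activation

rng = np.random.default_rng(0)
x = rng.normal(size=8)                              # an 8-dimensional input
W, b = rng.normal(size=(4, 8)), rng.normal(size=4)  # learned parameters
print(layer(x, W, b))                               # a 4-dimensional output
```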
There’s this message pushed by the charlatans that we might create an emergent brain by feeding data into the right statistical training algorithm. They give mathematical structures misleading names like “neural networks” and let media hype and people’s propensity to anthropomorphize take over from there.
There’s certainly a lot of woo and scamming involved in modern AI (especially if one makes the mistake of reading Twitter), but I wouldn’t say the term “neural network” is at all confusing? I agree on the anthropomorphization though, it gets very weird. That said, I can’t help but notice that the way you phrased this message, it happens to be literally true. We know this because it already happened once. Evolution is just a particularly weird and long-running training algorithm and it eventually turned soup into humans, so clearly it’s possible.
Sure, in Firefox itself it wasn’t a severe vulnerability. It’s way worse on standalone PDF readers, though: