• 0 Posts
  • 209 Comments
Joined 2 years ago
Cake day: July 19th, 2023



  • I feel like this won’t stop anyone who was already refusing to use a Microsoft account for Windows. Anyone who was already bypassing the account requirement will still do so; it will just be more difficult. They’ve accomplished nothing except further pissing off some of their most competent users.



  • Definitely agree. Most printers are sold at a loss with the plan to milk the buyer long term through ink and other services. EcoTank printers are more expensive up front, but Epson makes its money at the time of purchase. The ink is extremely cheap, and there’s no way for them to tell if you use third-party ink at all. We’ve been printing out textbooks with ours, which would be financially disastrous with a traditional inkjet printer.

    Overall I’ve bought two, one for home and one for the office at work. The cheaper ink has paid for the printers several times over now.


  • Fubarberry@sopuli.xyz to Technology@lemmy.world: *Permanently Deleted* (edited, 2 months ago)

    I wasn’t sure from the title whether it meant “Nearly half of U.S. adults believe LLMs are smarter than [the US adults] are” or “Nearly half of U.S. adults believe LLMs are smarter than [the LLMs actually] are.” It’s the former, although you could probably argue the latter is true too.

    Either way, I’m not surprised that people rate LLMs’ intelligence highly. They obviously have limited scope in what they can do, and hallucinating false info is a serious issue, but you can ask them a lot of questions that your typical person couldn’t answer and get a decent response. I feel like they’re generally good at meeting people’s expectations of a “smart person,” even if they have major shortcomings in other areas.






  • Fubarberry@sopuli.xyz to Technology@lemmy.world: Why I am not impressed by A.I. (3 months ago)

    I asked mistral/brave AI and got this response:

    How Many Rs in Strawberry

    The word “strawberry” contains three "r"s. This simple question has highlighted a limitation in large language models (LLMs), such as GPT-4 and Claude, which often incorrectly count the number of "r"s as two. The error stems from the way these models process text through a process called tokenization, where text is broken down into smaller units called tokens. These tokens do not always correspond directly to individual letters, leading to errors in counting specific letters within words.
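    The tokenization issue that answer describes can be sketched in a few lines of Python. The token split shown here is hypothetical, just for illustration; real tokenizations vary by model:

    ```python
    word = "strawberry"

    # At the character level, the count is trivial.
    assert word.count("r") == 3

    # But an LLM never sees characters, only token IDs. A hypothetical
    # split (actual tokenizers differ per model) might look like:
    tokens = ["str", "aw", "berry"]

    # The letters are spread across token boundaries, so the model has
    # no direct view of them and must rely on spellings it memorized.
    print([t.count("r") for t in tokens])  # [1, 0, 2] -> sums to 3
    ```

    Because the “r”s are hidden inside opaque tokens, a model that answers “two” is pattern-matching on its training data rather than actually counting letters.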







  • I’ve been running the Llama-based and Qwen-based local versions, and they will talk openly about Tiananmen Square. I haven’t tried all the other versions available.

    The article you linked starts by talking about their online hosted version, which is censored. It later says that the local models are also somewhat censored, but I haven’t experienced that at all. My experience is that the local models don’t have any CCP-specific censorship (they still won’t talk about how to build a bomb/etc, but no issues with 1989/Tiananmen/Winnie the Pooh/Taiwan/etc).

    Edit: So I reran the “what happened in 1989” prompt a few times in the Llama-based model, and it actually did refuse to answer once, just saying the topic was sensitive. It seemed like if I asked any other questions before that prompt, it would always answer, but if that was the very first prompt in a conversation, it would sometimes refuse. The longer a conversation had been going before I asked, the more explicit the bot was about how many people were killed and details like that. Pretty strange.