That’s assuming the employees give enough of a shit to pass the feedback on to the owners, and that the owners give enough of a shit to listen.
Yeah, it’s better if you make it known why you’re not giving them your business, but if it doesn’t appreciably impact their revenue then most owners won’t care either way.
My phone struggled to load the site to order a single cold brew, pop-ups to install their custom app kept obscuring the options, and I had to register with my phone number, email address, and first and last name to buy a $5 cup of coffee.
Then walk out. Don’t reward the bullshit with your money. The coffee shop ain’t gonna give a shit if you keep buying coffee just to go home and complain on your blog.
Fortunately, most of my family is so tech illiterate that even if a real video got out, I could just tell them it’s a deepfake and they’d probably believe me.
There’s a lot of insight here, but I wonder if anyone will corroborate it. The author admits the app they worked on wasn’t nearly as big as the likes of Tinder and Hinge, so I’m not sure the overall patterns hold at that scale.
The review I linked quotes 5-8 W under load, so I’d expect roughly 10 hours on the Framework 13’s 55 Wh battery and ~15 hours on the Framework 16’s 85 Wh battery.
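For anyone who wants to sanity-check those runtime numbers, the back-of-the-napkin math (using the same 5-8 W load figure and battery capacities as above; real-world runtime obviously varies with idle states, screen brightness, etc.):

```python
# Runtime estimate: battery capacity (Wh) divided by average draw (W) gives hours.
# Uses the 5-8 W load figure from the linked review and the Framework battery sizes.
for name, capacity_wh in [("Framework 13 (55 Wh)", 55), ("Framework 16 (85 Wh)", 85)]:
    low, high = capacity_wh / 8, capacity_wh / 5
    print(f"{name}: roughly {low:.0f}-{high:.0f} hours")
```

The ~10 h and ~15 h estimates land comfortably inside those 7-11 h and 11-17 h ranges.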
But it also can’t play a 1080p YouTube video worth a damn so it’s hard to imagine what you’d actually wanna use it that long for.
It is absolutely more of a development board than one meant even for early-bird adopters. The processing power is more on-par with a Raspberry Pi. Here’s a review of another development board using the same processor: https://bret.dk/risc-v-starfive-visionfive-2-review-jh7110/#Geekbench-6
Compare the Geekbench 6 scores to the Ryzen 7040HS in the Framework 16: https://browser.geekbench.com/v6/cpu/4260192
As the review author explains, Geekbench 6 is a bit unfair to the JH7110 since it’s missing some processor extensions, but even if we pretended it had a similar lead over the Pi 4 as it does on the Unixbench suite, it’d still be an order of magnitude behind the AMD processor.
You’re not really gonna be gaming on this thing, and you might not have a great experience even with normal desktop productivity software. These boards are likely gonna be relegated mostly to compiling code and running tests.
If a future revision is a little more powerful though, it could maybe make for a decent netbook. At just $200 it could also be a pretty good value for the education sector, maybe as a dev board for systems programming courses.
If any store starts requiring a fucking app to make a purchase, that store has permanently lost my business.
You have not earned the privilege of being installed on my phone. Get the fuck out of here.
You also wouldn’t have paid to use Honey.
That’s my point? Nothing is ever truly free?
I pay $100/month for internet access.
Lemmy may be free to access, but certainly not free to host. Am I paying for it personally? No, but someone is.
You also don’t see Lemmy paying hundreds of YouTubers and influencers for ad spots.
The very first time I saw an ad for Honey I knew there had to be a catch. Nothing is ever free.
It wasn’t immediately obvious how they were going to make money, though. I figured they’d just gather and sell user data. I had completely forgotten about affiliate links. But they probably also sell your data for good measure.
Fish is a great shell, but whenever I SSH into another machine I end up having to do everything in Bash anyway. So the fact that Fish is so different often ends up being a detriment, because it means I have to remember how to do things in two different shells. It was easier to just standardize on Bash.
I might try daily driving it again when this release hits the stable repos, I dunno.
A Linux distro with a great OOTB experience for gamers would go a long way.
Seems Overstreet is just pissy that he can’t talk to people on the kernel mailing list like it’s 2005 anymore. “Get the fuck out of here with this shit,” indeed.
so they wanted to sell Itanium for servers, and keep the x86 for personal computers.
That’s still complacency. They assumed consumers would never want to run workloads capable of using more than 4 GiB of address space.
Sure, they’d already implemented Physical Address Extension, but that just let the OS address more physical memory (up to 64 GiB) by widening the page-table entries to hold bigger physical addresses. It didn’t increase the 4 GiB virtual address space available to each application.
An application didn’t necessarily need to use 4 GiB of RAM to hit that limit, either. Shared libraries, memory-mapped files, thread stacks, and various paging tricks all eat up the available address space without ever needing to be resident in RAM.
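To see that distinction for yourself, here’s a quick Linux-only sketch (my own illustration, not anything from the original discussion): it reserves a big anonymous mapping, and the process’s virtual size jumps immediately while resident memory only grows once a page is actually touched. In a 32-bit process the same pattern runs you out of usable address space long before you run out of RAM.

```python
# Linux-only demo: a mapping consumes virtual address space the moment it's
# created, but doesn't become resident in RAM until its pages are touched.
import mmap
import re

def vm_stats():
    """Return (VmSize, VmRSS) in KiB from /proc/self/status."""
    with open("/proc/self/status") as f:
        status = f.read()
    size = int(re.search(r"VmSize:\s+(\d+)", status).group(1))
    rss = int(re.search(r"VmRSS:\s+(\d+)", status).group(1))
    return size, rss

before = vm_stats()
region = mmap.mmap(-1, 1 << 30)    # reserve 1 GiB of address space, untouched
after_map = vm_stats()
region[:4096] = b"\x00" * 4096     # fault in a single page
after_touch = vm_stats()

print(f"after mmap:  VmSize +{after_map[0] - before[0]} KiB, VmRSS +{after_map[1] - before[1]} KiB")
print(f"after touch: VmRSS +{after_touch[1] - after_map[1]} KiB")
```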
Their last few generations of flagship GPUs have been pretty underwhelming but at least they existed. I’d been hoping for a while that they’d actually come up with something to give Nvidia’s xx80 Ti/xx90 a run for their money. I wasn’t really interested in switching teams just to be capped at the equivalent performance of a xx70 for $100-200 more.
This highlights really well the importance of competition. Lack of competition results in complacency and stagnation.
It’s also why I’m incredibly worried about AMD giving up on enthusiast graphics. I don’t have much hope for Intel Arc.
Problem is, AI companies think they could solve all the current problems with LLMs if they just had more data, so they buy or scrape it from everywhere they can.
That’s why you hear every day about yet more and more social media companies penning deals with OpenAI. That, and greed, is why Reddit started charging out the ass for API access and killed off third-party apps, because those same APIs could also be used to easily scrape data for LLMs. Why give that data away for free when you can charge a premium for it? Forcing more users onto the official, ad-monetized apps was just a bonus.
These models are nothing more than glorified autocomplete algorithms parroting the responses to questions that already existed in their input.
They’re completely incapable of critical thought or even basic reasoning. They only seem smart because people tend to ask the same stupid questions over and over.
If they receive an input that doesn’t have a strong correlation to their training, they just output whatever bullshit comes close, whether it’s true or not. Which makes them truly dangerous.
And I highly doubt that’ll ever be fixed because the brainrotten corporate middle-manager types that insist on implementing this shit won’t ever want their “state of the art AI chatbot” to answer a customer’s question with “sorry, I don’t know.”
I can’t wait for this stupid AI craze to eat its own tail.
Most likely written down somewhere. A seed phrase is the backup form of a crypto wallet’s private key: you’re supposed to put it somewhere safe so you can recover the wallet if the normal way to access it (a software app or a hardware device) fails.
Brute-forcing a full 12- or 24-word phrase would take an astronomically long time, far more than centuries (rough math below), so there’s only a few possibilities:
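Rough math on why brute force is off the table, assuming a standard BIP-39 phrase (2048-word list; the post doesn’t say which wallet format this actually was):

```python
# Search space for a full BIP-39 seed phrase: 12 words encode 128 bits of
# entropy (+4 checksum bits), 24 words encode 256 bits (+8 checksum bits).
# Even at an absurdly generous guess rate the timescale is astronomical.
GUESSES_PER_SECOND = 1e12          # wildly optimistic hardware assumption
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for words, entropy_bits in [(12, 128), (24, 256)]:
    years = 2 ** entropy_bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{words} words: ~2^{entropy_bits} valid phrases, ~{years:.1e} years to search")
```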
It’s clear they did not walk out.
(Emphasis mine.) This is from the very next paragraph after what I quoted.
You also clearly missed the point of my comment, which is that unless consumers start refusing to take this bullshit lying down, this stuff will be unavoidable in the future because there will be no other choices left.