Sure, but the location is only shared from your phone to the server when you call the emergency line so it’s not really relevant. It’s not continuously streaming the location data at all times.
This would only work if there’s one cell tower within range of the phone, which is unlikely. It also wouldn’t help much because you already can’t triangulate with just one tower.
Wikipedia says that enhanced 911 (e911) on cell phones can send location data, as collected by your phone, to a server which emergency operators can access. They can also use cell tower triangulation, but it appears that they don’t have to. And I’m pretty sure modern iPhones support e911.
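On the triangulation point: a single tower’s range measurement only constrains the phone to a circle of possible positions, while three towers pin down a point. A toy multilateration sketch (tower positions and distances are entirely made up for illustration):

```python
import numpy as np

# Hypothetical tower positions (km) and a hypothetical phone position.
towers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
# "Measured" distances from each tower to the phone.
dists = np.linalg.norm(towers - true_pos, axis=1)

# Subtracting the first circle equation from the other two turns the
# circle-intersection problem into a 2x2 linear system in (x, y).
A = 2 * (towers[1:] - towers[0])
b = (dists[0] ** 2 - dists[1:] ** 2
     + np.sum(towers[1:] ** 2, axis=1) - np.sum(towers[0] ** 2))
est = np.linalg.solve(A, b)
print(est)  # recovers [3. 4.]
```

With only one tower there is one equation and two unknowns, so the system above can’t even be formed; that’s why a lone tower can’t locate you on its own.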
NGram@piefed.cato
Linux@lemmy.ml • Apple’s M3 Silicon Surrenders to Linux: A Technical Milestone for Open Source · English
12 · 1 month ago
This reads like LLM slop.
According to technical reports from Phoronix, the milestone was reached by Alyssa Rosenzweig, a key figure in the graphics driver development for the Asahi Linux project.
The linked Phoronix article (published yesterday) credits Michael Reeves, noopwafel, and Shiz and does not mention Alyssa Rosenzweig at all.
The speed at which the M3 was tamed—booting into a KDE Plasma desktop environment so soon after the hardware’s retail release—
The M3 is two generations old at this point…
Booting a kernel is one thing; rendering a fluid graphical user interface is entirely another. The M3 achievement is particularly notable because it involves the GPU, historically the most obfuscated component of any System on Chip (SoC).
Again, the Phoronix article (and its linked Xwitter post) completely contradict this, saying instead the rendering is done with “LLVMpipe CPU-based software acceleration”. The GPU is only involved insofar as is necessary to send data to the display.
This article is misinformation, which is against this community’s rules.
The main post is already badly downvoted so I probably shouldn’t even bother to engage, but this whole article is actually just showing a lack of knowledge on the subject. So here goes nothing:
Corporations have been running algorithms for decades.
Millennia*. We can run algorithms without computers, so the first algorithm was run way earlier than decades ago. And corporations certainly were invented before the last century.
Markets weren’t inefficient because technology didn’t exist to make them efficient. Markets were asymmetrically efficient on purpose. One side had computational power. The other side had a browser and maybe some browser tabs open for comparison shopping.
I suppose the author has never used all of those price-watching websites that existed before 2022. I also question how they think a price optimization algorithm is useful to a person who is trying to buy, not sell, something.
Consider what it took to use business intelligence software in 2015. […] Language models collapsed that overhead to nearly zero. You don’t need to learn a query language. You don’t need to structure your data. You don’t need to know the right technical terms. You just describe what you want in plain English. The interface became conversation.
You still need to structure your data, because the LLM needs to understand that structure. In fact, it is still easy enough to cause an LLM to misinterpret data that having inconsistently-structured data is just asking for problems… not that LLMs are consistent anyway. The existence of the idea of prompt engineering means that the interface isn’t just conversation.
The moment ChatGPT became public, people started using it to avoid work they hated. Not important work. Not meaningful work. The bureaucratic compliance tasks that filled their days without adding value to anything.
Oh ok better just stop worrying about that compliance paperwork because the author says it’s worthless. Just dump that crude oil directly on top of the nice ducks, no point in even trying to only spill it into their pond.
Compliance tasks are actually the most important part of work. They are what guarantee your work has worth. Otherwise you’re just an LLM – sometimes producing ok results but always wasting resources.
People weren’t using ChatGPT to think. They were using it to stop pretending that performance reviews, status update emails, and quarterly reports required thought.
Basically, users used it to create the layer of communication that existed to satisfy organizational requirements rather than to advance any actual goal.
Once again with the poor examples of things. If you can’t give a thoughtful performance review for the people who work below you, you’re just horrible at your job. Performance reviews aren’t just crunching some numbers and giving people a gold star. I’m sure sometime in the future I could pipe in all of the quick chats I’ve had with coworkers in the office and tell an LLM to consider them when generating a review, but that’s not possible today. So no, performance reviews do actually require thought. Status emails and quarterly reports can be basically summaries of existing data, so maybe they don’t require much thought, but they still require some. This is demonstrated by the amount of clearly LLM-generated content that has become infamous at this point for containing inaccurate info. LLMs can’t think, but a thinking human could’ve reviewed that output and stopped that content from ever reaching anyone else.
This is very much giving me the impression the author doesn’t like telling others what they’re doing. They’d rather work alone and without interruption. I worry that they don’t work well in teams since they lack the willingness to communicate with their peers. Maybe one day they’ll realize that their peers can do work too and even help them.
You want the cheapest milk within ten miles? You can build that.
The first search result for “grocery price tracker” that I found is a local tracker started in 2022, before LLMs.
You want to track price changes across every retailer in your area? You can do that now
From searching “<country> price tracker”, I found Camel^3 which is famous for Amazon tracking and another country-specific one which has a ToS last updated in 2018. The author is describing things that could already be accomplished with a search engine.
You want something to read every clause of your insurance policy and identify the loopholes?
Lmao DO NOT use an LLM for this. They are not reliable enough for this.
You want an agent that will spend forty hours fighting a medical billing error that you’d normally just pay because fighting it would cost more in time than the bill? You can have that.
You know what? I take it all back, this is definitely proving Dystopia Inc. But seriously, that is a temporary solution to a permanent problem. Never settle for that. The real solution here is to task the LLM with sending messages to every politician and lobbyist telling them to improve the system they make for you.
The marginal cost of algorithmic labor has effectively collapsed. Using a GPT-5.2–class model, pricing is on the order of $0.25 per million input tokens and about $2.00 per million output tokens. A token is roughly three-quarters of a word, which means one million tokens equals about 750,000 words. Even assuming a blended input/output cost of roughly $1.50 per million tokens, you can process 750,000 words for about $1.50. War and Peace is approximately 587,000 words, meaning you can run an AI across one of the longest novels ever written for around a dollar. That’s not intelligence becoming cheaper. That’s the marginal cost of cognitive labor approaching zero.
Never mind the irony of calling computers doing work “algorithmic labour”, this is just nonsense. Of course things built entirely on free labour are going to be monetarily cheap. Also, feeding War and Peace into an LLM as input tokens is not the same as training the LLM on it.
We are seeing the actual cost of LLM usage unfold and you’d have to be willingly ignoring it to think it was strictly monetary. The social and environmental impact is devastating. But since the original article cites literally none of its claims, I won’t bother either.
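For what it’s worth, the quoted back-of-envelope math does at least reproduce if you take the article’s own prices and its ~0.75 words-per-token ratio at face value (a quick sanity-check sketch, not an endorsement of the framing):

```python
# Reproducing the article's arithmetic with the article's own assumptions.
WORDS_PER_TOKEN = 0.75            # "a token is roughly three-quarters of a word"
BLENDED_USD_PER_M_TOKENS = 1.50   # the article's blended input/output rate
WAR_AND_PEACE_WORDS = 587_000     # the article's word count for the novel

tokens = WAR_AND_PEACE_WORDS / WORDS_PER_TOKEN
cost = tokens / 1_000_000 * BLENDED_USD_PER_M_TOKENS
print(f"~{tokens:,.0f} tokens, ~${cost:.2f}")  # ~782,667 tokens, ~$1.17
```

About $1.17, so “around a dollar” holds arithmetically; the objection is to what that number is claimed to mean, not to the multiplication.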
Institutions built their advantages on exhaustion tactics. They had more time, more money, and more stamina than you did. They could bury you in paperwork. They could drag out disputes. They could wait you out. That strategy assumed you had finite patience and finite resources. It assumed you’d eventually give up because you had other things to do.
An AI assistant breaks that assumption.
No, it doesn’t, unless you somehow also assume that LLMs won’t also be used against you. And you’d have to actually be dumb or have an agenda that required you to act dumb to assume that.
Usage numbers tell the story clearly. ChatGPT reached 100 million monthly active users in two months. That made it the fastest-growing consumer application in history. TikTok took nine months to hit 100 million users. Instagram took two and a half years. The demand was obviously already there. People were apparently just waiting for something like this to exist.
Here’s a handy little graph to show how the author is wrong: Time to 100M users. I’m sorry, I broke my promise about not citing anything. Notice how all of the time spans for internet applications trend downwards as time increases. TikTok took 9 months, and that was 7 years before ChatGPT was released. I bet the next viral app will be even faster than ChatGPT. That’s not an indicator of demand, that’s an indicator of internet accessibility. (I’m ignoring Threads because they automatically created 100M users from their Instagram accounts in 5 days, which is a measure of their database migration capabilities and nothing else.)
Venture capital funding for generative AI companies reached $25.2 billion in 2023 according to PitchBook data. That was up from $4.5 billion in 2022. Investment wasn’t going into making better algorithms. It was going into making those algorithms accessible.
I’m sorry, what? LLMs are an algorithm. Author clearly does not know what they are talking about.
DoNotPay, an AI-powered consumer advocacy service, claimed to help users fight more than 200,000 parking tickets before the company pivoted to other services. LegalZoom reported that AI-assisted document preparation reduced the time required to create basic legal documents by 60% in 2023.
I thought LLMs were supposed to be some magic interface for individuals. The author is describing institutions. You know, the thing the author started out bashing for controlling all the algorithms and using them against the common folk who didn’t have those algorithms. This is exactly the same thing, just replace algorithm with AI.
The credential barrier still exists. You can’t get a prescription from ChatGPT. The legal liability still flows through licensed professionals. The system still requires human gatekeepers. The question is how long those requirements survive when the public realizes they’re paying $200 for a consultation that an AI handles better for pennies.
Indeed, that will be an interesting thing to see once AI can actually handle it better and for cheaper. Though I wouldn’t count on it anytime soon. Don’t forget the AI at that stage will still have to compensate the human doctors who wrote the data it was trained on.
Oh, I just about hit the character limit. I guess I’ll stop there.
Remember folks, don’t let your LLM write an article arguing for replacing everyone with LLMs. All it proves is that you can be replaced by an LLM. Maybe focus on some human pursuits instead.
NGram@piefed.cato
Technology@lemmy.world • How One Uncaught Rust Exception Took Out Cloudflare · English
26 · 4 months ago
No, the article is just not very precise with its words. It was causing the program to panic.
NGram@piefed.cato
Selfhosted@lemmy.world • GlitchTip 5.2 with design refresh and less system requirements · English
4 · 4 months ago
GlitchTip makes monitoring software easy. Track errors, monitor performance, and check site uptime all in one place. Our app is compatible with Sentry client SDKs, but easier to run.
For those that have no idea what GlitchTip is, it’s an error-tracking service like Sentry.
NGram@piefed.cato
Technology@lemmy.world • On January 1st of 2026, Texas will be required to give ID to download apps from the app stores. It doesn't matter if it's NSFW or not. · English
18 · 5 months ago
They do use “handheld” and never define it, but I can hold my laptop with my hand, so I’m not sure that’s necessarily a good way of disqualifying laptops. That also seems to strictly apply to the operating system (“runs an operating system designed […] for software applications on handheld electronic devices”), which might be a fun legal quagmire as well, since Linux is designed for all sorts of platforms. If I install Linux on my (formerly) Windows laptop, does it suddenly become a mobile device?
It does bring up another interesting niche of computers: handheld PCs, especially handheld gaming PCs. Does this law apply to Steam Decks?
This whole thing screams “written by tech illiterates” since it seems to ignore regular computers and only focus on phones when it’s all just variations of the same thing – form factor and the software running on top isn’t very relevant to whatever goal I presume they’re trying to achieve. If they really want to collect everyone’s ID, age, and other privacy-violating information they’d be better off doing it everywhere. But maybe I shouldn’t give out advice for speed running fascism…
NGram@piefed.cato
Technology@lemmy.world • On January 1st of 2026, Texas will be required to give ID to download apps from the app stores. It doesn't matter if it's NSFW or not. · English
10 · 5 months ago
That was my interpretation too, except not restricted to “modern” websites. It sounds more like any website, modern or not, JS or not.
The part that is funny in that situation is that probably means web browsers are considered “app stores”. From a technical standpoint that’s actually pretty accurate (though they also handle running the “app”, unlike a regular app store), but has the fun consequence of making web browsers also “app store stores”. Most browsers can be used without an account though, so I look forward to the dumb antics companies with large legal departments come up with for this one.
NGram@piefed.cato
Technology@lemmy.world • On January 1st of 2026, Texas will be required to give ID to download apps from the app stores. It doesn't matter if it's NSFW or not. · English
911 · 5 months ago
I’ve got no clue about legal documents, especially how they work in Texas, but this seems weirdly broad and with a pretty glaring loophole.
The weirdly broad part:
(2) “App store” means a publicly available Internet website, software application, or other electronic service that distributes software applications from the owner or developer of a software application to the user of a mobile device.
This sounds like any website suddenly becomes an app store as soon as it starts distributing software for a mobile device. So (ignoring my following point), if I post my new APK on my personal site, suddenly it’s an app store!? Also, aren’t websites software applications? That’ll be a fun one to fight out with browsers…
(4) “Mobile device” means a portable, wireless electronic device, including a tablet or smartphone, capable of transmitting, receiving, processing, and storing information wirelessly that runs an operating system designed to manage hardware resources and perform common services for software applications on handheld electronic devices.
This sounds like it includes laptops but not desktop computers.
The glaring loophole:
(a) When an individual in this state creates an account with an app store, the owner of the app store shall use a commercially reasonable method of verification to verify the individual’s age category under Subsection (b).
So if your app store does not require an account, you do not need to verify anyone’s age!? I’m all for it but that doesn’t seem to be in the spirit of the law. F-droid and my (example) personal-site-turned-app-store rejoice!
More features in preparation for full federation support! Exciting!
NGram@piefed.cato
Technology@lemmy.world • Nvidia sells tiny new computer that puts big AI on your desktop · English
3 · 5 months ago
Unfortunately Nvidia is also big tech, so starving out (sort of) competitors doesn’t help get rid of douchebags. It actually has the added risk of giving some of the douchebags a monopoly.
Buying one of those AMD Ryzen AI Max chips actually makes more sense now…
NGram@piefed.cato
Technology@beehaw.org • Microsoft blocks Israel’s use of its technology in mass surveillance of Palestinians · English
7 · 5 months ago
Apparently genocide is ok but mass surveillance isn’t, so they’ll continue to do business with Israel but just won’t let one unit use it.
Microsoft must completely stop all service to everyone in Israel. Anything less is still supporting genocide.
NGram@piefed.cato
Technology@lemmy.world • Exclusive: San Francisco scores long-term conference commitment from Visa · English
2 · 6 months ago
I hope the Moscone Center didn’t have to ban all LGBTQ+ and/or 18+ events to get Visa to commit
It’s never really been about upfront price so much as longevity. If you can avoid a laptop upgrade e.g. every 5 years by upgrading just a few components instead, it’ll last you longer and cost you less longterm.
Fundamentally, the cheapest way to build electronics is with very little modularity. Making parts swappable is more complicated to design and needs more components to be included. Both drive up the cost of the product.
No sweat if it’s too expensive or that’s not what you care about (ok, though you should sweat not caring about longevity), but making it all about the price is sort of missing the point. Capitalism is a tool for improving our lives but is not the only tool for that.
