

Ah yes, one of my favourite quotes by Orreleeise: “Overcomine challenges and oeeence ine teisge and rivively renence verover re rescience”
I’ve never heard of Macs running embedded systems - I think that would be a pretty crazy waste of money - but Mac OS Server was a thing for years. My college campus was all Mac in the G4 iMac days, running Mac OS Server to administer the network. As far as I understand it was really solid and capable, but I guess it didn’t really fit Apple’s focus as their market moved from industry professionals to consumers, and they killed it.
🛼 Yeah, RISC is good ⚗️🔥
Oh ouch. Haven’t experienced that.
This used to happen to me regularly with a Dell panel. It would turn anything white pink. I found creating a custom colour profile and playing around with it until the whites were white again solved it. Then occasionally it would decide to revert to the default colour profile for no reason.
Stupidly frustrating, but I’m passing on the tip in case it helps.
If it’s trained on the average Reddit reply: $420.69, nice.
Any platform has vulnerabilities to exploit to some degree. But this article is about piggybacking on the Find My network to transmit data without actually compromising the network. It’s a clever technique, and worth reading more than the headline.
I’ve previously argued that current-gen “AI” built on transformers is just fancy predictive text, but as I’ve watched the models continue to grow in complexity it does seem like something emergent that could be described as a type of intelligence is happening.
These current transformer models don’t possess any concept of truth and, as far as I understand it, that is fundamental to their nature. That makes their application far more limited than the hype train suggests, but that shouldn’t undermine quite how incredible they are at what they can do. A big enough statistical graph holds an unimaginably complex conceptual space.
They feel like a dream state intelligence - a freewheeling conceptual synthesis, where locally the concepts are consistent, while globally rules and logic are as flexible as they need to be to make everything make sense.
Some of the latest image and video transformers, in particular, are just mind-blowing in a way that I think either deserves to be credited with a level of intelligence, or should make us question more deeply what we mean by intelligence.
I find dreams to be a fascinating place. It often excites people to think that animals also dream, and I find it just as exciting that code running on silicon might be starting to share some of that nature of free-association conceptual generation.
Are we near AGI? Maybe. I don’t think that a transformer model is about to spring into awareness, but maybe we’re only a few breakthroughs away from a technology which will pull all these pieces of domain-specific AI together into a working general intelligence.