• 0 Posts
  • 344 Comments
Joined 1 year ago
Cake day: December 6th, 2024





  • Also to add to this, the life-cycle of a TV display is mismatched with the life-cycle of media-playing hardware, or of hardware for general computing: the latter needs to be updated more often in order to keep up with things like new video codecs (which for performance reasons are actually implemented in hardware) and, more generally, to be capable of running newer software with decent performance.

    I’ve actually had a separate media box for my TV for over a decade, and in my experience you go through 3 or 4 media boxes for every time you change TVs, partly because of new video codecs coming out and partly because the computing hardware in those things is usually low-end, so newer software won’t run as well on it. In fact I eventually settled on a generic Mini-PC with Linux and Kodi as my media box - which is pretty much the same to use in your living room as a dedicated media box, since you can get a wireless remote for it, so there’s no need for a keyboard or mouse to use it as a media player - and it doubles as a server in the background (remotely managed via ssh), something which wouldn’t be possible at all with computing hardware integrated into the TV.

    In summary, having the computing stuff separate from the TV is cheaper and less frustrating (you don’t need to endure slow software after a few years just because the hardware is part of an expensive TV you don’t want to throw out), and it gives you far more options to do whatever you want (let’s just say that if your network-connected media box gets enshittified, it’s pretty cheap to replace it, or you can even go the way I went and replace it with a system you fully control - see the little sketch below).
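    Just to illustrate what “a system you fully control” buys you, here’s a minimal sketch of driving such a box from any other machine on the network via Kodi’s JSON-RPC API. This assumes Kodi’s web server / remote control interface has been enabled in its settings; the host name, port and credentials are placeholders for whatever your own box uses:

```python
# Minimal sketch: poke Kodi's JSON-RPC API from another machine on the LAN.
# Assumes the Kodi web server is enabled (Settings -> Services -> Control);
# "mediabox.local", port 8080 and the credentials below are placeholders.
import requests

KODI_URL = "http://mediabox.local:8080/jsonrpc"
AUTH = ("kodi", "password")  # placeholder credentials

def kodi_call(method, params=None):
    """Send a single JSON-RPC request to Kodi and return the decoded reply."""
    payload = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        payload["params"] = params
    response = requests.post(KODI_URL, json=payload, auth=AUTH, timeout=5)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Pop a notification up on the TV screen...
    kodi_call("GUI.ShowNotification",
              {"title": "Hello", "message": "Sent from another machine"})
    # ...and toggle play/pause on whatever is currently playing, if anything.
    for player in kodi_call("Player.GetActivePlayers").get("result", []):
        kodi_call("Player.PlayPause", {"playerid": player["playerid"]})
```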


  • Only in “international university rankings” that essentially treat classes being given in the English language as about 1/3 of the score - or, as they call it, “easiness for international students”.

    Or in other words, “for international students” they’re one of the best in the World, to a large extent because all lessons are in English, so, all else being the same, universities in English-speaking countries will always come above universities in non-English-speaking countries simply because English is the main Lingua Franca at the moment.

    Also, a lot of the other quality metrics (such as the number of published papers) actually measure research proficiency rather than teaching quality, which, whilst relevant for post-grads, isn’t quite as relevant for most students.

    Whether MIT is the best in the World when measured from the point of view of the main student community it serves, rather than that of “international post-grad students”, is unclear.





  • AI isn’t at all reliable.

    Worse, it has a uniform distribution of failures across the domain of seriousness of consequences - i.e. it’s just as likely to make small mistakes with minuscule consequences as major mistakes with deadly consequences - which is worse than even the most junior of professionals.

    (This is why, for example, an LLM can advise a person with suicidal ideas to kill themselves)

    Then on top of this, it will simply not learn: if it makes a major, deadly mistake today and you try to correct it, it’s just as likely to make a major, deadly mistake tomorrow as it would be if you hadn’t tried to correct it. Even if you have access to adjust the model itself, correcting one kind of mistake just moves the problem around - it’s akin to trying to stop the tide on a beach with a sand wall: the only way to succeed is to build a sand wall along the whole beach, by which point it’s in practice not a beach anymore.

    You can compensate for this by having human oversight of the AI, but at that point you’re back to paying humans for the work being done: instead of just the cost of a human doing the work, you now have the cost of the AI doing the work plus the cost of a human checking the AI’s work. And the human has to check the entirety of that work, since problems can pop up anywhere and take any form and, worse, unlike a human the AI is not consistent, so its errors are unpredictable. On top of that, the AI will never improve, and it will never come up with the kinds of improvements that humans doing the same work discover over time to make later work, or other parts of the work, easier (i.e. how increased experience means you learn to do little things that make your own work and even the work of others easier). A toy comparison of the two failure profiles and cost structures is sketched below.
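    As a purely illustrative toy model of those two points (the numbers are made up; only the shape of the argument matters): assume mistakes land somewhere on a 1-10 severity scale, that the hypothetical AI’s mistakes are spread uniformly across that scale whilst a junior professional’s mistakes are heavily skewed towards the low-severity end, and that “oversight” means paying for both the AI and a reviewer who checks everything:

```python
# Toy model only: invented numbers to illustrate the argument above,
# not a measurement of any real system.
import random

random.seed(0)
SEVERITIES = list(range(1, 11))    # 1 = trivial consequence, 10 = catastrophic

def ai_error_severity():
    # Assumption: the AI's mistakes are spread uniformly across severities.
    return random.choice(SEVERITIES)

def junior_error_severity():
    # Assumption: a junior professional still errs, but almost always at the
    # low-severity end (the weights are invented, for illustration only).
    weights = [30, 25, 20, 10, 7, 4, 2, 1, 0.7, 0.3]
    return random.choices(SEVERITIES, weights=weights, k=1)[0]

N = 100_000
ai_catastrophes = sum(ai_error_severity() >= 9 for _ in range(N))
junior_catastrophes = sum(junior_error_severity() >= 9 for _ in range(N))
print(f"catastrophic mistakes per {N}: AI={ai_catastrophes}, junior={junior_catastrophes}")

# And the cost of "fixing" this with full human oversight (arbitrary units):
cost_human_alone = 100
cost_ai = 20
cost_reviewer_checking_everything = 60  # has to check all of it, per the above
print("human alone:", cost_human_alone)
print("AI + full human review:", cost_ai + cost_reviewer_checking_everything)
```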

    This seriously limits the use of AI to things where the consequences of failure can never be very bad (and if you also include businesses, “not very bad” includes things like “not significantly damaging client relations”, which is much broader than merely “not being life-threatening” - which is why, for example, lawyers using AI to produce legal documents are getting into trouble when the AI quotes made-up precedents). So that leaves mostly entertainment, plus situations where the AI alerts humans to a potential finding within a massive dataset: if the AI fails to spot something, it’s alright, and if it spots something that isn’t there, the subsequent human validation can dismiss it as a false positive. Examples would be face recognition in video streams for general surveillance, where humans watching those streams are just as likely or more likely to miss a face and an AI alert just results in a human checking it, or scientific research where one tries to find unknown relations in massive datasets.

    So AI is a nice new technological tool in a big toolbox, not a technological and business revolution justifying the stock market valuations built around it, the investment money sunk into it or the huge amount of resources (such as electricity) used by it.

    Specifically for Microsoft, there doesn’t really seem to be any area where MS’ core business value for customers gains from adding AI, in which case this “AI everywhere” strategy at Microsoft is an incredibly shit business choice that just burns money and damages brand value.




  • Well-informed people have known for quite a while that it wasn’t safe.

    Most people did not, most companies did not, and most public institutions either did not or could pretend they did not.

    That’s changing (as are lots of other things) because Trump is being far louder about Europe being an adversary of America than previous administrations were (it was true for the Democrats too, though only in business and trade terms).

    People in the know in Tech and IT Security in Europe fought quite hard against treating America as a safe haven for the data of Europeans, but we lost. Now, though, crooked politicians can’t pretend anymore that America or American companies are safe for the data of Europeans.


  • Europe just did a 180 on the commitment that no new ICE cars would be sold from 2035 onwards, under pressure from just a handful of big automakers.

    And when I say Europe, I actually mean crooked European politicians rather than the public in general.

    I mean, even if one puts aside the whole strategic point of Europe delaying even further its commitment to the first big tech revolution of the 21st century so that a handful of large automakers can make a little bit more profit, there are actual lives at stake: fumes from diesel cars are estimated to kill more than 10,000 people a year in Europe.

    Corruption in politics is both killing people and fucking up our future prosperity.


  • Yes, China has very purposefully put itself at the forefront of the first technological revolution of the 21st century, and has done so at multiple levels (solar panel production, battery tech, EVs).

    Meanwhile the American elites have decided that 19th century technology is where they want to be. Well, that, and dead-ending the country’s lead in the Tech revolution by going down a branch with no future in the form of LLMs and by making everybody lose trust in keeping their data in anything owned by American companies.

    And, of course, the crooked politicians here in Europe are actually following America more than China in this.



  • It’s not at all surprising that fatcats look at the juicy profits that Apple makes with its iOS walled garden and think “I want me some of that” - wanting to be a monopolist with captive customers makes the most business sense and is the most natural thing in a Capitalist Economic and Political environment.

    Most of the economic activity around Technology nowadays is rent-seeking, and only the part which isn’t at all about money - open source - isn’t about corralling people into closed spaces, removing their choices and then extracting the most money possible from people who now have no other option.

    It’s kinda like 20 or 30 years ago, when Banks looked at cash payments and thought they should find a way to get commissions on those, the same as they got on card payments. Already back then they were pushing things like electronic wallets (at the time basically a special kind of card), and they kept pushing for decades (often with the support of governments, since 100% electronic payments are great for surveillance of civil society). Nowadays there are pretty much no cash payments in some countries, so that relentless push for controlling and getting a cut of every single trade has worked there - and people in those places, such as Sweden, having traded a small hidden increase in prices (since banks now get commissions on everything) and a huge loss of privacy for a tiny bit of convenience, genuinely think they’re better off.

    So yeah, these software fatcats will totally try to get together with hardware makers in a dominant market position to slowly close down PC technology - for example, the whole point of TPM is to take control away from the owners of the hardware, and the “trusted” in “Trusted Platform Module” (aka TPM) isn’t about it being trusted by the owner of the hardware, it’s about it being trusted by the business selling the OS, which in turn can sell access to the thus-gatekept environment to software-making businesses.

    I believe the whole requirement for TPM 2.0 in Windows 11, even though Windows doesn’t actually need it, is just a step in a broader strategy to turn PCs into a closed platform controlled by Microsoft, whilst, as we see here, other companies are trying to create closed platforms by having everything run on their servers - like Google tried almost a decade ago for games with Stadia, and as the likes of Sun Microsystems tried 2 or 3 decades ago with the push for Thin Clients.


  • Whilst I have no evidence for it (it’s not like we have an alternate timeline to compare to), I believe that the changes to Intellectual Property legislation in the last couple of decades have actually slowed down innovation, probably severely so.

    Certainly in Tech it feels like there’s less of a culture of tinkering and hacking (in the original sense of the word) nowadays than back in the 80s and 90s, even though with the Internet and the easy access to information on it one would expect the very opposite.

    Instead of countless crazy ideas like in the age of the generalisation of computing, open source and the birth of the Internet, we have closed environments gatekept by large companies for the purpose of extracting rents from everybody, all of it made possible by bought-for legislation to stop users from breaking out and competitors from breaking in.

    I mean, outside the natural process of moving everything done before from analog to digital and online (i.e. a natural migration, over time, to the new environments made available by the inventions of computing and the global open network in the late part of the 20th century), the greatest “innovations” in Tech of the last 30 years were making computers small enough to fit in your pocket (i.e. smartphones) - a natural consequence of Moore’s Law - and a digital parrot/mediocre content generator.

    No wonder that China, with its “we don’t give a shit about IP” posture, has powered through from Tech backwater to taking the lead from the West in various technologies (first solar, now EVs), even though (from what I’ve heard) its educational system doesn’t reward innovative thinking.

    So in my view, only if Europe ditches the IP legislation pushed by the US in Trade Treaties does it have a chance to be part of any upcoming Tech revolutions, rather than stagnating right alongside the US whilst trying to extract ever-diminishing rents from the tail ends of the adoption phases of last century’s technologies.



  • God forbid people want the compute they are paying for to actually do what they want, and not work at cross purposes for the company and its various data sales clients.

    I think that way of thinking is still pretty niche.

    Hope it’s becoming more widespread, but in my experience most people don’t actually concern themselves with “my device does some stuff in the background that goes beyond what I want it for” - in their ignorance of Technology, they just assume it’s something that’s necessary.

    I think where people do have problems is mainly at the level of “this device is slower at doing what I want it to do than the older one” (for example, because AI makes it slower), “this device costs more than the other one without doing what I want it to do any better” (for example, they’re unwilling to pay more for the AI functionality) or “this device does what I want it to do worse than before/than that one” (for example, AI is forced on users, actually making the experience of using the device worse, such as with Windows 11).