The relationship correlation data makes a lot of sense if only from a bandwidth perspective.
Correct. I can definitively say “I don’t know how this happened.” But I do know it creeps me out and spurs me to speed up my privacy efforts.
@Marty_Man_X@lemmy.world and @TORFdot0@lemmy.world both make great points, both of which can certainly explain the sudden change in suggestions.
Anecdote: (a little background) I don’t typically deal with narcissistic people; I’m not troubled by narcissists in my life. My tech life is pretty well locked down, but it could always be better (working on it). And my YouTube suggestions are tightly, carefully curated to topics pertinent to my professional and personal projects.
I had an utter piece of shit contractor working for me on a project; he was a grifting, conniving, manipulative shitbag. When I outright fired his ass, he first got all self-righteous then tried to play the victim, but I wasn’t playing any of his games. My phone was sitting on the workbench next to me.
The next day, I opened YouTube because an engineer I know told me he dropped a new video on software we recently discussed. There among my suggestions were a bunch of videos on how to deal with narcissists. So somehow, in only talking with the contractor (he doesn’t use email, text, or other electronic communications), YouTube decided I was curious about dealing with narcissism. I’m morbidly curious how YouTube made that decision, and whether it was audio or “we know you’re associating with this guy who we identify as a problematic narcissist and here are some resources.”
Now, I’m just some douchecanoe on the internet and you should probably dismiss me based on that alone. But GODDAMN, the data points sure do pile up quickly on how deeply we’re being surveilled.
Per your first comment, I played around with the Vowel Filter in Grid. That certainly does seem to be a factor! Thank you!
Thanks. I was aiming for the learning process. I want to be able to hear or imagine a sound and then start linking up the oscillators and filters to get that specific sound. My end goal is to be all like… oh, take a square wave modulated by a sawtooth with a 4-pole notch here tumble-dried with an LFO insert, then hit the unison with muffler bearings and blinker fluid. Bam, There’s the sound I wanted. But with less nonsense.
One would develop Popeye forearms gaming on that thing. Get in your arm, neck, and shoulder day while gaming!
I had a Toshiba Satellite around the time this was out. It weighed 12 pounds. That millstone went everywhere with me. Now my laptop weighs about six pounds minus the brick, and I might carry it from my desk to the settee. I look back at what our devices used to be and always think “Damn, I’ve gotten soft!”
Hello (former) fellow Lehi worker! Although I was remote except for the onsite weeks. I’m not a fan of 99% of mobile apps, maybe more than 99%. I didn’t work on mobile, but I am quite sure that it is in fact a PWA.
Different financial institutions (FIs) will all look different, because of the nature of how MX is implemented, and whether it’s on desktop or mobile. In the case of my credit union, it’s right here:
The interface of MX Platform on desktop looks like this:
You might see something like this in your online banking home page:
There are two ways that MX can get data from other accounts, which you have to explicitly link in your bank/CU interface. The first method is through Open Banking protocols, which are mercifully obfuscated from the end user. Seriously, if you’re having trouble sleeping, try reading some of the Open Banking specifications. :D One selects their FI from the list, enters credentials, and completes the 2FA challenge. The other method is screen-scraping, but again, this is abstracted away from the end user.
One of the features where MX slaps harder than anyone else (for now) is identifying the source of debits and classifying them. Under the hood, debit and credit card transaction strings are chaos. But even if MX gets it wrong, you can manually re-classify your expenses, and it will apply that to future transactions (optionally). I already mentioned the burndowns, but if you have a savings schedule in mind, MX will provide reminders and factor in your growth. Platform will also provide reminders for almost everything.
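For illustration only, here’s a toy sketch I wrote of the general approach — strip the processor noise from the raw string, match keywords, and let persistent user overrides win. This is not MX’s actual classifier; the merchant names, categories, and function names are all made up:

```python
import re

# Toy merchant-keyword map (hypothetical; a real classifier uses far richer data).
CATEGORY_KEYWORDS = {
    "STARBUCKS": "Coffee",
    "SHELL": "Fuel",
    "AMZN": "Shopping",
}

# User re-classifications take priority and persist for future transactions.
user_overrides = {}

def normalize(raw):
    """Strip processor prefixes (SQ*, TST*, POS) and store/reference numbers."""
    s = re.sub(r"^(SQ \*|TST\* |POS )", "", raw.upper())
    s = re.sub(r"[#*]?\d{3,}", "", s)  # store / reference numbers
    return s.strip()

def classify(raw):
    merchant = normalize(raw)
    if merchant in user_overrides:
        return user_overrides[merchant]
    for keyword, category in CATEGORY_KEYWORDS.items():
        if keyword in merchant:
            return category
    return "Uncategorized"

print(classify("SQ *STARBUCKS #1234 PORTLAND OR"))  # Coffee
# Manually re-classify once; future matching transactions inherit it.
user_overrides[normalize("JOES GARAGE 555")] = "Auto Repair"
print(classify("JOES GARAGE 555"))  # Auto Repair
```

The override-first lookup is the interesting part: one manual correction sticks for every later transaction that normalizes to the same merchant.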
Let me know if you have any other questions.
Sure thing. On which part would you like more detail?
Negative all around. I was replying to OP. The company to which I referred is MX. The public-facing product (API) is actually called Platform, but it’s very explicitly white label software. Customers will generally have little to no idea that they are using MX Platform. It might actually say MX somewhere, but that can be eliminated in implementation.
As others have said, a spreadsheet is the simplest. If you do your banking with a credit union, chances are they make MX available to you in your online banking. A lot of banks use MX too. Their software provides the projections and forecasting you seek, as well as Open Banking connections to all of your other accounts. If you have loans, it also has burndowns of outstanding debts. Extra bonus: MX doesn’t sell your data.
Disclosure: I used to work for MX.
You raise good points. Thank you for your replies. All of this still requires planet-cooking levels of power to produce garbage and hurt workers.
And an additional response, because I didn’t fully answer your question. LLMs don’t reason. They traverse a data structure based on weightings relative to the occurrence frequency in their training content. Loosely speaking, it’s a graph (https://en.wikipedia.org/wiki/Graph_(abstract_data_type)). It appears like reasoning because the LLM is iterating over material that has been previously reasoned out. Unlike, say, a squirrel, an LLM can’t reason through a problem it hasn’t previously seen.
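That frequency-weighted traversal can be sketched as a toy Markov chain. To be clear, this is my own illustrative sketch of the general idea — a real transformer-based LLM is vastly more complicated — but the "pick the next step weighted by how often it appeared in training" mechanic looks like this:

```python
import random
from collections import defaultdict

def build_graph(corpus):
    """Map each word to the words that follow it, with occurrence counts."""
    graph = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        graph[a][b] += 1
    return graph

def next_word(graph, word, rng):
    """Pick a successor, weighted by how often it followed `word` in training.
    Assumes `word` was seen in the corpus with at least one successor."""
    followers = graph[word]
    return rng.choices(list(followers), weights=list(followers.values()), k=1)[0]

corpus = "the cat sat on the mat and the cat slept"
graph = build_graph(corpus)
rng = random.Random(42)
print(next_word(graph, "the", rng))  # "cat" (weight 2) or "mat" (weight 1)
```

The output is always something that already followed "the" in the training text — which is the point: no step in the walk involves reasoning about anything unseen.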
By the same logic, raytracing is ancient tech that should be abandoned.
Nice straw man argument you have there.
I’ll restate, since my point didn’t seem to come across. All of the “AI” garbage that is getting jammed into everything is merely scaled up from what came before. Scaling up is not advancement. A possible analogy would be automobiles in the late 60s and 90s: just put in more cubic inches and a bigger chassis! More power from more displacement does not mean more advanced. Continuing that analogy, 2.0L engines cranking out 400 ft-lb and 500 HP while delivering 28 MPG average are advanced engineering. Right now, the software and hardware running LLMs are just MOAR cubic inches. We haven’t come up with more advanced data structures.
These types of solutions can have a place and can produce something adjacent to the desired results. We make great use of expert systems constantly within narrow domains. Camera autofocus systems leap to mind. When “fuzzy logic” autofocus was introduced, it was a boon to photography. Another example of narrow-ish domain ML software is medical decision support software, which I developed in a previous job in the early 2000s. There was nothing advanced about most of it; the data structures used were developed in the 50s by a medical doctor from Columbia University (Larry Weed: https://en.wikipedia.org/wiki/Lawrence_Weed). The advanced part was the computer language he also developed for quantifying medical knowledge. Any computer with enough storage, RAM, and the hardware ability to quickly traverse the data structures can be made to appear advanced when fed with enough collated data, i.e. turning data into information.
Since I never had the chance to try it out myself, how was your neural network and LLM’s reasoning back in the day? IMO that’s the most impressive part, not that it can write.
It was slick for the time. It obviously wasn’t an LLM per se, but both were a form of LM. The OCR and auto-suggest for DOS were pretty shit-hot for x386. The two together inspired one of my huge projects in engineering school: a whole-book scanner* that removed page curl and gutter shadow, and then generated a text-under-image PDF. By training the software on a large body of varied physical books and retentively combing over the OCR output and retraining, the results approached what one would see in the modern suite that now comes with your scanner. I only achieved my results because I had unfettered use of a quad Xeon beast in the college library where I worked. That software drove the early digitization processes for this (which I also built): http://digitallib.oit.edu/digital/collection/kwl/search
*in contrast to most book scanning at the time, which required the book to be cut apart and the pages run through a sheet-fed scanner; lots of books couldn’t be damaged like that.
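The gutter-shadow step can be illustrated with a toy version of one common approach: estimate each column’s paper-white level and rescale, so the darkened columns near the binding brighten to match the rest of the page. This is my own assumption-laden sketch of the general technique, not the original code:

```python
def remove_gutter_shadow(image):
    """image: grayscale page as rows of 0-255 ints. Rescale each column so
    its brightest pixel (assumed paper white) maps to 255, lifting the
    shadowed columns near the book's gutter while keeping dark glyphs dark."""
    height, width = len(image), len(image[0])
    out = [row[:] for row in image]
    for col in range(width):
        white = max(image[row][col] for row in range(height))
        if white == 0:
            continue  # fully black column: nothing to normalize
        scale = 255.0 / white
        for row in range(height):
            out[row][col] = min(255, round(image[row][col] * scale))
    return out

# A 3x4 toy page: the right-hand columns are darkened by a gutter shadow.
page = [
    [250, 240, 180, 120],
    [ 10, 235, 175, 115],  # a dark glyph pixel in column 0
    [245, 238, 178, 118],
]
flat = remove_gutter_shadow(page)
print(flat[0])  # → [255, 255, 255, 255]: every column's white point lifted
```

Real page-flattening also has to model the 3D curl of the page, but the per-column white-point idea is the same flavor of correction.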
Edit: a word
No, no they’re not. These are just repackaged and scaled-up neural nets. Anyone remember those? The concept and good chunks of the math are over 200 years old. Hell, there was two-layer neural net software in the early 90s that ran on my x386. Specifically, Neural Network PC Tools by Russell Eberhart. The DIY implementation of OCR in that book is a great example of roll-your-own neural net. What we have today, much like most modern technology, is just lots MORE of the same. Back in the DOS days, there was even an ML application that would offer contextual suggestions for mistyped command line entries.
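For flavor, here’s roughly what such a from-scratch two-layer net looks like in modern Python — a tiny backprop network of the sort that ran fine on a 386. This is my own sketch of the standard technique, not Eberhart’s code; it trains on XOR, the classic demo problem:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

rng = random.Random(1)
# 2 inputs -> 3 hidden units -> 1 output; last weight in each row is a bias.
w_h = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
w_o = [rng.uniform(-1, 1) for _ in range(4)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(sum(w_o[i] * h[i] for i in range(3)) + w_o[3])
    return h, o

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial_error = total_error()
lr = 0.5
for _ in range(10000):
    for x, t in data:
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)  # output delta (sigmoid derivative)
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(3)]
        for i in range(3):
            w_o[i] -= lr * d_o * h[i]
            w_h[i][0] -= lr * d_h[i] * x[0]
            w_h[i][1] -= lr * d_h[i] * x[1]
            w_h[i][2] -= lr * d_h[i]
        w_o[3] -= lr * d_o

final_error = total_error()
print(f"squared error: {initial_error:.3f} -> {final_error:.3f}")
```

Scale the hidden layer up by a few orders of magnitude and stack more of them, and you have the skeleton of today’s systems — which is the point: more of the same, not a new kind of structure.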
Typical of Silicon Valley, they are trying to rent out old garbage and use it to replace workers and creatives.
keeping a product listed that they know is not safe.
Amazon wouldn’t do THAT, would they?
Oh right, they would. https://youtu.be/B90_SNNbcoU And not only did they continue to sell the item, they also suppressed reviews pointing out the issues.
Anecdotally, six years ago I purchased Ancor marine wiring crimps and 314 stainless steel bolts through Amazon. The crimps were counterfeit garbage and the stainless steel rusted and galled in about two weeks of saltwater exposure. Amazon’s response was basically “contact the manufacturer for warranty.” A quick glance at Amazon listings and it’s clear things have gone further downhill since.
So I regard Amazon doubling down on supply chain fuckery as a net win. I will never shop there again after that hardware BS. And more people will come to the same conclusion that Amazon is quickly becoming the Dollar General of online sales. Add on their shitty treatment of sellers, and good manufacturers go elsewhere, further accelerating the decline.
Which is why they’re not people.
But the C-suite and board are almost like humans. And that’s even better for… things.
They were acquired by Opta Group in 2023. Since then, the quality has declined while prices increased. And around the time of their acquisition, they started doing some shady stuff when claiming USB-IF compliance. The cables were blatantly not USB-IF compliant.
Another example: I personally love my Anker GaN Prime power bricks and 737. Unfortunately, among my friends and peers, I am the exception. The Prime chargers are known for incorrectly reading cable eMarkers and then failing to deliver the correct power. This has been an issue for me twice so far, but I was able to work around it.
This is absolutely by design. The corporate raider playbook is well-read. See: Sears, Fluke, DeWalt, Boeing, HP, Intel, Anker, any company purchased by Vista (RIP Smartsheet, we barely knew ye), and so on. Find a brand with an excellent reputation, gut it, strip mine that goodwill, abandon the husk on a golden parachute, and make sure to not be the one holding the bag.
What you propose is simple (as in simplistic), but far from easy. Content moderation at scale is extremely difficult, if not impossible. See “Masnick’s Impossibility Theorem.”
Also, deplatforming bigots is difficult and ineffective: