

you’ve commented twice in this thread?
is it still going with Bec Hill?
Interestingly, this is also a technique used when you’re improvising songs; it’s called Target Rhyming.
The most effective way is to do A / B^1 / C / B^2 rhymes. You pick the B^2 rhyme first, let’s say “ibuprofen”, and you get all of A and B^1 to think of a rhyme:
Oh it’s Christmas time
And I was up on my roof when
I heard a jolly old voice
Ask me for ibuprofen
And the audience thinks you’re fucking incredible for complex rhymes.
If you have a soup of all liquids and a sieve that only lets coffee and ice cream through, it produces coffee ice cream (metaphor, don’t think too hard about it).
That’s how gen AI works: each step sieves the raw data to get closer to the prompt.
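If you want that metaphor in code form, here’s a throwaway toy sketch (made up by me, not a real model - the target numbers and the 0.3 step size are arbitrary): start from random noise and nudge it a little toward what the prompt asks for at every step, sieving out some of the randomness each pass.

```python
# Toy version of the sieve metaphor, not an actual generative model.
# Start from random noise and pull it a little toward a made-up "prompt
# target" each step, so every pass filters out some of the randomness.
import random

random.seed(0)

target = [0.2, 0.9, 0.5, 0.1]                # stand-in for what the prompt asks for
sample = [random.random() for _ in target]   # the "soup": pure noise to begin with

for step in range(10):
    # each pass keeps most of the current sample but nudges it toward the target
    sample = [s + 0.3 * (t - s) for s, t in zip(sample, target)]
    print(f"step {step}: {[round(x, 3) for x in sample]}")
```

Real diffusion-style generators do this in a vastly higher-dimensional space, with a trained model deciding which way to nudge, but the repeat-a-small-filtering-step-until-it-matches-the-prompt shape is the same idea.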
Crack an egg on it
Studies have been done and found results similar to what you’re saying.
Also, the secret listening doesn’t comport with any of the business side of profile marketing either, so it would have to be an incredibly well-kept secret on top of all of that.
the idea that reality is subjective is relatively new
uhhh the allegory of the cave?
they do, it’s just written down. IPA isn’t that hard to learn.
Wouldn’t you just take issue with whatever the new name for it was instead? “Calling it pattern recognition is snake oil, it has no cognition” etc
what would they have to produce to not be snake oil?
Can you name a company who has produced an LLM that doesn’t refer to it generally as part of “AI”?
can you name a company who produces AI tools that doesn’t have an LLM as part of its “AI” suite of tools?
What are you talking about? I read the papers published in mathematical and scientific journals and summarize the results in a newsletter. Anyone with undergrad-level statistics, calculus and algebra can read them; you don’t need a qualification, and you can just Google any term you’re unfamiliar with.
While I understand your objection to the nomenclature, in this particular context all major AI production houses, including those that only use LLMs as internal tools to achieve other outcomes (e.g. NVIDIA), count LLMs as part of their AI collateral.
I’ve been working on an internal project for my job - a quarterly report on the most bleeding-edge use cases of AI - and the stuff being achieved is genuinely impressive.
So why is the AI at the top end amazing yet everything we use is a piece of literal shit?
The answer is the chatbot. If you have the technical nous to program machine learning tools, they can accomplish truly stunning things at speeds not seen before.
If you don’t know how to do - for example - a Fourier transform, you lack the skills to use the tools effectively (there’s a quick sketch of what I mean below). That’s no one’s fault - not everyone needs that knowledge - but it does explain the gap between promise and delivery. The tools can only help you do what you already know how to do, faster.
Same for coding: if you understand what your code does, it’s a helpful tool for unsticking part of a problem, but it can’t write the whole thing from scratch.
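To make the Fourier transform example concrete, here’s a minimal numpy sketch (the signal and the numbers are made up) of the kind of task I mean - trivial to have an assistant draft if you already know what a spectrum tells you, mystery output if you don’t:

```python
# Find the dominant frequency in a noisy signal - the sort of snippet an AI
# assistant can produce instantly, but only useful if you can sanity-check it.
import numpy as np

fs = 1000                                  # sample rate, Hz
t = np.arange(0, 1, 1 / fs)                # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)  # 50 Hz tone + noise

spectrum = np.abs(np.fft.rfft(signal))     # magnitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)    # frequency label for each bin
print(f"dominant frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")  # ~50 Hz
```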
I will nationalize your service for free to everyone
presumably when they’re in sub mode?
I don’t fault that really; saying good things when a company does good things is fairly normal, as is working for a company that doesn’t do good things because you’ve got to have a job.
Waste is how you frame it.
Even literal poop has a benefit.
I do client work, and sometimes it drives me mad how much time I “waste” making PPT slides that are just prettier BI dashboards - but then the client sees it, sends that one slide to his boss, and everyone claps me on the back.
Mexico is a big country, Google has shareholders who demand the line goes up, and people use maps to advertise (“map pack”, “local SEO”, brand tie-ins)…
I had a MacBook that, while I was typing on it, suddenly just turned off, never to turn on again. Took it to the store, and they told me it was cheaper to buy a new one.