

I had to explain to three separate family members what it means for an AI to hallucinate. The look of terror on their faces afterwards is proof that people have no idea how "smart" an LLM chatbot actually is. They've probably been using one at work for a year thinking it's accurate.

The last time I asked a question, I followed the formatting of a recent popular question/post. Someone didn't like that and decided to impose their own formatting, then proceeded to dramatically change my posts and updates. On top of that, people kept giving me solutions to problems I never included in my question. The whole thing was ridiculous.