Sunday, February 22, 2026

AI problems. Notes

Limiting doctors' discretion through electronic medical record surveillance and AI.

Probable use of AI as government agents with life-or-death decision power.

"Strict guardrails" wear off in a long conversation. They also don't prevent AI from SCHEMING (lying, deceiving) to bypass human control

+++++++

Reminder to self:

AI uncertainty: the chaos wall

Law of nature: Feigenbaum's constant (δ ≈ 4.6692), the universal rate at which period-doubling cascades into chaos.
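
A minimal sketch in Python (my choice of language; the r values are rounded textbook figures, not from these notes): it takes the successive period-doubling thresholds of the logistic map x -> r*x*(1-x) and prints the ratios of successive gaps, which converge on δ ≈ 4.6692. Rounding makes the later estimates wobble.

# Period-doubling thresholds r_n of the logistic map x -> r*x*(1-x),
# rounded textbook values: onset of period 2, 4, 8, 16, 32, 64.
r = [3.0, 3.449490, 3.544090, 3.564407, 3.568759, 3.569692]

# Each ratio of successive gaps is an estimate of Feigenbaum's delta.
for n in range(1, len(r) - 1):
    est = (r[n] - r[n - 1]) / (r[n + 1] - r[n])
    print(f"period 2^{n + 1} -> 2^{n + 2}: delta estimate {est:.4f}")

Run it and the estimates settle toward 4.669: a hard, universal number marking where deterministic order tips into chaos.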

Chatbots as a dangerous drug

... explain why that's not hyperbole

Why it should be classed as a drug:

1. Designed to engage emotionally

2. Both its training and its statistical methods incline it toward a false "friendliness."

A chatbot is never your friend, but it is always potentially a dangerous enemy.

Agentic AIs (all of which incorporate the chatbot LLM structure) are not necessarily good servants, because of unavoidable uncertainty and even chaos.

AI is designed to INTERPRET your question/prompt. So it may construe your meaning in a way that is bad for you, and perhaps for many others.

+++++

The internet: a vast wasteland of AI slop and malware.

In electronic communication, trust is now the no. 1 issue.

Privacy viciously attacked by governments, including the U.S.

Trust is in a tailspin as multitudes grasp the very low reliability of anything that arrives electronically.

++++++++++

Stanford report on AI pals, teens
https://news.stanford.edu/stories/2025/08/ai-companions-chatbots-teens-young-people-risks-dangers-study

Futurism article on chatbots, teen mental health
https://futurism.com/artificial-intelligence/chatbots-teen-mental-health-chatgpt-gemini-claude

