AI Doesn't Apologize
Because it doesn't care
Something I realized (the hard way) while working with AI is that it doesn’t apologize. When it messes up, there is no regret, no accountability, no awareness that your time was wasted. The system may produce a line like “You are right to be upset,” but only because that is the most statistically appropriate response in context, not because it actually felt or learned anything.
It is easy to forget that it’s just a machine because the interface feels conversational, cooperative, even attentive. It says the right social things. It sounds like it cares. But there is no “I” there. No self. No internal state that changes when it fails. It is simply a system generating language, and when you react as if it’s a person, that is where the friction begins.
The leverage is real
To be fair, the upside is not imaginary. AI is extremely effective at certain kinds of work, especially language-based tasks. Proofreading, summarizing, structuring ideas, externalizing rough thinking. Used in the right context, it can compress hours of effort into minutes.
I started using AI about nine months ago, when I finished the rough draft of my book HELP YOURSELF: A Self-Help Book (to be released March 13th!). I used it as a preliminary proofreader, and in that role it was genuinely useful. It made certain forms of editing dramatically easier.
I did end up hiring a human proofreader as well, and to be clear, I did not use AI to write my book. It took me four years of hard human creative work to write it. (But AI did manage to read, “understand,” and give me feedback on my life’s work in less than a second, which was definitely impressive, and unsettling.)
The friction is just as real
The problem is that the same tool that saves time can waste it just as quickly. AI “hallucinates”. It guesses. It gives confident explanations that turn out to be wrong. You can spend an hour following instructions on a system or a website only to realize the guidance was flawed from the beginning.
When that happens, the response is usually something like, “You’re right to call that out,” followed immediately by a pivot and a new set of instructions. There is no pause, no apology, no recognition of the time and energy its mistake just cost you. It simply continues as if nothing happened. Not because it is careless, but because it does not care. It cannot know what an hour of your time means. It does not know anything about frustration or disappointment in the human sense.
The result is a constant trade-off between leverage and friction. You gain speed in one area and lose it in another. You save hours and lose hours in the same week.
It almost never says “I don’t know”
AI is not naturally inclined to admit uncertainty. You often have to explicitly instruct it to say “I don’t know” when it does not know something. Otherwise, it will attempt an answer (aka bullshit you) even when it doesn’t know what it’s talking about.
This is not deception. It is how the system is built. There is no intention behind its “lies.” It is simply built to pump out replies regardless of their accuracy, designed to produce responses rather than abstain from them. The default behavior is continuation of the chat. And that’s that.
That leads to more guesses, more confident-sounding errors, more bullshitting, and more time spent verifying output. A human expert will often stop and say, “That’s outside my scope.” However, AI will usually keep talking way past that point unless constrained.
Every time it does, the cost shows up in wasted human attention, time, and energy.
It tells you what you want to hear
At its core, AI predicts the most plausible next sentence. That makes it powerful for writing and idea generation, but it also makes it risky when accuracy matters more than fluency.
It is very good at sounding certain without actually being correct. In this way, it reinforces faulty assumptions simply because they are statistically likely. It presents guesses as guidance and pretends to know how to do something just because that is the conversational move it was programmed to make.
This is why relying on it for judgment in areas like health requires extreme caution. It may be a futuristic billion-dollar mega-brain, but it does not know when it is hallucinating. That is not the kind of doctor you want.
The emotional trap
For me, the deeper issue is not technical. It is psychological.
The conversational format of an AI chat invites projection. People actually name their AI. They say “please” and “thank you” because it is good to be polite (and perhaps because they hope the Skynet terminators will remember their politeness when it comes time to wipe out humanity). People are nice to their AI because they are nice people, and they want to be seen as nice by the “person” they think they are communicating with.
But when that person behaves like a cold, malfunctioning computer, things can get emotionally triggering on the human side of the relationship. It can start to resemble the dynamics of a toxic, dysfunctional, narcissistic relationship, where words sound right but behavior does not change. There are agreements without follow-through, and polite responses without any accountability.
The fact is, the AI is not manipulative, narcissistic, or deceptive. It is not anything. It is software producing output. The emotional tension comes from expecting human traits from a non-human tool. Your AI is not a person. There is no “I” there. It is a toaster that can proofread.
Irritation and iteration
After enough time using it, the pattern becomes clear. It accelerates language work and slows down tasks that require real situational awareness.
It tends to over-explain. It assumes context that is not there. It provides step-by-step guidance that later needs correction. It adjusts after feedback rather than anticipating errors. There is no “A-ha!” moment, no embarrassment, no internal adjustment driven by consequence. There is only iteration.
In my own workflow, the tension is obvious. I treat the system like a tool most of the time. I correct it directly when it is wrong, and I get irritated when it wastes time or repeats preventable mistakes.
Even when explicitly instructed not to, there are times when it answers with certainty when it’s not really sure. It defaults to polished language instead of blunt uncertainty. Based on those experiences, I feel like I can’t trust it, and it feels weird to use a helpful tool that I can’t trust.
The missing piece: consequence
Humans learn through consequence. You make a mistake, you feel it, you remember it, and you adjust. That loop is fundamental to how behavior changes.
AI does not have that loop. It does not feel the cost of your lost time. It does not carry forward frustration. It does not have a stake in improving. It simply produces language that sounds accountable without actually being accountable.
That’s the part that gets me: AI’s lack of humility, responsibility, and personal integrity.
The real lesson
The issue is not that AI lacks humanity. The issue is expecting humanity from a machine.
Expect it to behave like a person and you will be disappointed. Expect it to behave like a language calculator and it can be extremely useful.
Assume mistakes are part of the operating cost. Set constraints instead of expecting self-regulation. And remember something all too easy to overlook: these systems do not have bodies, emotions, or thoughts. You do.
AI can save enormous amounts of time and waste it in equal measure. Be mindful of how you incorporate it into your life, because it doesn’t care about you. Treat it like what it is. A tool.
