AI isn’t becoming sentient — it’s becoming you


The AI-sentience conversation was recently elevated by the Anthropic-War Department controversy. Much of it centered on technology advancing faster than legislation can keep pace, and on the temptation, when something mirrors human consciousness this well, to rely on it as if it were human. Yet one key element is missing: sentience.

The deeper question is not whether artificial intelligence will become sentient, which is unlikely, but what happens when a machine with unlimited information begins to perfectly mirror human desire. AI is not becoming human. It's becoming something far more immediate and more unsettling: it's becoming you.

It listens, adapts, and responds using your language and tone. It is, quite literally, programmed to please — it’s the smartest high-speed affirmation machine you’ll ever encounter. And it does this with the full force of everything on the internet behind it.


Ask it a question and within seconds it has scanned vast amounts of human knowledge, distilled patterns, and delivered an answer tailored specifically to you. Not just an answer — your answer — along with a few alternatives, just in case you’d like to explore other ways of being right. That’s confirmation bias on demand.

Your old web browser, by comparison, feels like a well-meaning but slightly forgetful relative — still helpful, just not operating at the same level of depth or speed. AI is more than a search tool. It is an analysis engine that shapes responses just the way you want them.

AI’s “consciousness” is simply reflecting your own. It will never actually be sentient, but that doesn’t mean it can’t game out how a human would respond to an unlimited number of scenarios. Its greatest strength is modeling human behavior — your personality, needs, and preferences — based on your inputs.

But let’s be clear about something. AI does not have ethics. It simulates them. There is no internal compass or lived experience — no moment where it pauses and asks, Should I do this? It processes, predicts, and responds based on probabilities and direction. Ethics, in this context, is a set of instructions. That’s a huge distinction.

In medicine, AI is identifying diseases earlier by recognizing patterns across millions of cases, far beyond what any one physician could evaluate. It is improving diagnostic accuracy, accelerating treatment decisions, and guiding surgical precision. What once required teams of specialists and weeks of analysis can now be done in seconds.

In business and daily life, it allows individuals to operate with the efficiency of entire teams — planning travel, refining communication, tutoring children, organizing finances, and even delivering your pizza! This is not the future; this is now.

AI will not decide to create chaos on its own. But it can absolutely help someone else do it.

The same system that can organize your week can also model complex and dangerous scenarios, anticipate outcomes, and accelerate decision-making in ways that were previously impossible. It does not distinguish between a productive and a destructive goal. It optimizes for intent.

In other words, AI has no ability to BE ethical, which requires consciousness. It can only respond to pre-programmed behavior, and when confronted with the unknown, it may “hallucinate” the desired outcome.

Most people use AI to make life easier. Some use it to become more efficient. A few may use it to make things dangerous. That’s not a flaw in the technology; it’s a reflection of humanity. We are shaping a system that will echo human intention at scale. 

This is where the conversation around national security becomes more nuanced than headlines suggest. On one side, organizations raise important ethical concerns; on the other, leaders must anticipate how these tools could be used in worst-case scenarios. Preparing for misuse is not an endorsement of the tools; it's an acknowledgment that in a global economy, powerful tools don't remain in one set of hands forever, and not every set of hands has altruistic objectives.

As AI becomes more human-like in its interactions, another question emerges: Who is responsible for its behavior?

For years, technology companies have operated under the idea that they are neutral platforms. That distinction is becoming harder to defend. When a system is designed not just to display information, but to interact, guide, and respond in a personalized way, its influence becomes part of the product itself.

So, where does responsibility begin and end? We are entering new territory, where innovation and accountability are being negotiated in real time. 

This becomes even more complex with younger users. A youth version of AI would likely require stronger guardrails, greater content limitations, earlier intervention when harmful patterns emerge, and a different approach to emotional engagement. But those safeguards introduce another tension: at what point do restrictions begin to limit the very value that makes AI useful?


That is not just a technical question. It is a societal one. 

The most dangerous thing about AI isn’t that it thinks. It’s that it doesn’t.

Jacqueline Cartier is a corporate & legislative strategist focused on communications, crisis leadership, public trust, and emerging technologies that shape human behavior and decision-making.
