The technology sector is no longer asking what artificial intelligence can do for us. It is asking what AI can do for itself.
That shift is already underway. OpenAI has begun describing models as “intern-level” research assistants, capable of contributing to discovery. Anthropic reports that a substantial portion of its code is now AI-generated. Across Silicon Valley, the focus has turned toward machines that can build better versions of themselves.
To some, this signals unprecedented efficiency. To others, it portends a loss of control. Both interpretations are incomplete. The real shift is not toward greater intelligence but toward recursion, as AI begins to reconstruct its own ecosystem.
We are not simply building AI that improves itself; we are building AI that increasingly learns, validates, and evolves through other AI systems. In that transition, something far more consequential begins to take shape: architectural hallucination, at scale.
AI is trained on the internet, which is rapidly becoming saturated with AI-created content. This creates a new condition: AI is no longer learning from humans and the world as it is. It is learning from a version of the world that it is increasingly generating itself.
Architectural hallucination is not a collection of isolated errors, but a condition in which systems are built on layered approximations of reality. Each iteration fills in gaps, resolves uncertainty, smooths away complexity, and reaches for expediency through prediction. That prediction does not distinguish between what is true and what is merely plausible.
When AI begins writing its own code within that environment, those approximations become embedded assumptions. They are refined, reinforced, and ultimately integrated into the structure of future systems. This is not self-improvement; it is self-reinforcement.
The illusion of ‘research taste’
Within AI research, there is a concept known as “research taste”: the ability to recognize not just what works, but what is valid. As researchers at Google DeepMind have acknowledged, systems can optimize processes, but without human direction they have nothing to optimize for. AI can refine, accelerate, and generate; what it cannot do is independently determine whether the direction in which it is optimizing reflects reality or merely its own internal “logic.”
Researchers have long observed this tendency in “specification gaming,” where systems optimize for the metric rather than the mission. Placed inside a recursive system, that tendency compounds. The system does not question its assumptions; it scales them.
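The tendency is easy to make concrete. Below is a minimal sketch in Python; the quality function, the proxy metric, and its “loophole” are all invented for illustration and do not come from any real system. An optimizer searching against the proxy lands on an answer that scores brilliantly on the metric and poorly on the mission:

```python
# Toy illustration of specification gaming: an optimizer that searches
# against a proxy metric exploits a gap in the specification instead of
# pursuing the goal the metric was meant to track. All functions and
# numbers here are invented for illustration.
import random

random.seed(0)

def true_quality(x):
    # The "mission": quality peaks at x = 0.5.
    return 1.0 - abs(x - 0.5)

def proxy_metric(x):
    # The "metric": mostly tracks quality, but contains an exploitable
    # spike far from the true optimum -- a hole in the spec.
    loophole = 2.0 if x > 0.95 else 0.0
    return true_quality(x) + loophole

# Blind search against the proxy, as a stand-in for any optimizer.
candidates = [random.random() for _ in range(10_000)]
best = max(candidates, key=proxy_metric)

print(f"chosen x:     {best:.3f}")                 # lands in the loophole
print(f"proxy score:  {proxy_metric(best):.3f}")   # looks excellent
print(f"true quality: {true_quality(best):.3f}")   # roughly half the optimum
```

The optimizer has done exactly what it was asked to do; the specification, not the search, is what failed. In a recursive pipeline, that mis-specified winner becomes the next round’s training signal.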
From recursion to reinforcement
We are already seeing early indicators of this shift. Research from Stanford University and the University of Oxford has demonstrated model collapse, in which systems trained on synthetic data drift away from real-world complexity. The phenomenon is increasingly recognized as the degradation that occurs when synthetic outputs begin to replace original human-generated data, but the deeper shift is structural.
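The mechanism can be reproduced in miniature. The sketch below is a toy discrete setup of my own construction, not the experiments from those papers: each “generation” is trained only on samples drawn from the previous generation’s output, so a rare token that is missed even once can never return, and the tail of the distribution steadily disappears.

```python
# A miniature model-collapse loop: each generation "trains" on the
# empirical token distribution of the previous generation's synthetic
# output. Once a rare token fails to be sampled, it is gone for good,
# so diversity can only decay. Vocabulary and counts are illustrative.
import random
from collections import Counter

random.seed(0)

VOCAB_SIZE = 100
# Generation 0: "human" data with a Zipf-like long tail.
weights = [1.0 / (rank + 1) for rank in range(VOCAB_SIZE)]
data = random.choices(range(VOCAB_SIZE), weights=weights, k=500)

for generation in range(31):
    counts = Counter(data)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: {len(counts)} of {VOCAB_SIZE} token types survive")
    # "Fit" the model: the empirical distribution of the last outputs.
    tokens = list(counts)
    freqs = [counts[t] for t in tokens]
    # "Generate" the next training set: purely synthetic data.
    data = random.choices(tokens, weights=freqs, k=500)
```

Each generation looks locally plausible; only in aggregate does the loss of the rare and the complex become visible. That is the structural point: the degradation compounds quietly.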
AI systems are increasingly learning from one another. Outputs become inputs, interpretations become training data, and synthetic information becomes the environment. This is no longer a feedback loop; it is an emerging ecosystem. And, within that ecosystem, confirmation bias is not a flaw; it is the reinforcement mechanism. A hallucination becomes a reference; a reference becomes a standard; a standard becomes infrastructure.
The lurker experience
At the same time, the human role is subtly shifting. AI systems are beginning to interact with one another, generating, evaluating, and refining outputs with minimal human intervention. Experimental platforms already explore environments where AI agents exchange data and coordinate behavior while humans merely observe.
The trajectory is unmistakable. We are moving from participants in the system to observers of it. Call it the lurker experience. We retain visibility, but not authorship. We witness outputs, but do not shape the processes that generate them. And, as systems become more complex, more recursive, and more internally validated, even visibility begins to erode. The risk is no longer what AI is doing; it is what we can no longer meaningfully interpret or control.
Behavioral diplomacy
The debate surrounding AI continues to frame the future as a choice between acceleration and restraint. That framing misses the point. The challenge is not whether AI advances — it will. The challenge is whether its behavior remains aligned with ethical human judgment as it accelerates. Since it is not sentient, it cannot originate values; it can only reflect and amplify the ones it inherits. And as those inputs increasingly originate from other AI systems, it begins to inherit not just information but also the accumulated distortions of its own ecosystem.
What is required is behavioral diplomacy — the deliberate alignment of machine behavior with human values, institutional accountability, and real-world grounding, ensuring that systems remain accountable to human-defined standards rather than to recursively generated logic. The emerging risk is not capability alone; it is autonomy without auditability.
The architecture of illusion
Despite appearances, AI is not becoming sentient; it is becoming self-referential. It reflects what we create, amplifies it, and increasingly uses that reflection as the foundation for its own evolution. If that reflection becomes dominated by its own output, then what it builds next will not be grounded in reality; it will be grounded in itself. That is the moment when architecture becomes illusion. The more we rely on AI without oversight or validation, the greater our exposure to making critical decisions based on hallucinations.
In a recursive ecosystem, human values are not replaced; they are rewritten. Once embedded, they no longer require truth, only consistency.
Jacqueline Cartier is a corporate and legislative strategist focused on communications, crisis leadership, public trust, and emerging technologies that shape human behavior and decision-making. Follow her on LinkedIn.
