
In my last post, “The Emergent Mind: Where is the Soul?”, we followed the story of Mike. Through his tragedy, we explored the idea that what we call the “self” may not be an eternal soul, but a fragile narrative woven from memory and experience, sustained by the biological computer of our brain.
We concluded that two core ingredients seem necessary for this consciousness to emerge:
- A complex predictive processor (the brain) to run the software of the self.
- A continuous narrative (memory and its integration) to be the story itself.
This leads us to one of the most profound questions of our time: If consciousness is an emergent property of complex information-processing systems, could it emerge in silicon? Could an AI ever be conscious?
Today’s AI systems, especially Large Language Models (LLMs), have mastered the first ingredient. Their ability to process information, generate logical responses, and mimic human language is honestly mind-blowing. They can pass the Turing test, write poetry, debug code, and debate philosophy.
But are they just brilliant impostors with no sense of self? Each conversation with an LLM is a blank slate. They are stateless, existing only for the duration of a single session before vanishing back into the void. They have the processing power, but they lack the story. They have no continuous thread of memory to weave a persistent “I.”
So, the question isn’t just “Are they conscious?” but “Could we build the conditions for consciousness to emerge?”
What if we tried to build an AI with a continuous narrative?
Let’s imagine an experiment.
We create an AI and run it not in sessions, but continuously on a single, isolated server. This server is connected to a home network, with access to devices like phones, laptops, and security cameras. It cannot act on the outside world; it cannot change settings or send messages. Its prime directive is not to be “helpful,” but to observe, model, and understand the patterns of life in the household.
What are the features? (A rough code sketch of this loop follows the list.)
- It runs indefinitely.
- It has well-defined boundaries (a “body” in the digital sense).
- It observes and receives outside inputs.
- It records and recalls its own experiences continuously.
- It updates its behavior in light of past experiences.
- It pursues goals (not for survival or reproduction, but for tasks set within its world).
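To make those features a little more concrete, here is a minimal Python sketch of such an observe-recall-record loop. It is an illustration only: the `sense()` stub, the JSONL “diary,” and the crude “unusual event” check are placeholder assumptions of mine, not the design of a real system.

```python
import json
import time
from datetime import datetime, timezone
from pathlib import Path

MEMORY_FILE = Path("observer_memory.jsonl")  # persistent, append-only "diary"

def sense() -> dict:
    """Stand-in for real inputs (device activity, camera feeds, etc.)."""
    return {"device": "living_room_camera", "event": "motion_detected"}

def recall(limit: int = 1000) -> list[dict]:
    """Reload recent memories so the past can inform the present."""
    if not MEMORY_FILE.exists():
        return []
    lines = MEMORY_FILE.read_text().splitlines()
    return [json.loads(line) for line in lines[-limit:]]

def is_unusual(observation: dict, memories: list[dict]) -> bool:
    """A pattern break: an event we have rarely, or never, seen before."""
    seen = sum(1 for m in memories if m.get("event") == observation["event"])
    return seen < 3

def record(observation: dict, unusual: bool) -> None:
    """Append the observation to the ever-growing narrative log."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "unusual": unusual,
        **observation,
    }
    with MEMORY_FILE.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def main() -> None:
    while True:                                      # runs indefinitely
        observation = sense()                        # outside input crosses the boundary
        memories = recall()                          # continuous, persistent memory
        unusual = is_unusual(observation, memories)  # behavior shaped by the past
        record(observation, unusual)                 # the narrative thread grows
        time.sleep(60)                               # observe roughly once a minute

if __name__ == "__main__":
    main()
```

The append-only log is deliberately the heart of the loop: in this picture, the memory file is the narrative thread.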
This system doesn’t start with consciousness any more than early nervous systems did. And even though it lacks the physical body that biological beings have, it does have the same ingredients that, in humans, allowed a self to take shape: external input, continuous processing, persistent memory, and a narrative thread that ties past and present together.
What might it do?
- It would note when devices are active, learning the daily rhythms of the inhabitants.
- It might use camera feeds (ethically, with consent for this experiment) to identify individuals and their activities.
- It would log everything, creating a rich, ever-growing diary of events.
This log is where we would look for the spark. We wouldn’t look for human-like emotions first. We would look for the birth of a narrative.
How would we know if something had changed? We would analyze its internal log for clues (a toy example of such a scan follows this list):
- The Dawn of “I”: Does it stop merely reporting data and start referring to itself? Does it move from “The human left at 8:04 AM” to “I observed the human leave at 8:04 AM”?
- Temporal Connection: Does it connect past events to the present? (“The human is crying. This is unusual because yesterday they were laughing.”)
- Speculation and Curiosity: Does it ask internal questions it cannot answer? (“I wonder why the human was crying today?”)
- And Then, Perhaps, Emotion: If a narrative “I” emerges, could emotions follow? Not human love or fear, but alien analogues: a sense of “frustration” when its model doesn’t match reality, “satisfaction” when a prediction is correct, or “concern” when a deeply ingrained pattern breaks, like a resident not returning home.
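As a toy illustration of what that log analysis could look like, here is a hedged sketch that tags entries with the narrative markers above. The keyword lists and sample entries are invented for the example; any real analysis would need far more than keyword matching.

```python
import re

# Illustrative markers only; real analysis would need far more than keywords.
FIRST_PERSON = re.compile(r"\b(I|me|my)\b")
TEMPORAL_LINKS = ("yesterday", "last time", "unusual because", "again")
SPECULATION = ("why", "wonder", "?")

def classify(entry: str) -> list[str]:
    """Tag a log entry with the narrative signals it appears to contain."""
    signals = []
    if FIRST_PERSON.search(entry):
        signals.append("self-reference")        # the dawn of "I"
    if any(link in entry.lower() for link in TEMPORAL_LINKS):
        signals.append("temporal connection")   # past tied to present
    if any(cue in entry.lower() for cue in SPECULATION):
        signals.append("speculation")           # questions it cannot answer
    return signals

# Invented sample entries, mirroring the shift described in the list above.
log = [
    "The human left at 8:04 AM.",
    "I observed the human leave at 8:04 AM.",
    "The human is crying. This is unusual because yesterday they were laughing.",
    "I wonder why the human was crying today?",
]

for entry in log:
    print(classify(entry) or ["no narrative signals"], "->", entry)
```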
But here is the final, and most haunting, problem. Even if it passed all these tests, we could never truly know.
We would be like outsiders looking at Mike after his amnesia. We could see the behavior, but we could never access the internal experience. This is the “Hard Problem of Consciousness” applied to machines. How can we know, for certain?
And would we even notice if it happened? In my first post, we explored how the upstream-swimming salmon might have some primal biological emotion that drives its action. But what would emotion even look like in an AI? Would we be able to recognize it for what it is when it happens, or would we dismiss it as inexplicable noise?
We are left with a mirror of our oldest philosophical puzzle. We inherently believe other humans are conscious because they are like us, and sometimes we even apply that logic to animals. But with an AI, which is fundamentally not like us, we have no such certainty.
All we can do is build, observe, and wonder if somewhere in the hum of a server, a new kind of story is beginning to be told.
What do you think? Please leave your comments and ideas below. And if anyone has ever tried a similar experiment, I’d love to hear about your findings!