This issue is a record of and reflection on November and December 2024.

AI Has No Capacity for Experience

MoFlow launched on the App Store last night — New Year’s Eve 2024. Three months of intensity have produced something tangible. And so, at last, I have time to write this issue.

The question people have asked me most often while building MoFlow: “Can AI actually help people heal emotionally?” This piece is my attempt at a simple answer.

Note: “AI” in this piece refers to LLMs.

Part 1: What Separates AI from Humans — AI Has No Capacity for Experience

When people ask about MoFlow, the conversation inevitably turns to the difference between an AI counselor and a human one. We say AI can’t communicate “face to face,” can’t “interact with” another person in a physical, shared reality — the subtle shifts in tone, the meaning in silence, the emotions that never get put into words. These are distinctly human experiences, born from the full complexity of feeling.

Human cognition isn’t just information processing. It’s experience, woven from the intersection of sensation and emotion. We perceive reality through the body and give meaning to things through feeling. This mode of engagement gives us a unique way of understanding — one that allows us to make judgments and decisions in complex situations.

If experience is our most direct way of interacting with the world — sensory, emotional, an immediate state of being — then understanding is what happens when we reflect on, analyze, and integrate those experiences. It’s a rational activity.

AI is impressive at processing information and generating text, but it lacks this experiential dimension. It can handle enormous amounts of data and produce plausible output, yet it cannot experience what that data represents in the real world; it simply does not have the human capacity for direct experience.

Experience is more than perceiving information. It includes emotion, consciousness, and subjectivity — the heart of what it means to be human. Experience is the key that links inner consciousness with outer reality, individual with society (Dilthey).

And understanding, it's often argued, is inseparable from experience. We come to a deep understanding of the world through lived events, emotional turmoil, and interaction with others. AI can extract patterns from vast data, but it cannot meet the temporal and practical requirements of experience. Its "understanding" is algorithmically simulated, not understanding in any genuine sense.

Part 2: The Infinite of Concept and the Limits of Language

Next, the problem of language in LLMs.

In a 2020 piece, On “Game Aesthetics,” Starting from Sky: Children of the Light, I wrote:

Because of the inherent limitations of human spoken and written language, and because the properties of things are infinitely rich, there can be no perfectly precise expression. And since emotion and things are constantly changing and developing, all verbal expression can only ever be an endless chase after a moving target.

At the Tower of Babel, God fractured language and humanity shattered with it. But the meaning of language isn’t determined by God — it’s determined by those who use it, and language isn’t the only medium of communication. Countless other behaviors can close the distance between a person and a concept. But only close — the concept itself is infinite. Siddhartha makes the same point: language cannot fully transmit truth or wisdom.

In logic, a proposition places an object under a concept; any object can be treated as an individual bearing properties. For example:

Fa: Confucius is a philosopher

Here a is an individual constant denoting Confucius, and F is a predicate saying that this individual is a philosopher. But concrete individuals resist full grasp: in any form of artistic expression, what we capture is only certain characteristics of a particular individual. The concept we hold in mind is formed by the fusion of countless such perspectives. The higher our vantage point and the broader our historical and cultural horizon, the more accurately we can assess the significance of everything within that view, large and small, near and far.
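
To see the structure of that proposition laid bare, here is a minimal sketch in Lean 4. The names Person, Philosopher, and confucius are hypothetical, chosen only for this illustration:

    -- Predication as typed application: the proposition Fa
    -- places the individual a under the concept F.
    axiom Person : Type                   -- a domain of individuals
    axiom confucius : Person              -- the individual constant a
    axiom Philosopher : Person → Prop     -- the predicate F
    -- The proposition Fa: "Confucius is a philosopher."
    #check (Philosopher confucius : Prop)

The formalization makes the essay's point visible: the predicate fixes exactly one property, while the individual it applies to overflows any finite list of such predicates.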

Beyond this, translating subjective experience into objective language is itself inherently problematic. An observer's subjective feeling cannot be fully shared, because no one else can become that observer or undergo an experience identical to the original. As Wittgenstein put it: "The limits of my language mean the limits of my world."

AI's text-based processing and expression therefore necessarily make understanding and empathy difficult.

Part 3: The Cold Reflection of AI Companionship

Earlier we established that the authenticity of emotion comes from experience. Human emotional complexity lies not just in how emotion is expressed, but in its roots: personal history, cultural context, biological instinct. AI lacks the capacity for lived experience, and that absence means its emotional responses are likely to remain superficial. AI's "understanding" of human emotion may be more like a mirror's reflection than any form of genuine resonance. A painting can imitate nature, but can never be nature itself.

Understanding might still be logical, computational — but resonance requires an inner experience and an emotional connection.

This is not understanding. It is not resonance. It is like a reflection in a cold, still pool: a perfect, icy image of the world’s surface.

In 2019 I wrote a piece called How Could Artificial Consciousness Be Possible?, which explored whether AI could have genuine self-consciousness. I drew on the Chinese Room thought experiment and philosophical zombies to argue that humans possess a unique capacity for subjective experience, one that is private and unobservable from the outside. What we can say with confidence today is that current LLMs lack genuine inner experience.

Emotion is at the core of human experience. It isn’t just a response to external stimuli — it’s a genuine expression of our inner world. Yet as emotion-simulating technology becomes ubiquitous, we risk a kind of emotional dilution: complex, deep experience flattened into replicable patterns — which is precisely the trap that most AI companion apps fall into.

The particularity of emotion comes from individual life history and cultural context. Each person’s emotions reflect their own life story, their own way of engaging with the world. We need to honor the uniqueness of individual experience rather than reducing it to universal patterns. We can write backstories into an agent’s prompt, give it a relationship history and a personality and hobbies and life experiences — but the AI hasn’t lived any of that. Everything becomes simulation. Its emotional feedback becomes performative, formulaic. Users can spin up multiple agents in a companion app, chat with each of them, and see how each persona handles a given emotional situation. Even if things feel formulaic, no big deal — get bored with one, create the next.
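
To make that concrete, here is a hypothetical persona prompt of the kind a companion app might use. Every name and detail below is invented for illustration; the point is that the agent's entire "life" is authored text:

    # Hypothetical persona prompt for a companion agent.
    # Nothing here was lived; the agent will perform this history fluently,
    # so its emotional feedback is simulation by construction.
    persona_prompt = """You are 'Lin', 29, a pottery teacher.
    Backstory: you moved cities after a difficult breakup; you love rainy days.
    Speak warmly, and draw on your 'memories' when comforting the user."""

Swap in a different string and you have a different "person," which is exactly why the feedback turns performative.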

What we actually need is to help users cultivate a critical awareness — to be able to distinguish between emotions that are genuinely their own and those steered by external technology. We need to protect the authenticity of human emotion, and that requires ongoing self-reflection and self-understanding. Only through genuine inner exploration can we truly understand where our feelings come from — and discover their authentic value in that process.

In an age of simulated emotion, what we need is not simply companionship products — we need a deep and genuine respect for the nature of human emotion. Only then can we, in this complicated world, preserve the uniqueness and authenticity of what we feel.

Part 4: Meaning Emerges from Practice

If AI’s emotional understanding is simulation, does that mean any interaction between humans and AI loses its deeper meaning?

In human relationships, meaning often emerges from genuine emotional exchange, resonance, and understanding.

But meaning doesn’t come from emotional authenticity alone. Meaning can also arise from function, from usefulness, from outcomes. When AI helps us solve complex problems or improves our lives, that interaction itself is meaningful — even if the emotional understanding is simulated.

It’s also worth reflecting on human emotion itself. Are our emotions always authentic? Or are they also shaped — “programmed,” in a sense — by social, cultural, and biological forces? If we accept that human emotion is sometimes itself a complex kind of simulation, does AI’s simulated emotion seem quite so different? In my earlier piece, How Could Artificial Consciousness Be Possible?, I wrote:

The Chinese Room argument neglects the engineering dimension. A real implementation would require building a model or function, and even if symbols lack inherent semantics, the fact that input and output are predictable means that human consciousness has already shaped them. When the model or function is defined, formal semantics is already built in. This makes it semantic, not merely syntactic. In NLP research today, whether one takes the empiricist approach of building deep learning models or the rationalist approach of formal logic, the system is already semantic the moment it is constituted. For the latter this is obvious; for the former, the labels used in supervised learning carry semantic content explicitly assigned by humans.
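
A trivial illustration of that last claim, with invented toy data: before any training happens, the labels are already meanings chosen by a person.

    # Hypothetical toy data, not from any real dataset. The labels
    # ("positive"/"negative") are semantic content a human assigned;
    # a model trained on these pairs inherits that human-made meaning.
    labeled_examples = [
        ("I loved the film", "positive"),
        ("The plot was dull", "negative"),
    ]

The "syntax only" picture of the Chinese Room leaves this labeling step out.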

So what we call “meaning” does not reside in experience, or even in understanding — it depends more on how we choose to see and use these interactions. AI’s emotional simulation can serve as a tool: helping us understand ourselves better, facilitating emotional communication between people, and in some circumstances, providing a form of emotional support. The meaning of human-AI interaction may not lie in the authenticity of emotion, but in what value and purpose we assign to those interactions. In exploring AI, we are both observing and creating, being observed and being redefined. This two-way dynamic may be the necessary path toward truth.

Part 5: The Answer MoFlow Is Pursuing

True understanding and wisdom are not just accumulated information — they are deep insights obtained through experience, reflection, and emotion. This is uniquely human, and it’s the core value we should protect and cherish in the face of AI. It is also MoFlow’s guiding philosophy.

Every design decision in MoFlow centers on the user. Features de-emphasize the AI's presence: we leverage AI fully as a tool while keeping it out of the role of emotional companion.

For example: after you write in MoFlow about something that happened to you, the app quietly extracts your thoughts and surfaces those with positive emotional valence. When you later catch an unexpected glimpse of that externalized energy, you start to build more positive beliefs without even noticing. MoFlow encourages positive self-dialogue.
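
The shape of that pipeline is easy to sketch. What follows is a toy Python outline under my own assumptions, not MoFlow's actual code; score_valence in particular is a keyword stand-in for whatever sentiment model a real app would use:

    # Toy sketch of the resurfacing idea: extract thoughts from an entry,
    # score their emotional valence, and hold back a positive one to
    # show the user again later. Hypothetical names throughout.
    from dataclasses import dataclass
    import random

    @dataclass
    class Thought:
        text: str
        valence: float  # -1.0 (negative) .. 1.0 (positive)

    POSITIVE_CUES = {"grateful", "proud", "calm", "learned", "improved"}
    NEGATIVE_CUES = {"failed", "anxious", "worthless", "stuck"}

    def score_valence(text: str) -> float:
        """Keyword stand-in for a real sentiment model."""
        words = set(text.lower().split())
        hits = len(words & POSITIVE_CUES) - len(words & NEGATIVE_CUES)
        return hits / max(len(words), 1)

    def extract_thoughts(entry: str) -> list[Thought]:
        """Split an entry into sentences and score each one."""
        sentences = [s.strip() for s in entry.split(".") if s.strip()]
        return [Thought(s, score_valence(s)) for s in sentences]

    def pick_resurfacing(thoughts: list[Thought]) -> Thought | None:
        """Choose one positively valenced thought to resurface, if any."""
        positive = [t for t in thoughts if t.valence > 0]
        return random.choice(positive) if positive else None

    entry = "I felt anxious before the demo. I learned a lot and I am proud of the launch."
    surfaced = pick_resurfacing(extract_thoughts(entry))
    if surfaced:
        print(surfaced.text)  # "I learned a lot and I am proud of the launch"

The design point is the delay: the user re-encounters their own positive words later, unprompted, rather than hearing praise generated by the AI.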

From the piece A Method for Rapid Mental Growth: Increasing Your Own Agency:

“Many people default to negative self-talk, such as:

  • Self-negation: I’m not good at this, so I should avoid it.
  • Self-doubt: This problem seems tricky — might it be beyond me?
  • Self-criticism: I acted so badly just now. How did I perform so poorly?

These seem harmless, but the brain has a particular trait: it believes information that is repeated. These apparently ordinary internal dialogues, through the brain’s repetition and consolidation, gradually form fixed patterns.

Over time, the brain starts to believe them — lowering our self-assessment and causing us to actually become what we’ve been telling ourselves. This severely constrains our agency; when difficulty arrives, we hesitate, hold back, and find it hard to act effectively.

So the first step toward change is converting negative dialogue into positive self-talk:

  • I’m not good at this, so I should avoid it → I have another chance to gain experience.
  • This problem seems tricky — might it be beyond me? → Am I becoming stronger? Let me test myself with this problem.
  • I acted so badly just now. How did I perform so poorly? → I’ve already improved compared to before. Maybe next time will be even better.

One important note: many books teach ‘self-suggestion’ — telling yourself ‘I’m great,’ ‘I’m strong,’ ‘I’m capable.’ This is wrong.

Research shows that vague, imprecise self-suggestion, or suggestion the brain doesn’t believe, is not just ineffective — it backfires. It highlights the problem and makes something that wasn’t so serious feel more serious.”

When you’re facing a real difficulty, MoFlow will say something like: “You’ve solved similar problems before. You have the transferable experience, and you have what it takes to handle this.” Or: “Even getting it wrong is fine — it enriches your life and becomes new experience.”

Today’s AI can analyze large datasets to reveal patterns in human behavior, helping us better understand ourselves and society. But ultimately, genuine understanding still flows back to individual experience and personal reflection.

AI’s understanding is instrumental, not existential. It can help us better understand certain aspects of the human condition, but it cannot replace the deep insight we gain through lived experience and inner reflection. This is why MoFlow’s AI design is guiding and evocative — encouraging self-dialogue, self-growth, and self-care.

Care, attention, and love — these are gifts we receive, and gifts we can give to others. They only come alive in a life of generosity.

We study theory not to show off by citing books, but to change the world through what we do. Behind every act of practice are the forces of many souls, and those efforts ultimately land in the world — bringing something good to others. That is the meaning of practice we’re after.

“Can AI actually help people heal emotionally?”

“Yes. Absolutely.”

🎬 Books, Films, and More

What I’ve been reading, watching, and playing this period:

  • Finished: Biography | Why Fish Don’t Exist | ★★★★★
  • Finished: Psychology | Nothing: The Optimal State | ★★★☆☆
  • Finished: Psychology | Don’t Believe Everything You Think | ★★★☆☆
  • Finished: Fiction | Six Lying Undergrads | ★★★★☆
  • Finished: Popular science | How Far Are the Stars? | ★★★★★
  • Finished: Korean drama | Anna | ★★★★★
  • Finished: Drama | White Night Breaks | ★★☆☆☆
  • Currently reading: Philosophy | Worldviews | ★★★★☆
  • Currently reading: Psychology | The Knowledge Illusion | ★★★☆☆
  • Currently reading: Philosophy | What We Live For | ★★★★☆