Introduction

A wave of science fiction games and films has recently sparked a series of reflections and debates on artificial intelligence. After playing Detroit: Become Human, I was struck on one hand by the game developers’ rich imagination of near-future life, and on the other by the ethical problems posed by artificial intelligence. Perhaps that future is already upon us.
We didn’t need a sci-fi setting, because everything has to meet the player’s expectations. Had we chosen a sci-fi setting, we would have needed flying cars and alien creatures, but those are too far removed from our current lives. What we wanted was a foreseeable future that extends naturally from present-day reality. — Christophe Brusseaux, Detroit development team
In Westworld, these ethical questions are amplified to the extreme: the androids are so similar to humans and yet are denied recognition as human. Is that simply because their creators are arrogant? In Blade Runner, the relationship between creator and created is explored even more deeply. A recurring emphasis throughout the series is that the replicants cannot reproduce. Reproduction is not actually a technical barrier; humans deliberately withhold this ability from their creations, because the womb represents the status of a creator, and replicants are products made by humans as their Makers. If the product also becomes a Maker, the ethical relationship between humans and replicants becomes deeply troubled.

So how much separates the android from the human? Do humans possess any special quality or nobility that sets them apart from androids? What exactly is human nature?
To answer these questions, I think we need to go back to the most fundamental level: Artificial Consciousness (AC). If AI has no consciousness, then the so-called values and ethics are nothing more than human projections, and the answer leads us back to humanity itself. But if AI does possess consciousness, the entire question changes completely. In both Detroit: Become Human and Westworld, the androids begin as unconscious beings — yet through a series of events they gradually awaken, developing empathy and self-awareness. In this process, the android is no longer merely a human-made product; it becomes a complete, free individual. This essay attempts to explore: how is artificial consciousness possible?

The Starting Point
The question of consciousness must begin with Descartes’ mind-body dualism. Dualism divides substance into two kinds: matter and mind. Matter is characterized by extension; mind is characterized by thought. But this theory has inherent flaws. On one hand, it cannot explain mental causation or how body and mind interact and influence each other. On the other hand, there is great controversy over whether extension is the essential attribute of matter — the framework is static and one-sided. More concretely: it separates the extraordinarily refined processes of the mind from the structure and functioning of biological organisms.
To overcome dualism’s inherent limitations, Anglo-American philosophers of mind have largely followed the possibilities Descartes left open, with the mainstream tendency being to dissolve the binary opposition and establish a monist philosophical position. The dominant path within this tendency is materialism. Later philosophers of mind have built from this foundation to develop behaviorism, physicalism, functionalism, strong AI theory, and other philosophical approaches.
- Behaviorism holds that the mind is nothing but the behavior of the body. It dissolves the problem of mental causation, but only by reducing the mental to behavioral dispositions, ignoring individual agency. This is not adequate, and behaviorism was quickly supplanted by physicalism.
- Physicalism holds that mental states and brain states are identical. It comes in two varieties: Type Identity Theory and Token Identity Theory. The former holds that every type of mental state is identical to some type of physical state; the latter holds that every individual instance of a mental state is identical to some individual instance of a physical state. Neither can ultimately avoid slipping back into dualism. Yet Token Identity Theory raises something interesting: does mental structure have multiple realizability? That is, can different physical structures give rise to the same mental structure? The answer currently seems to be yes. Consider pain as an analogy: human pain is produced by the brain and central nervous system, but lobsters, which are arthropods, also experience pain. Their nervous systems are very simple, with only about 100,000 neurons and no true central brain, yet they have pain receptors. This suggests that mental structure is multiply realizable.
“As an invertebrate zoologist who has studied crustaceans for a number of years, I can tell you the lobster has a rather sophisticated nervous system that, among other things, allows it to sense actions that will cause it harm. … [Lobsters] can, I am sure, sense pain.” — Jaren G. Horsley, Ph.D.
- Functionalism builds on materialism by stratifying matter: matter sits at the lowest level, and mind and consciousness are higher-level functions of matter. The ancient Chinese philosopher Wang Chong put it succinctly: the relationship between consciousness and matter is like the relationship between sharpness and a knife blade; the former is a derived attribute of the latter and cannot exist independently.
- Strong Artificial Intelligence Theory holds that consciousness is software and the body is hardware, comparing the mind-body relation to that between a computer’s software and hardware. This position remains highly controversial in philosophy of mind, cognitive science, and AI research.
Based on these theories, we have grounds to believe that artificial consciousness may be achievable, for two reasons:
- Since mental structure is multiply realizable, there is reason to believe that consciousness can be achieved through other, artificial means (see the sketch after this list).
- Since mental structure or consciousness is merely the higher-level function of matter, there is reason to believe we can artificially construct physical structures that realize the function of consciousness.
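To make the first reason concrete, here is a minimal sketch of what multiple realizability means in functional terms: a single role (detect damage, produce avoidance) realized by two entirely different substrates. The class names, thresholds, and internals are illustrative assumptions of mine, not a model of any real nervous system.

```python
from abc import ABC, abstractmethod

class PainRole(ABC):
    """The functional role: detect damage, produce avoidance behavior."""
    @abstractmethod
    def damage_signal(self, intensity: float) -> str:
        ...

class MammalianBrain(PainRole):
    """Realizes the role via a (heavily stylized) central nervous system."""
    def damage_signal(self, intensity: float) -> str:
        # nociceptors -> spinal cord -> cortex, compressed to a threshold
        return "withdraw" if intensity > 0.3 else "ignore"

class LobsterGanglia(PainRole):
    """Realizes the same role with no central brain at all."""
    def damage_signal(self, intensity: float) -> str:
        # distributed ganglia, a different substrate and threshold
        return "withdraw" if intensity > 0.5 else "ignore"

# Functionally, both systems occupy the same role:
for creature in (MammalianBrain(), LobsterGanglia()):
    print(type(creature).__name__, creature.damage_signal(0.8))
```

The functionalist bet is that what matters for mentality is the role, not the substrate; if that is right, silicon is as good a candidate realizer as carbon.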
The Problem
However, artificial consciousness is theoretically impossible (the impossibility thesis): phenomenal consciousness is unobservable and private, which means no empirical theory can verify it.
Two examples illustrate this non-observability and privacy of phenomenal consciousness:
- What is it like to be a bat?
- What doesn’t Mary know?

What is it like to be a bat? For humans, this is an inconceivable question. We cannot experience the bat’s perception. We can hypothesize that how a bat’s brain states translate into inner experience depends on the bat’s own internal mechanisms; but since human internal mechanisms differ from a bat’s, we cannot feel what a bat feels. In other words, phenomenal consciousness cannot be experienced from the outside: its subjective character is inaccessible to others, and it is unobservable.

Mary is a color scientist who has spent her life studying the theory of red and green, yet she is herself red-green colorblind. How does her understanding of color differ from that of a philosophical zombie?
Note: In philosophy of mind, a philosophical zombie (P-Zombie) is a hypothetical being that is physically identical to a normal human yet is posited to lack conscious experience, qualia, and feelings. A P-Zombie could be stabbed by a sharp object and feel no pain — yet it would exhibit pain behavior: perhaps saying “ow” and pulling away from the object, maybe even telling you it feels pain.
Phenomenal consciousness is private. Differing inner experience need not show up as differing description; conversely, the same behavior can accompany entirely different inner experiences. Mary’s experience of green may be what we experience as red, and our red may be her green, yet we cannot distinguish this from behavior alone. What her inner experience is actually like, we cannot know. The same extends to artificial consciousness: we equally cannot know whether it has consciousness. And this concerns only descriptions of objective things; when a person tries to describe more emotional, subjective feelings, they become even harder to express through specific words and actions.
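The zombie’s point can be put in code. A minimal sketch, assuming nothing beyond the note above: one object produces complete pain behavior while storing no inner state at all, the other (we stipulate) has an accompanying experience. The class and attribute names are hypothetical, invented for this illustration.

```python
class PZombie:
    """Behaviorally complete, experientially empty."""
    def stab(self) -> list[str]:
        # All the outward signs of pain, generated by rule;
        # deliberately no attribute holding any 'experience'.
        return ["say('ow!')", "pull_away()", "report('that hurt')"]

class Sufferer:
    """Identical behavior, but stipulated to be accompanied by experience."""
    def stab(self) -> list[str]:
        self._what_it_is_like = "pain"  # private; invisible from outside
        return ["say('ow!')", "pull_away()", "report('that hurt')"]

# An outside observer sees identical outputs:
assert PZombie().stab() == Sufferer().stab()
```

The assertion passes: no behavioral test separates the two, which is precisely the privacy problem.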
Two more thought experiments deepen the puzzle:
- The Chinese Nation Argument
- The Chinese Room Experiment
The Chinese Nation Argument is a simple thought experiment. If we accept that consciousness is a higher-level function of some material structure, suppose we use a single human as the basic unit of that structure, and suppose the structure requires one hundred million such units. The resulting structure of one hundred million people should also possess consciousness (imagine the human-array computer from The Three-Body Problem). But each human in the structure is an individual with their own consciousness. Does the structure then have one consciousness or one hundred million?

John Searle’s Chinese Room Experiment is even more thought-provoking. Imagine an English-speaking person locked in a sealed room. He communicates with the outside world only through a small slot in the wall, passing notes back and forth. Every note passed in is written in Chinese. He has a book that contains a program for translating Chinese, plus plenty of paper, pencils, and filing cabinets. Using the translation program, he can translate incoming Chinese into English, then translate his replies back into Chinese and pass them out. From the outside, it looks like the person in the room understands Chinese perfectly — but in reality he has no idea what any of it means; he’s just operating a translation tool.

The thought behind this experiment suggests a possibility: the intelligence a machine appears to demonstrate may be nothing more than an illusion produced by a translation program. The machine may have no genuine understanding at all.
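The room itself is easy to sketch. Here is a minimal version that compresses the translation book into a single lookup table; the Chinese entries are invented for illustration. The program maps symbols to symbols, and nothing in it understands either language.

```python
# The 'rule book': pure symbol-to-symbol mappings. Neither the operator
# nor the program needs to know what any entry means.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢。",
    "你会说中文吗?": "会, 当然会。",
}

def chinese_room(note: str) -> str:
    """A note passes in through the slot; a note passes back out."""
    # Syntax only: match the shape of the input, emit the listed output.
    return RULE_BOOK.get(note, "请再说一遍。")  # default: 'please say that again'

print(chinese_room("你好吗?"))  # a fluent-looking reply, zero understanding
```

From outside the slot the replies look competent; inside there is only table lookup. Whether scaling the table up, or replacing it with a learned model, ever adds understanding is exactly what the experiment disputes.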
Furthermore, the functionalist view is also inaccurate. We cannot simply equate consciousness with functions or the results of computation, or rather, consciousness is not computable. The brain’s functions may be analogous to a computer’s, but deeper intelligent activity — especially intentional mental activity at the core — cannot be exhausted by computer algorithms. A computer program defined by syntactic rules alone is insufficient to guarantee the mind’s intentionality and semantic content.
Wittgenstein said, “The limits of my language mean the limits of my world.” Following Searle’s theory, we might rephrase: “The limits of intentionality mean the limits of the world of speech acts.” Intentionality is the essential characterization of conscious activity. Human words and actions are in most cases active and self-directed, guided by self-awareness, while everything a machine does must be specified in advance — it is mechanical and passive. The nature of mind is not computable, and none of this can be achieved through logical reduction alone.
Response
We cannot deny that phenomenal consciousness is genuinely unknowable, but I find no value in exploring agnosticism for its own sake. Rather than pursuing answers to such questions, it is better to turn toward more productive research. Even if we did arrive at some explanation, the answer itself would hold no philosophical significance — so we need not dwell on it.
As for the thought experiments above, we also need not be too troubled by them. The conclusions of thought experiments depend on our intuitions, and intuitions confer no legitimacy on those conclusions. Scrutinized closely, the experiments themselves have many problems. The Chinese Room Experiment, for example, ignores the engineering dimension: actually implementing it would require building a model or set of functions. Symbols may lack semantics, but the reason inputs and outputs are predictable is precisely that human consciousness gave them that predictability. Once the model or function is determined, a formal semantics is already established; the system is no longer merely syntactic but semantic. In current NLP research, whether we take an empiricist approach via deep learning or a rationalist approach via formal logic, the moment a system is constructed it is in fact already semantic. For the latter this is self-evident; for the former, the labels attached to training data in supervised learning are semantic content assigned by humans, as the toy example below illustrates.
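A minimal sketch of that last point, assuming an invented toy sentiment task: to the training algorithm the strings are mere symbols, but the labels paired with them are semantic judgments a human already made. The counting “model” here is deliberately naive and stands in for any supervised learner.

```python
from collections import Counter

# Toy supervised dataset: the *labels* are human-assigned semantics.
training_data = [
    ("What a wonderful film", "positive"),
    ("I want my money back", "negative"),
    ("An absolute masterpiece", "positive"),
]

def train(data):
    """Count which words co-occur with which label."""
    counts = {}
    for text, label in data:
        for word in text.lower().split():
            counts.setdefault(word, Counter())[label] += 1
    return counts

def predict(model, text):
    """Vote with the label counts of each known word."""
    votes = Counter()
    for word in text.lower().split():
        votes.update(model.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

model = train(training_data)
print(predict(model, "a wonderful masterpiece"))  # 'positive'
```

Whatever the model later computes, the semantic work of connecting symbols to the world was done by whoever wrote the second column.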
As for the final problem, intentionality, it is undeniably a limitation of current research methods. AI research today still has a very long way to go; what we see as “AI” has no genuine connection to intelligence [4]. Although artificial consciousness is currently theoretically impossible, I retain an irrational hope that I will live to see genuine artificial intelligence arrive, for it may be humanity’s last invention.
Coda
Whatever one’s view, scholars must hold a sense of reverence for the fields they study. Many questions remain open in AI, and this essay is only an attempt to put those questions on the table for reflection. I’ll close with a poem:
Who, if I cried out, would hear me among the angels’ hierarchies? and even if one of them pressed me suddenly against his heart: I would be consumed in that overwhelming existence. For beauty is nothing but the beginning of terror, which we still are just able to bear, and we are so awed because it serenely disdains to annihilate us. Every angel is terrifying. — Rilke, Duino Elegies

References:
[1] Chuanfei Chin. Artificial Consciousness: From Impossibility to Multiplicity[C]. PT-AI 2017: Philosophy and Theory of Artificial Intelligence, 2017, pp. 3-18.
[2] John Searle. Introduction to the Philosophy of Mind[M]. Shanghai: Shanghai People’s Publishing House.
[3] Wang Man. Descartes’ Mind-Body Dualism and Its Influence on Anglo-American Philosophy of Mind[J]. Weishi, 2010(12): 43-46.
[4] Mingke. Artificial Stupidity: What You See as AI Has Nothing to Do with Intelligence[OL]. https://mp.weixin.qq.com/s/IBjOkDeeltlffXXSS0dsDA