
Understanding AI Coding Through Cybernetics

The process you go through every day when coding with AI — set a goal, let the AI execute, review the result, adjust the instruction — is, in fact, the feedback loop Norbert Wiener described in Cybernetics back in 1948. This isn’t a metaphor. It’s the same mathematical structure.

Recognizing this lets you evolve from “tuning prompts by feel” into “systematically designing human-AI collaboration.”

It all starts with the feedback loop

Cybernetics’ most classic model is the negative feedback control loop — a system observes its own output and then adjusts its behavior:

Interactive · Diagram
[Negative feedback control loop, the core model of cybernetics: Goal/Reference → Comparator (computes error E) → Controller (decides action) → System (the actual process) → Sensor (measures output) → feedback signal back to the comparator.]

Five components, each with a role: Goal sets the target state, Comparator computes the error, Controller decides on an action based on the error, System executes and produces an output, and Sensor measures the result and feeds it back. The loop closes.

Let’s use a thermostat to feel how this loop “converges” to a target temperature:

Interactive · Thermostat Simulator
[Sliders: target temperature (28°C), gain Kp (14), disturbance (0.0). Chart: room temperature (°C) and heater output over time steps.]

Play with the parameters and you’ll intuitively grasp three core concepts: lower the gain and the system is sluggish but stable (overdamped); raise the gain and it reaches the target fast but oscillates wildly (underdamped, with overshoot); disturbance represents external uncertainty — the whole point of the feedback loop is to continuously correct for it.
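The behavior the simulator shows takes only a few lines to reproduce. Below is a sketch of a proportional controller driving a leaky room model; all the constants (heating rate, leak rate, gains) are illustrative, not the widget's actual parameters:

```python
# A minimal sketch of the thermostat loop: proportional control of a
# leaky room. All constants here are made up for illustration.

def simulate(kp, target=28.0, temp=10.0, ambient=10.0, steps=60):
    """Run the feedback loop and return the temperature trajectory."""
    history = []
    for _ in range(steps):
        error = target - temp            # comparator: Goal - Actual
        heat = max(0.0, kp * error)      # controller: push harder when far away
        temp += 0.1 * heat               # plant: heater warms the room
        temp -= 0.05 * (temp - ambient)  # plant: walls leak heat outward
        history.append(temp)
    return history

low = simulate(kp=1.0)    # sluggish: creeps up to about 22°C and stalls
high = simulate(kp=20.0)  # aggressive: shoots far past 28°C, then oscillates
```

Note the low-gain run never actually reaches 28°C: holding the temperature against the leak requires nonzero heat, and pure proportional control only produces heat when there is nonzero error. That residual gap is the classic steady-state error, which the integral term of a PID controller exists to remove.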

The comparator: the elegance of one subtraction

Having grasped the overall loop, let’s look at its most essential node — the Comparator. It does exactly one thing:

Error = Goal - Actual

Just one subtraction — but the signal it produces drives the behavior of the entire system:

  • Error positive (actual < goal) → not there yet, controller “pushes harder.” The bigger the error, the harder it pushes.
  • Error zero (actual = goal) → perfect hit, the system enters steady state.
  • Error negative (actual > goal) → overshoot, controller “hits the brakes.”
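The three regimes, and the decoupling idea that follows from them, can be stated as code. This is a sketch with hypothetical numbers:

```python
def comparator(goal, actual):
    """The comparator's entire job: Error = Goal - Actual."""
    return goal - actual

def controller(error, kp=2.0):
    """Decoupling: the controller sees only the error, never the goal
    and never the plant. Swap either one and this code is unchanged."""
    return kp * error

# The sign of the error is the controller's whole steering signal:
assert comparator(28, 20) > 0   # below target: push harder
assert comparator(28, 28) == 0  # on target: steady state
assert comparator(28, 31) < 0   # overshot: hit the brakes
```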

Behind this simple subtraction lies a profound design idea: the controller doesn’t need to know what the goal is, nor what the system is doing — it only needs to know “by how much.” Through decoupling, each module can mind its own business. This is the elegance of cybernetics.

When you review AI-generated code, you’re actually doing the comparator’s job — “I wanted X, the AI produced Y, where’s the gap?” The more precisely you describe the error, the more accurately the AI can correct.

Plant: the “temperament” of the thing being controlled

The comparator solves “how to measure the gap,” but the real difficulty of control lies on the other side — the Plant, the thing you want to control.

The Plant doesn’t just do whatever you say. Crank the thermostat to max and the room won’t instantly become hot — the walls absorb heat, the windows lose heat, wind is blowing outside. These are the Plant’s inherent physical inertia and disturbances. The controller has to understand (or at least adapt to) these characteristics to work effectively.

In AI Coding, your codebase is the Plant. It has historical debt, implicit constraints, context the AI can’t see. The same prompt runs smoothly on a clean project and crashes on a complex legacy one — not because the AI got dumber, but because the Plant got harder.

The four “temperaments” of a Plant

Having understood the concept, let’s look at how a Plant specifically challenges the controller. Same controller (same gain), different Plant temperaments:

Interactive · Plant Complexity Simulator
[Sliders: inertia, delay, nonlinearity, disturbance (all starting at 0), plus a "Real codebase" preset. Chart: temperature (°C) over time steps against the 28°C target.]

Four characteristics, each mapping directly to AI Coding daily life:

  • Inertia: the “change one thing, ripple everywhere” of a large codebase. The AI modifies one file, and the chain reaction takes time to surface.
  • Delay: the most lethal property. You only discover the problem after compile-and-deploy — the controller is making decisions on stale information. This is why hot reload and fast tests matter so much.
  • Nonlinearity: some bugs don’t show up at small scale but explode in production. The same prompt works for simple problems and suddenly fails on complex ones.
  • Disturbance: changing requirements, third-party API behavior drift, parallel changes from teammates.
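All four temperaments can be folded into one toy plant model. This is a sketch with made-up constants, not the simulator's actual equations:

```python
import random
from collections import deque

def plant_step(temp, heat, pipeline, ambient=10.0, inertia=0.9,
               delay=3, nonlinearity=0.02, disturbance=1.0):
    """Advance the plant one step with all four temperaments active."""
    pipeline.append(heat)
    # Delay: the controller's action only takes effect `delay` steps later.
    delayed = pipeline.popleft() if len(pipeline) > delay else 0.0
    # Nonlinearity: diminishing returns; doubling the heat doesn't double the effect.
    effective = delayed / (1 + nonlinearity * delayed)
    # Disturbance: the outside world pushes the state around at random.
    noise = random.uniform(-disturbance, disturbance)
    # Inertia: the new state is mostly the old state.
    return inertia * temp + (1 - inertia) * (ambient + effective) + noise

random.seed(0)
temp, pipeline = 10.0, deque()
for _ in range(40):
    heat = max(0.0, 5.0 * (28.0 - temp))  # the same naive P-controller as before
    temp = plant_step(temp, heat, pipeline)
```

Because the controller is reacting to a state that its own past actions haven't reached yet, even a reasonable gain can oscillate here: the "stale information" problem from the Delay bullet above, in miniature.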

Click the “Real codebase” button — all four temperaments at once. That’s why vibe coding on a real project is so much harder than on hello world.

Cybernetics’ prescription is unglamorous: if you can’t simplify the Plant, raise your understanding of it. Give the AI better context (reduce delay), write more precise rules (handle nonlinearity), break down tasks (reduce inertia), invest in CI/CD (reduce disturbance). These engineering practices are essentially system identification and disturbance rejection from cybernetics.

And the core capability of a Harness Engineer is helping the Controller understand the Plant — using system prompts, context, and rules to build the AI’s mental model of the codebase.

The evolution of control strategies

The more complex the Plant, the more sophisticated the control strategy needs to be. From simple to complex, control strategies have evolved through five levels:

Interactive · Control Strategy Evolution
[Click each level for details and its AI Coding analogue.]

  1. Open-loop: no feedback
  2. On/Off: bang-bang control
  3. PID: proportional-integral-derivative control
  4. MPC: model predictive control
  5. Adaptive / RL: adaptive control / reinforcement learning

From open-loop to adaptive, the progression is essentially about increasing “understanding of the Plant.” Your growth trajectory as a Harness Engineer follows the same path: blind prompting (open-loop) → glance at the result and tweak (on/off) → careful review + writing rules (PID) → plan context and architecture in advance (MPC) → continuously accumulating CLAUDE.md and iterating workflows (adaptive).
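To make level 3 concrete, here is a sketch of a PID controller run against the same kind of leaky room as in the thermostat example; the class, gains, and plant constants are all illustrative:

```python
class PID:
    """Proportional-Integral-Derivative controller (level 3 above)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0    # memory of past error: removes steady-state error
        self.prev_error = 0.0  # for the derivative term: damps overshoot

    def update(self, error, dt=1.0):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.1, kd=1.0)
temp, target = 10.0, 28.0
for _ in range(200):
    heat = max(0.0, pid.update(target - temp))
    temp += 0.1 * heat - 0.05 * (temp - 10.0)  # leaky-room plant
# Unlike pure proportional control, the integral term winds up until the
# heater exactly cancels the leak, so temp settles on the target itself.
```

Loosely mapped back to the AI Coding ladder: the integral term resembles the rules you accumulate across sessions (memory of repeated errors), and the derivative term resembles noticing a review trending off the rails and braking early — an analogy, not a formal equivalence.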

Cybernetics is everywhere

By now you may have noticed that the power of cybernetics lies in its being a cross-disciplinary unifying framework. In any domain, once you can identify the four roles — Goal, Controller, Plant, Sensor — you can analyze the problem in the same language:

| Scenario | Goal | Controller | Plant | Sensor |
| --- | --- | --- | --- | --- |
| Thermostat | Target temperature | Temperature control chip | Room (walls, windows, air) | Temperature sensor |
| Human body temperature | 37°C | Hypothalamus | Body (blood vessels, sweat glands, muscles) | Thermoreceptors |
| Autonomous driving | Lane center | Control algorithm | Vehicle (inertia, tires, road) | Camera + LiDAR |
| RLHF | Helpful & safe | Reward model | LLM weights | Human raters |
| AI Coding | Requirement goal | AI (Claude) | Codebase | You (reviewing code) |

The last row is what you do every day. And the second-to-last row shows that the model you're using is itself a direct product of cybernetics: RLHF is a feedback loop in which human raters act as the sensor and the reward model as the controller.

The key insight from this table is this: the difficulty of each scenario depends almost entirely on the complexity of the Plant. A thermostat’s Plant is linear and predictable; a codebase’s Plant has inertia, delay, nonlinearity, and disturbances — which is why AI Coding is much harder than tuning a thermostat.

Once you understand this layer, you can move on to the most “philosophical” part of cybernetics — the one most relevant to AI Coding.

Second-order cybernetics: from “controlling” to “participating”

Everything discussed so far has been first-order cybernetics — the observer stands outside the system and calmly observes and manipulates. The core shift in second-order cybernetics is: the observer is part of the system, and the act of observation itself changes the system being observed.

Interactive · First-order vs Second-order
[Side-by-side diagram: first-order places the observer outside the system boundary; second-order expands the boundary to include the observer, whose observation feeds back into the Plant.]

On the left is first-order: the Observer stands outside the system — “I observe and control the system.” On the right is second-order: the Observer is pulled inside the system boundary, and the observation itself is changing the Plant — forming an additional reactive loop.

The key paradigm shift: from “controlling a system” to “participating in a system.”

Bidirectional shaping: you and the AI co-evolving

This idea unfolds most beautifully in the AI Coding scenario. The relationship between you and the AI is not a one-way “I control you” — it’s a bidirectional shaping loop, where neither side is purely “controller” or “plant”:

Interactive · Co-Evolution: Two-Way Shaping
[You (Harness Engineer) and the AI (Claude Code / LLM), linked in both directions: prompts, context, and rules flow one way; code output and suggestions flow back. Your instructions reshape the AI; the AI reshapes your thinking. Neither side is purely "controller" or "controlled". Click each stage for the specific two-way shaping mechanism:]

  • You change: the way you think about problems changes
  • You change: the way you communicate changes
  • AI changes: the AI's effective behavior is reshaped by context
  • AI changes: the codebase (Plant) itself changes
  • Emergence: no one planned this outcome

This is where second-order cybernetics gets profound: as you collaborate with AI to write code, you are changing the AI while the AI is changing you — and these two directions of change are coupled, inseparable.

A few second-order phenomena you may have sensed intuitively but never named:

Your thinking has been reshaped by the AI. You start framing problems in ways the AI can easily understand — breaking tasks into smaller chunks, describing requirements in more precise language, writing more explicit interface definitions. This isn’t “compromise”; it’s second-order feedback: the AI’s capability boundaries have redefined what you consider a “good requirement.”

You’ve developed a “prompt dialect.” You know which expression patterns work, you avoid ambiguity unconsciously, your writing becomes more structured and more declarative. The AI has trained you to speak its language.

The AI’s “personality” has been shaped by you. The same model, with or without your CLAUDE.md, behaves completely differently. You’ve changed the Controller’s behavior via rules and context without touching the model weights. In cybernetics, this is called structural coupling — a concept from Humberto Maturana, a key figure in second-order cybernetics.

The codebase is evolving on its own. AI-generated code tends toward consistency and predictability (good naming, clear types), which in turn makes it easier for future AI to understand and modify — the Plant, while being manipulated by the Controller, is becoming more controllable. This wasn’t intentionally designed by any party — it emerges naturally from the bidirectional feedback loop.

Ultimately, the codebase, your thinking, the AI’s effective behaviors, the team’s workflow — all co-evolve into a shape no single participant designed. This is the hallmark of second-order cybernetics: the system self-organizes through bidirectional feedback.

And this is why “Harness Engineering” is a concept worth taking seriously — it isn’t just “controlling the AI.” It’s finding, within a second-order system, the optimal path for humans and AI to co-evolve.
