Monday, January 5, 2026

Why AI Finds Programming Far More Challenging Than Driving: Insights into Automation's Complexities

Because programming is open-ended symbolic control, while driving is bounded perceptual control.
That difference matters enormously for AI.

Most confusion comes from comparing surface difficulty instead of control structure.

The one-line answer

Driving is hard because the world is messy.
Programming is hard because the rules are abstract, recursive, and unbounded.

AI is better at the first than the second.

1. Driving lives in a closed feedback loop

Driving is:

  • Continuous
  • Reactive
  • Local in time and space
  • Governed by physics

At any moment, the AI needs to answer:

“Given what I see now, what action reduces risk in the next few seconds?”

Mistakes are:

  • Immediately observable
  • Correctable via feedback (brake, steer, slow down)

Even rare edge cases still live inside the same sensory-action loop.
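
To make the loop concrete, here is a deliberately minimal sketch. Every field name, threshold, and gain is invented for illustration; a real driving stack replaces `control_step` with sensor fusion and planning, but the shape of the loop is the point:

```python
import time

def control_step(observation: dict) -> dict:
    # "Given what I see now, what action reduces risk in the next few
    # seconds?" Answered from local, current information only.
    if observation["obstacle_distance_m"] < 10.0:
        return {"brake": 0.8, "steer": 0.0}
    # Otherwise, steer gently back toward the lane center.
    return {"brake": 0.0, "steer": -0.5 * observation["lane_offset_m"]}

def drive(sense, act, hz: float = 20.0):
    # The whole task is one tight sense -> act loop: every mistake shows
    # up in the very next observation and can be corrected one tick later.
    while True:
        act(control_step(sense()))
        time.sleep(1.0 / hz)
```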

2. Programming lives in an open-ended search space

Programming is:

  • Discrete
  • Symbolic
  • Long-horizon
  • Compositional

A single change can:

  • Affect behavior hours later
  • Break systems you can’t observe directly
  • Introduce silent failures
  • Violate implicit contracts

There is no immediate feedback loop that says, “this is wrong.”

Compilation succeeding tells you almost nothing.
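
A toy example of how little that signal carries. The timestamp contract here is invented, but the failure mode is a classic one:

```python
from datetime import datetime

def log_event(name: str, when: datetime) -> str:
    # Compiles, type-checks, runs, and looks reasonable in review. It also
    # silently assumes `when` is UTC: a caller passing datetime.now()
    # (local wall-clock time) produces timestamps that are wrong by hours,
    # discovered much later when someone correlates logs across regions.
    return f"{when.isoformat()}Z {name}"

print(log_event("deploy", datetime.now()))  # no error, just bad data
```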

3. Programming requires global invariant preservation

A program must simultaneously satisfy:

  • Functional correctness
  • Performance constraints
  • Security properties
  • Backward compatibility
  • Operational assumptions
  • Organizational contracts (“other teams rely on this”)

These constraints:

  • Are mostly undocumented
  • Are often contradictory
  • Become visible only through failures

Driving doesn’t have this kind of invisible global constraint system.
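
A toy example of one such invisible contract; the helper and its callers are hypothetical:

```python
def dedupe(items: list[str]) -> list[str]:
    # Preserves first-occurrence order (dict keys are insertion-ordered).
    # Nothing documents that ordering, but downstream pagination and
    # cache keys quietly depend on it.
    return list(dict.fromkeys(items))

def dedupe_rewritten(items: list[str]) -> list[str]:
    # An equally plausible rewrite that still "removes duplicates", yet
    # silently breaks the ordering invariant. No test fails unless
    # someone thought to encode the contract as a test.
    return list(set(items))

assert dedupe(["b", "a", "b"]) == ["b", "a"]  # the invariant, made explicit
```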

4. Driving tolerates approximation; programming often doesn’t

In driving:

  • “Close enough” steering is fine
  • Minor errors self-correct
  • Noise averages out

In programming:

  • One wrong branch condition
  • One missing edge case
  • One misunderstood invariant

→ total system failure

Software has cliff-edge correctness.
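
A one-character example of that cliff; the billing rule is invented:

```python
FREE_TIER_LIMIT = 1000

def in_free_tier(requests: int) -> bool:
    return requests <= FREE_TIER_LIMIT   # intended contract

def in_free_tier_buggy(requests: int) -> bool:
    return requests < FREE_TIER_LIMIT    # one character off

# Spot checks at 999 and 1001 agree for both versions. Only the customer
# sitting exactly at 1000 is billed incorrectly: at a boundary there is
# no "close enough", only right or wrong.
assert in_free_tier(1000) != in_free_tier_buggy(1000)
```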

5. The evaluation signal is weak and delayed

Driving provides:

  • Dense reward signals (collision risk, lane deviation)
  • Continuous correction
  • Natural metrics

Programming provides:

  • Sparse rewards (“tests passed”)
  • Delayed failure (sometimes weeks later)
  • Ambiguous blame

AI systems struggle when the reward signal is:

  • Sparse
  • Noisy
  • Non-local

That’s exactly programming.
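
The contrast in signal density can be written down almost literally. This is a schematic sketch, not a real training setup:

```python
# Driving-like signal: graded, immediate, attributable to the last action.
def lane_keeping_feedback(lane_offset_m: float) -> float:
    return -abs(lane_offset_m)

# Programming-like signal: one bit, delivered long after the decisions
# that caused it, with no hint of which change is to blame.
def ci_feedback(all_tests_passed: bool) -> float:
    return 1.0 if all_tests_passed else 0.0
```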

6. Programming is meta-work

Driving is doing a task.

Programming is:

  • Designing a machine that does tasks
  • Encoding future behavior
  • Anticipating unknown use cases
  • Reasoning about other humans’ assumptions

It’s second-order control.

AI can imitate patterns of code.
But deciding which patterns are valid in this system is architectural reasoning.

7. Distribution shift is far harsher in software

Roads vary, but:

  • Physics stays the same
  • Human behavior is statistically stable

Software environments:

  • Change APIs
  • Change data shapes
  • Change scale
  • Change ownership
  • Change threat models

The “world” the code runs in mutates faster than roads ever will.
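
A small sketch of what that mutation looks like from the code's point of view; the payloads are hypothetical:

```python
def total_cents(order: dict) -> int:
    # Written against last year's payload: {"amount_cents": 1999}.
    return order.get("amount_cents", 0)

# An upstream team ships {"amount": {"cents": 1999}} instead. Nothing
# crashes: .get() quietly returns the default, and every order now totals
# zero, noticed only when a weekly revenue report looks wrong.
print(total_cents({"amount": {"cents": 1999}}))  # -> 0
```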

The key asymmetry

Self-driving cars solve:

“What action should I take now?”

Programming solves:

“What symbolic structure will remain correct across all future executions I can’t observe?”

That’s a much harder problem.

Why this matters for AI automation

AI excels at:

  • Perception
  • Pattern completion
  • Local optimization

Programming demands:

  • Invariant reasoning
  • Long-horizon planning
  • Counterfactual thinking
  • Responsibility for irreversible change

That’s why:

  • AI will automate coding tasks
  • But not fully automate software creation

And why skilled architects and senior engineers remain the bottleneck.

Bottom line

Programming isn’t harder than driving because it’s more complicated.

It’s harder because:

  • The feedback loop is weak
  • The state space is unbounded
  • The constraints are invisible
  • The failures are silent and global

Until AI can reason about invariants under uncertainty, driving will remain the easier problem.

Sunday, January 4, 2026

Some unexpected effects of using AI in software engineering


AI in software engineering hasn’t just made teams faster. It has changed behavior, incentives, and failure modes in ways most orgs didn’t anticipate. Some of these effects are beneficial; many are subtle and dangerous if unacknowledged.

Here are the unexpected ones that actually matter.

1. Code quality variance increases, not decreases

AI raises average productivity but also widens the spread between good and bad outcomes.

  • Good engineers use AI to explore, refactor, and reason
  • Weak engineers use AI to ship code they don’t understand

Result:

  • More code gets written
  • Less code is truly owned
  • Debugging costs shift downstream

Velocity goes up. Maintainability becomes bimodal.

2. Architectural debt accelerates faster than technical debt

AI is excellent at:

  • Local correctness
  • Pattern completion
  • Incremental changes

It is bad at:

  • Global coherence
  • Long-term architectural intent
  • Saying “don’t build this at all”

Teams discover later that:

  • Interfaces multiplied
  • Invariants drifted
  • Systems “work” but feel brittle

AI doesn’t resist bad architecture. Humans must.
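
A toy illustration of how interfaces multiply; the file paths and helpers are invented for the example:

```python
# orders/utils.py, generated in March. Locally correct:
def format_money(cents: int) -> str:
    return f"${cents / 100:.2f}"

# billing/helpers.py, generated in June. Also locally correct, and subtly
# divergent: round() drops trailing zeros, so 1990 cents renders as
# "$19.9" here but "$19.90" above. Two interfaces, one drifting invariant.
def money_to_str(amount_cents: int) -> str:
    return "$" + str(round(amount_cents / 100, 2))
```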

3. Junior engineers skip the struggle phase and pay later

AI short-circuits:

  • Syntax errors
  • Boilerplate learning
  • Trial-and-error discovery

This feels great until engineers face:

  • Production incidents
  • Non-obvious race conditions
  • Emergent system behavior

The missing piece isn’t knowledge; it’s intuition built through friction.

Without deliberate training design, AI produces engineers who can assemble systems but can’t reason about them under stress.

4. Review culture collapses unless explicitly redesigned

Traditional code review assumed:

  • Humans wrote the code
  • Reviewers could infer intent
  • Mistakes were personal, not systemic

With AI:

  • Intent is unclear
  • Code looks “reasonable” even when wrong
  • Reviewers hesitate to challenge the generated output

Many teams experience:

  • Rubber-stamp approvals
  • Superficial stylistic feedback
  • Deep logic errors slipping through

Code review must shift from syntax policing to assumption and invariant checking.
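
One concrete form that shift can take is property-based testing, where the review artifact is the invariant rather than the diff. A sketch using the Hypothesis library, with a hypothetical generated helper:

```python
from hypothesis import given, strategies as st

def apply_discount(price_cents: int, pct: int) -> int:
    # Hypothetical AI-generated helper under review.
    return price_cents - price_cents * pct // 100

@given(st.integers(min_value=0, max_value=10**9),
       st.integers(min_value=0, max_value=100))
def test_discount_invariants(price_cents, pct):
    result = apply_discount(price_cents, pct)
    # The reviewer pins down the assumptions the code must satisfy,
    # whoever (or whatever) wrote it: a discount never yields a negative
    # price and never marks the price up.
    assert 0 <= result <= price_cents
```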

5. Documentation paradox: more code, less explanation

AI generates:

  • Code faster than humans can explain
  • Implementations without rationale

Unless enforced, teams end up with:

  • Working systems
  • No record of why decisions were made
  • Fragile onboarding and change processes

Ironically, AI increases the value of human-written design docs, but teams often produce fewer of them.

6. Debugging becomes harder even as coding gets easier

AI handles happy paths well.

But when things break:

  • The codebase is larger
  • Fewer people understand it end-to-end
  • Errors span generated and human-written logic

Engineers report:

  • Longer time-to-root-cause
  • More “I didn’t write this” moments
  • Higher cognitive load during incidents

The work shifts from writing code to interpreting behavior.

7. Engineers optimize for prompting skill, not system understanding

Unexpected career effect:

  • Some engineers become prompt specialists
  • Others deepen system intuition

The risk:

  • Prompt fluency can mask shallow understanding
  • Teams reward speed over comprehension

Over time, this creates fragile organizations that are fast in normal times and slow during crises.

8. Organizational bottlenecks move, not disappear

AI removes coding as a constraint.

New bottlenecks appear in:

  • Requirements clarity
  • Decision-making
  • Testing strategy
  • Deployment and ownership

Teams discover the uncomfortable truth:

Coding was never the hard part.

AI exposes organizational dysfunction faster than it fixes it.

9. “Good enough” becomes the default, and excellence becomes rarer

Because AI produces plausible solutions quickly:

  • Teams stop pushing for elegance
  • Refactoring gets deprioritized
  • “It works” beats “it’s right”

Excellence now requires intentional resistance to convenience.

The meta-effect (this is the real one)

AI doesn’t replace engineering skill.
It amplifies whatever skill, or lack of skill, already exists.

In strong teams, AI compounds leverage.
In weak teams, it compounds chaos.

Practical takeaway for leaders and senior engineers

If you don’t explicitly redesign:

  • Training
  • Review standards
  • Ownership models
  • Architectural governance

AI will quietly degrade your engineering culture while making you feel productive.

Used well, AI turns engineers into system thinkers.
Used lazily, it turns teams into code factories with no intuition.