Where Does Your Design Space Live?

Apr 15, 2026

Two people sit in a coffee shop arguing about AI.

One just built their first website. They've never written a line of code, but their site works exactly how they imagined. They feel like a programmer now.

The other has been shipping iOS apps for fifteen years. They're frustrated because AI keeps hallucinating APIs, breaking architectural patterns, and producing code that looks right but fails in subtle ways. They feel like AI just slows them down.

Both are telling the truth. And neither understands why the other feels so strongly.

§ § §

I am a software engineer, and that term doesn't mean what it used to.

Before, it meant you were a systems thinker who worked with and near code. Now it means different things depending on who you ask. You might be a software architect treating AI as your engineering team, auditing code and piecing modules together. You might be a product manager who uses code as a means to shape experience. You might be someone who has never read a line of code and doesn't need to, because your website works and that's what matters.

All of these people call themselves software engineers now. And they're all having completely different experiences with AI.

The internet is full of hot takes about this. The general feeling is that there's no reason to look at code so much anymore.

I think this is a fool's opinion.

A more accurate take: the amount of code you need to write, audit, and own is directly related to how much of your design space's surface area lives in code.

Every project has a design space. It's the set of decisions that determine whether the thing you're making is good or bad. For some projects, that space is entirely visual. For others, it lives in systems and rules. For others, it's narrative or emotional. The surface area of your design space is how much of that space you need to actively occupy to make the thing work.

Consider three people.

The static website builder is working in a design space that is entirely visual. Fonts, colors, layout, imagery. Whether the CSS is clean or a disaster, the user sees the same thing. The design space is the screen, not the codebase. AI can do most of the work here because the code isn't where the decisions that matter get made.

The iOS engineer building a standard business app is working in a design space that lives partially in code. Navigation, state management, data persistence, App Store compliance. But much of this space has been solved by tooling, frameworks, and established patterns. AI can help significantly here because the abstractions are mature. The design space is about assembling proven pieces correctly.

The iOS puzzle game developer is working in a design space that is code. The systems and rules of the game are codified in the logic. The thing that makes the game good or bad lives in how those rules interact. This person would rather write the vast majority of the game by hand, with soft usage of AI, because they need to maintain full understanding of the system. The code isn't infrastructure. The code is the product.
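To make "the code is the product" concrete, here's a hypothetical sketch (not from any real game) of a 2048-style tile-merge rule. For a puzzle game, lines like these are the design space: change the merge condition and you have a different game, which is why this developer wants to understand every line.

```python
def merge_row(row):
    """Slide tiles left, merging equal neighbors once per move.

    This function IS a game rule. Tweak the condition on the
    merge line and the entire feel of the game changes.
    """
    tiles = [t for t in row if t != 0]  # drop empty cells
    out = []
    i = 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            out.append(tiles[i] * 2)  # the merge rule itself
            i += 2                    # each tile merges at most once
        else:
            out.append(tiles[i])
            i += 1
    return out + [0] * (len(row) - len(out))
```

Note the "merges at most once per move" detail: it isn't infrastructure, it's a design decision that only exists in the logic. That's the sense in which this design space is code.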

All three of these people are right about their experience with AI. They're just working in different design spaces with different surface areas.

Whatever your job is, whatever it is you bring to the table, you should own the design space of what you're building.

If the design space truly lives in code, you should be handwriting that code, or at minimum, deeply understanding every line that gets written. You can use AI to move faster, but you need to be the one holding the map.

Here's the problem: AI creates a workflow where, if you let it go unchecked, the AI starts driving you. You prompt, it generates, you accept, you prompt again. At some point you cross a threshold where no one in the room can reason about the system anymore. The AI contributed most of the code, but you own the mistakes. You own the consequences. There's an asymmetry there that gets dangerous when the stakes are high.

This threshold is the line that defines ownership. Before it, you're directing the AI. After it, you're just along for the ride. Before it, you can claim authorship. After it, you can't.

This touches on a deeper question about whether AI can be an author at all. I'll leave that for another essay, but for now: if you can't reason about the system, you're not the author of it, regardless of whose name is on the commit.

So much of the present confusion about AI and work comes from people comparing incompatible design spaces. The static website builder and the puzzle game developer aren't having the same experience, and they shouldn't expect to.

This is also where the "X job is cooked" takes fall apart.

Software engineering isn't being automated. Art isn't being automated. What's happening is that automation is forcing a question:

Is the design space your job asks you to occupy the same as the design space the company actually needs occupied?

If you're contributing to the core design space of the business, you're fine. If your job is adjacent to that space, doing work that AI can now handle, the ground is shifting beneath you. This has always been true of technological change. The difference now is the speed.

The language we use to describe this moment isn't helping. "Vibe coding" gets thrown around a lot, but it's not a serious term. It implies you're programming without understanding, just prompting and hoping. That's not what most people are actually doing.

Think about writing. Nobody calls a novelist a "typist." Typing is the mechanical act. Writing is the creative one. The same distinction exists in software: coding is mechanical, programming is creative. You can outsource the typing without outsourcing the writing. You can outsource the coding without outsourcing the programming.

What's emerged is a new layer: using AI to handle the mechanical parts so you can focus on the design space. I'd call this directed programming. You're directing. The AI is executing. You know where you're going. You're still the one driving.

Directed programming is sustainable as long as you stay on your side of the threshold. The moment you lose the ability to reason about the system, you're not directing anymore. You're just prompting.

There's one more conversation happening that I've intentionally left out of this essay.

Some people argue that the future of work is building systems where agents do everything and you focus entirely on judging the quality of their output. You don't direct. You evaluate.

I think this argument deserves serious consideration, but it's a different conversation. Everything I've said up to this point assumes a human in the loop, using AI as a tool. The agent model assumes AI runs autonomously and humans assess outcomes.

That model creates a vacuum. If the agent is doing the work and you're only judging, who owns the result? Who's responsible for the consequences? Where does authorship live?

I'd argue this leads to a fundamental split: AI operates in the design space of what. Humans operate in the design space of why. When AI handles all of the what, the only thing left for humans is the why. The intent. The reason it exists.

But that's a different essay about two different classes of creation. For now, the question is simpler: where does your design space live? And are you the one driving?

Both people in that coffee shop are telling the truth. They're just building different things.

The real question isn't whether AI is good at code. The real question is whether you know where your design space lives, and whether you're still the one directing what happens inside it.