
Vibe coding prototypes

In this post I go through some thoughts I have after reading articles about how AI enables prototyping for designers, and after following the general discussion.

Vibe is debt

I think vibe coding is essentially non-strategic debt, with an obligation to pay it back later by understanding the generated content. It is a form of "technical debt" or "design debt", depending on the activity.

The AI black box effectively destroys the capability to rationalise the intended behavior and path via internalised mental models. The prediction engine generates code that did not come together through iterative mental processing and deep thinking about the problem. Once the details (e.g. the actual lines of code) are trivialised, there is no active model of how the thing works. The capability to reason erodes.

Predictions say the AI bubble is about to burst. Maybe it will, maybe it won't. Either way, the consequences of using it are already in effect. Some designers and developers appear to be betting on AI in their work already, rebasing their thinking, toolchains and activities on top of it.

That might be the wrong bet to make.

I have written earlier about what I think the AI hype misses on prototyping and proof-of-concepts, wherein I argued that some vibe coding proponents have a fundamental misconception of the activity of prototyping. In this post I will go through the issues of designers using AI to prototype their ideas.

Being stuck in static screens is a thinking problem

In the article "Vibe coding makes prototyping close to code, closer to users" the authors describe how designers are enabled – through vibe coding – to move beyond Figma mockups towards outcomes that reflect the complexity of real-world use. According to the authors, the previous process focused on how things look; the new one is more about behavior and experiences.

I believe this underestimates both what designers can do and what design is meant to be.

It is unfortunate if a designer is stuck with static screens and needs AI to move further. This is beyond the old debate about whether designers should code. Designers should understand the value of their work and the purpose of design itself. A designer who really wants to pursue an idea further will figure out a way to move beyond Figma screens, regardless of their programming abilities: via creative use of tools, presentational tricks or old-fashioned collaboration.

Through exercising their imagination and improving their skills, designers have been able to move beyond the constraints of their tools. Relying on AI as a substitute for this ability counters that very principle. When a designer relies on AI as the way to make progress, it signals a limitation in mindset, not enablement by technology. Skills still matter, and things like programming are worth learning regardless of next month's advancements in language models.

Epiphanies about what AI can do for you should not replace your pursuit of learning new skills, gaining knowledge, and using your own thinking.

Role of the tool

"But wait, what if AI is considered to be just another tool in the designer toolbelt? Isn't this exactly the case of creative tool usage?"

Traditional tools extend human capability in transparent, predictable and controllable ways. AI is not a traditional tool in that sense, nor a neutral instrument: it generates content or decisions you did not explicitly make step by step. Craft requires precision and purposeful instrumentation, but AI is vague and has its own unknown tendencies. For that reason I argue AI should not have an essential role in the design workflow, because it gives away too much. This applies at least to the current incarnation of commercial AI: fact-checking and overall agentic capabilities would need to improve drastically before AI can adhere to user intentions accurately.

The risks of the tool can be managed when the designer has the skill to prototype in the intended medium without relying on AI. Celebrating AI without understanding its output is misguided at best and dangerous at worst. It inverts the learning process, placing hope and convenience above responsibility, craft and agency.

Handover

We are told a dichotomy exists between designers and developers: they are separated by skill and medium, and the handover problem is real. The message goes that the challenge has previously been communicating visual details into code, and that AI has now come to bridge the gap and make meaningful collaboration possible.

Let's imagine an AI-designer prototypes an idea that has the characteristics of a real thing, but cannot communicate the output and its nuances. In this case the designer is essentially, and unknowingly, obfuscating implicit behavior. As an example, if a designer does not fully understand a component's states in a web context and generates a prototype that contains conditional logic, how does the designer communicate to the developers the intention behind the behavior? To a developer it looks quite defined, because it is literally there in the page source, and a large set of assumptions is made instantly. How is this ticking time-bomb defused in a controlled manner? Through handovers via changing ticket assignees? By passing links to blackbox-generated prototypes around in chats?
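To make this concrete, here is a hypothetical sketch (in React-style TypeScript; the component and prop names are my own invention, not taken from any real prototype) of the kind of conditional logic a generated prototype might contain:

```tsx
// Hypothetical AI-generated fragment. To a developer reading the source,
// every branch below reads like a deliberate specification of UI states.
import React from "react";

type Props = { isPending: boolean; hasError: boolean };

export function SubmitButton({ isPending, hasError }: Props) {
  // Was "retry on error" a design decision, or an artifact of generation?
  if (hasError) {
    return <button className="btn-error">Retry</button>;
  }
  // Disabled-while-pending looks intentional, but the designer may never
  // have considered the pending state at all.
  return <button disabled={isPending}>{isPending ? "Saving…" : "Save"}</button>;
}
```

A designer who did not author these branches cannot say which states are part of the intent and which are incidental, yet to the developer they all look equally specified.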

It makes sense to add descriptions, disclaimers and annotations to the generated content in order to highlight or warn that what might appear intentional is actually AI-generated. The problem is: how can the designer point out those areas if they lack a deeper understanding of what was generated in the first place?

When a developer tries to understand the intention and questions decisions, and the AI-designer tries to explain them, the true intention traces back to the AI and the language model's behavior. This might sound obvious and trivial, but in the context of getting the last 5-20% done properly it can become a serious payback moment. The designer might have implicitly communicated the wrong things, things that were never meant to be there. These AI prototypes might make implicit statements on things like client-side routing behavior, loading fonts from third parties, inline scripts that break CSP, odd component structures, or other aspects of the underlying technology. Naturally, such issues can appear in non-AI prototypes as well. The key difference is that a designer with intention can clearly communicate the delta to the ‘to-be’ state, providing the right context and framing.
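As a hedged illustration (the markup below is invented for this post, not taken from any real prototype), a generated page shell can smuggle in exactly these kinds of implicit statements:

```tsx
// Hypothetical AI-generated document shell. Each line makes an implicit
// technical decision that a developer may read as intentional.
import React from "react";

export function PrototypeShell() {
  return (
    <html lang="en">
      <head>
        {/* Implies a runtime dependency on a third-party font host */}
        <link rel="stylesheet" href="https://fonts.example.com/inter.css" />
        {/* Inline script: breaks a typical Content-Security-Policy
            unless 'unsafe-inline' is allowed */}
        <script dangerouslySetInnerHTML={{ __html: "window.appState = {};" }} />
      </head>
      <body>
        {/* A hash-based link quietly implies client-side routing */}
        <a href="#/checkout">Checkout</a>
      </body>
    </html>
  );
}
```

None of these lines were decisions anyone made; they are defaults of the generator, yet each could survive a naive handover as if it were a requirement.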

Understanding side effects and implications is critical, whether they are intentional or not. In real life, communicating known details from human to human is already difficult. Adding AI to the mix doesn’t make it more efficient or clearer; it makes it exponentially harder. Filling knowledge gaps with AI slop only makes the situation worse.

Language barrier

It might seem that AI prototyping alone aligns designers and developers and that they start to talk the same language, but in reality they do not. In fact, I don't think AI should be used as any sort of middle-ground builder. Basing any sort of serious understanding on AI is risky and has compounding negative effects.

Using a translator application, or AI, might make it seem like I know Swedish. I don’t. Would you want me to draft your legal agreement in Swedish? The analogy extends further. I can add disclaimers like ‘just an idea,’ ‘first version,’ or ‘concept’, but that still places unnecessary weight on the communication. The machine-generated version of the document appears to be more than it is. Things are always lost in translation. Who is going to own the difference between the intention and what was implied? Statements that highlight prototypes as a better surface for feedback and conversation hold true only if the surface itself is actually understood.

There is no way to hack around this problem by adding more AI to the mix. Building understanding between two parties requires effort from the parties themselves.

It is also quite telling that the discourse around AI-designer prototypes does not account for the developer experience. Do developers want to work with hacky prototypes generated from inconsistent prompts? Maybe they do, but it might be good practice to ask them first. The profession that takes pride in being expert on user experiences might sometimes miss the bigger picture, including the experiences of collaborators, the purpose of the selected tools and, most importantly, the value of one's own thinking.

A truly collaborative way of working has absolutely nothing to do with AI artificially bringing people closer. It is talking, listening and working together, as before. Placing AI on a pedestal as a bridge builder and collaboration enabler is a bad tradeoff, given that people want to be in control of their own thinking, skills, career and decision making.