Halvor William Sanden

Anti-design software

Another piece of software claims to be able to generate web interfaces from graphics. It’s a lie. Such a thing is not possible. Not 25 years ago, not now, not 25 years from now. There are two reasons:

  1. Graphics software doesn’t come close to describing interfaces sufficiently. Because it operates on the premise of the image, its workflow, capabilities and output are inadequate or plain wrong.
  2. The problems the software creates require understanding and reasoning to solve or avoid. Instead, it now turns to probability calculations, which only ensures that more issues make it into the generated interfaces.

It remains to be seen whether delivering the opposite of anything noteworthy or useful will finally turn users away, or whether the vendor manages to spin this too into a cycle of misdirected minimal improvements, its users constantly chasing the next version.

Deficient design; difficult development #

Nothing in the web interface’s nature leads us to design with graphics software. Such applications are in use because they promote several misconceptions:

  • Design is a visual exercise that ends in a visual output. It’s reserved for a special kind of people.
  • Coding is production of said visual in a functional environment. It’s reserved for a different kind of special people.
  • Friction and distance between the two are unavoidable because they’re different worlds with different kinds of special people.
  • Product experience equals field expertise.

I don’t know where or when this thinking originated, but it certainly predates the print workflow of 1997 that people get stuck in.

The solution is to reject the first misconception; then the rest fall. By refusing to legitimise the vendor’s deterministic problem definition, we stop letting their limited abilities define both the interface and us as capable beings.

This is not an interface #

Images are purely visual, even as physical objects. We can do whatever we want and place anything anywhere in the given format. An image is a fixed representation; it’s neither the thing itself nor sufficient instructions on how to make it. Realism and a high level of detail bring it no closer to being the thing itself; they only cement the properties of the simulation.

Despite having a visual side shown on screens, interfaces are unrelated to images. They are neither fixed visual products nor physical objects, and they have no format. They are part of the web platform, made from meaningful elements that enable and require logic and system. The elements are the defining factor, and the dynamic visual properties come with them, from what they are, what they do and how they are organised. The elements can exist and must work without the visuals; the visuals, on the other hand, are practically nothing without the elements.

Workflows of intent and avoidance #

An interface-based workflow starts with needs, from which we make content, flows and functionality. Then we create the structure with the correct elements. After that, any remaining visual and layout work is adjusted. In this workflow, programming is not implementation but design, because it involves intentional decision-making.

A sketch-based workflow also starts with needs and moves on to content and flow. But as it skips to visuals and layout without involving elements in the decisions and output, it breaks with the interface and makes an image instead. By creating pieces without regard to what they are, design is confined to the software’s abilities. The core functionality is to enable people to avoid making certain decisions.

For those who define roles based on software features more than on what they are making, the interface’s functionality, logic and affordances end up in limbo: some unclear, some wrong, others missing. This creates problems down the inefficient assembly line. When the image meets reality, solving or working around those issues takes extra work. If the image is upheld as truth, code becomes a way to replicate a sketch as closely as possible, just to get things working, instead of an expression of intent that would make the interface better in the hands of its users.

From guesswork to divination #

Code export is as poor as it was in the late nineties because it removes human intent – the definition of design – from the process. Resurfacing a quarter of a century later as “generative” is predictably in line with tech floating into the alternative inside its “AI” bubble. The friction-inducing guesswork the software leads to is now to be compensated with pure divination; since people struggle to make something useful from vagueness and nothing, maybe the mystery machine can do it.

The problem remains: Even with the limited ability to imitate some interactivity and define what some pieces are, a sketch has insufficient information in key areas; either choices aren’t made or cannot be communicated.

Structure #

The structural parts of an interface, like headings and landmarks, must be defined by a human. In the simplest, most linear cases, the best software can do is guess based on size, and there’s no guarantee it guesses right. It’s the same reason we don’t put images of text on the web.
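To make this concrete, here is a minimal, hypothetical page skeleton – the content and element choices are assumptions for illustration – showing the kind of structure a sketch can only hint at through size:

```html
<!-- Landmarks and headings carry meaning a sketch cannot guarantee -->
<header>
  <h1>Orders</h1>
  <nav aria-label="Main">…</nav>
</header>
<main>
  <!-- A sketch shows two sizes of text; the markup states which is the
       page heading and which is a section heading -->
  <h2>Open orders</h2>
</main>
```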

Even if one could envision software where HTML, ARIA and CSS were click-based and we could define repeat(auto-fill, minmax(min(100%, 16rem), 1fr)) without writing it, we would still need to know its existence, how the users’ environment comes into play and what the structure requires.

It would be like a text editor where we pick all the words from a pool. We would still need writing skills. But if we never type anything ourselves, it’s not certain we would notice the inefficiency, or the fact that we have to wait for the next version to get new words.

Layout #

An interface has no true or ideal representation; the image will only represent what it can look like for a fraction of people. For at least the last 15 years, layout has required design with code to define dynamic size calculations based on the users’ environments and settings. It’s a platform-specific technique unavailable elsewhere.
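As a sketch of what designing layout with code means here, the repeat() expression quoted earlier derives the number and width of columns from the user’s environment rather than from a fixed canvas. The class name and markup below are assumptions for illustration:

```html
<!-- A fluid grid defined in code: as many 16rem-minimum columns as fit,
     sharing the leftover space – nothing an image can express -->
<style>
  .cards {
    display: grid;
    gap: 1rem;
    grid-template-columns: repeat(auto-fill, minmax(min(100%, 16rem), 1fr));
  }
</style>
<ul class="cards">
  <li>…</li>
  <li>…</li>
</ul>
```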

Style and affordances #

Sketches contain only what we put into them; there are no defaults. The interface, by contrast, doesn’t start blank and empty but is built from elements that come with default affordances and functionality.

Not defining something is an intentional daily choice when designing with code knowledge; an image cannot communicate such decisions. The focus outline, for instance, rarely needs to be changed, so we don’t write code for that. But if it’s not featured in a sketch, do we remove it or let it be?
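As an illustration – the selector and values are assumptions, not a recommendation – writing nothing is one decision and deliberate replacement is another; a sketch can record neither:

```html
<style>
  /* Decision 1: no rule at all – the browser's default focus outline
     stays, a choice made by writing nothing */

  /* Decision 2: replace the default, only acceptable because a visible
     substitute indicator is provided */
  .chip:focus-visible {
    outline: none;
    box-shadow: 0 0 0 2px currentColor;
  }
</style>
```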

The way something looks, how it behaves, and what it is made of are choices best made at the same time.

Function #

A disclosure is an example of a common piece made by putting two native elements together. Even with all the correct visual clues in a sketch, there is no guarantee that someone will build or generate the correct element combination. Sometimes, the most sensible choice is not to use the native elements, but that would require an evaluation and completely different code – something a sketch prevents and a generator cannot perform.
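That element combination, for reference, is the native details and summary pair; the content below is made up for illustration:

```html
<!-- Native disclosure: the summary toggles the rest of the details,
     with keyboard support and state handling built in -->
<details>
  <summary>Delivery options</summary>
  <p>Choose between pickup and home delivery.</p>
</details>
```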

The self-driving car of design #

Tech companies hold us back to lock us in. Their success comes from keeping people from using and developing expertise to make interfaces like interfaces.

They care nothing about workflow and product quality; they don’t enable us to do things we aren’t already able to; they only care that we centre our work around them – matching role descriptions to software specs and storing our work on their computers in their proprietary formats. The disarray caused in products and between people who can and cannot code is a way to dig out a market without inherent reasons to exist.

They make people buy into every new version by replacing quality parameters with their own baseline, redefining reality to fit with the abilities of their products. When the app is trash without regard for accessibility, the next version will be measured on its ability to deliver the tiniest bit of improvement. A smidge better is presented as proof of concept, a sign that unsolvable problems don’t exist and every problem has a tech solution given enough “time and technology” – or rather “money and looting privileges.”

The software’s success in pulling a cover over its users’ eyes while doing things without consent lets the vendor keep promising that generative functionality’s usefulness is right around the corner – for years on end.

By choosing the software, we uphold the underlying problems we think it solves. We can write critiques based on accessibility, but if we try to use it for what it cannot do in the first place, that’s on us. We must ask ourselves whether we are looking for ways not to be the source of an interface’s quality measures, and why we are not that source today.