Working alongside AI Agents

My work at Shopify is not yet released and currently confidential. Until I can speak freely about it, this post outlines the UX thinking underneath it: creating software with agents in mind, drawing on my experience building and designing with these tools.

 

Imagine your company builds a tool involving complex workflows. It could be:

  • Reporting the health of a fleet of devices

  • Managing advertising campaigns

  • Auditing repositories of source code across an organization

These kinds of tools are typically bespoke, professional-grade, software-as-a-service systems. There's an expectation that their users have plenty of domain experience and understand the workflows needed to accomplish their tasks. They also understand that the learning curve will be higher than it would be for a consumer application. After all, you can’t build multiple versions for novices and experts, right? (More on this in a bit.)

The emergence of agentive design means these tools will fundamentally change. I believe they will become more accessible (in terms of broadening their audience) and more powerful – in the sense that they are able to combine workflows and create bespoke interfaces to do new things, or to do the same things in new ways.

Let’s put it in the context of 2025: I can open an IDE and, by issuing a few prompts, create a piece of software. It may not be the best piece of software in the world, but it will be functional. The workflow starts to feel limited only by what I can think up and the time I’m willing to spend debugging the output.

Let's think through what it would mean to apply this way of building to an ad buying workflow.

From imperative to declarative

Our software interfaces today are largely imperative: we give the user a button, a form, or an input, and they take action on those things. Agentive design allows us to move to a declarative stance, where the user expresses to the system what they want to do. They can do this in any number of modalities – voice, text, natural language, or traditional UI.
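To make the contrast concrete, here is a minimal sketch of the same task in each stance. The client and its methods are hypothetical, invented for illustration – not a real API:

```python
class AdClient:
    """Hypothetical SaaS client, used only to illustrate the two stances."""

    def __init__(self):
        self.actions = []

    # Imperative surface: one method per UI action, sequenced by the user.
    def create_report_template(self, name):
        self.actions.append(("template", name))

    def set_markets(self, markets):
        self.actions.append(("markets", markets))

    # Declarative surface: a single entry point; the system plans the steps.
    def request(self, intent):
        self.actions.append(("intent", intent))


# Imperative: the user drives every step.
imperative = AdClient()
imperative.create_report_template("Q3 buy")
imperative.set_markets(["Michigan", "Illinois", "Indiana"])

# Declarative: the user states the outcome.
declarative = AdClient()
declarative.request("Buy ads in the Michigan, Illinois and Indiana metros.")
```

The point is the shape of the interaction, not the methods themselves: the imperative surface grows one affordance per task, while the declarative one stays a single entry point.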

Our user may start off in the imperative interface, but they have multiple ways to transition into the new, declarative stance. One is a sidebar the user can toggle over to at any point and type a request to the system. Another is an entry point that appears as a contextual suggestion when the system predicts they are stuck. We could also offer a library of starting points from which the user can pick and, with a single click, be on their way.

This can be really helpful, of course, for the stumbling blocks you see in traditional professional applications, where users know what they want but not how to make it happen.

Beyond that, it opens up sophisticated workflows to totally new audiences. Imagine a new joiner only a few days in – they don’t know what they don’t know, but with prompting and UIs that compose on the fly, they could have successful interactions early on. The same software system could have infinite “versions,” each tailored to a user’s level of comfort.

The declarative workflow

So how would this all work for our hypothetical ad campaign manager? The idea is that instead of…

  • clicking New Report Template and

  • filling in the names of the markets they want to buy from and

  • then filling in the creative

…they express to the system something like “Buy $35,000 for Ford in Michigan, Illinois and Indiana metros. Use new Mustang creative.” At that point the system can take over: it understands what the user is saying, breaks it down into a series of requirements, and then implements those requirements in the system.

First, a product manager agent lights up and writes out on the screen its understanding of the user’s goals – now we have a requirements document. It’s all adjustable, so the user can click on a plus button and add the markets in Ohio, maybe.
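One way to picture that requirements document is as a structured object the product manager agent fills in and the user can tweak. A sketch with hypothetical field names (in reality the parsing would be a model call):

```python
from dataclasses import dataclass


@dataclass
class CampaignRequirements:
    """What a PM agent might distill from the free-text prompt."""
    advertiser: str
    budget_usd: int
    markets: list
    creative: str

    def add_market(self, market):
        # The "plus button": the user adjusts the parsed requirements
        # before anything is built or spent.
        if market not in self.markets:
            self.markets.append(market)


# Parsed from: "Buy $35,000 for Ford in Michigan, Illinois and
# Indiana metros. Use new Mustang creative."
req = CampaignRequirements(
    advertiser="Ford",
    budget_usd=35_000,
    markets=["Michigan", "Illinois", "Indiana"],
    creative="new Mustang creative",
)
req.add_market("Ohio")
```

Because the requirements are plain, inspectable data, every later agent works from the same adjustable source of truth.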

As the requirements settle, the designer agent arrives. The front end of our SaaS product is built from a design system containing the components, styles, and UI pieces we need, so the agent can pull from it to create a new, purpose-built UI.

Finally, the engineer agent takes in all of the above and references the public API and internal docs to build out what is essentially a third-party app inside the SaaS system. It’s sandboxed and secure so it can’t edit outside its scope.
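Stitched together, the three agents form a pipeline from prompt to sandboxed app. A toy sketch with the agent internals stubbed out – the component names and return shapes are invented, and each stub stands in for a model call:

```python
def product_manager_agent(prompt):
    # Prompt -> requirements doc (stubbed; really an LLM call).
    return {"goal": prompt, "requirements": ["markets", "budget", "creative"]}


def designer_agent(requirements):
    # Requirements -> UI spec assembled from the design system's components.
    return {"components": ["MarketPicker", "BudgetField", "CreativeUpload"],
            "for": requirements["goal"]}


def engineer_agent(ui_spec):
    # UI spec + public API docs -> a sandboxed, third-party-style app.
    return {"app": ui_spec["components"], "sandboxed": True}


def build_ephemeral_app(prompt):
    requirements = product_manager_agent(prompt)
    ui_spec = designer_agent(requirements)
    return engineer_agent(ui_spec)


app = build_ephemeral_app("Buy $35,000 for Ford in three midwest metros.")
```

Each stage only consumes the previous stage's output, which is what makes the steps independently inspectable in an audit log.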

Ephemeral apps

User declarations can evolve into lightweight, bespoke apps within apps.

This all happens within sixty seconds: from idea to prompt to app. All agent activity is visible in an audit log, so our user can inspect each step and get a bit of text describing why the AI did what it did. If there are any irreversible changes (like deleting a cost center or spending that thirty-five grand), our agent will pause its workflow and ask for confirmation. All of this builds trust for our user that they can explore the interface without fear of unintended consequences.
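The audit-log-plus-confirmation behavior might look something like this sketch, where every step is logged with its rationale and irreversible steps block until the user approves (all names here are illustrative):

```python
def run_step(step, audit_log, confirm):
    """Log every agent action; pause on irreversible ones.

    `confirm` is whatever surfaces the question to the user -
    here just a callable returning True or False."""
    audit_log.append({"action": step["action"], "why": step["why"]})
    if step.get("irreversible") and not confirm(step["action"]):
        audit_log.append({"action": step["action"], "why": "user declined"})
        return False
    return True


log = []
steps = [
    {"action": "draft campaign", "why": "matches parsed requirements"},
    {"action": "spend $35,000", "why": "user's stated budget",
     "irreversible": True},
]
# Simulate a user who approves everything except the spend.
approved = [run_step(s, log, confirm=lambda a: a != "spend $35,000")
            for s in steps]
```

Reversible steps run straight through; the spend halts and records the user's decision, so the log tells the whole story either way.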

And once the ad buy is complete, there is no need to keep this little app around. In the time it takes for our user to navigate to it and modify it, they can just as easily generate a new one with whichever new specifics needed. The app’s performance is measured and logged; its code becomes another data point for the agents to use the next time around.

Next

Using AI to build UX Tools