
Growth Design
The Algorithm Finally Learned to Listen
Designers who know how to build for what users say they want, not just what they do, will shape what comes next.
by Sukari Keetin
I started this blog and newsletter because I kept coming across ideas I didn’t see anywhere else. These include thoughts on design in the AI age, building systems for audiences that aren’t always human, and what designers and creative leaders should be thinking about today.
So, to set the stage for this conversation, the first issue begins in an unexpected place: a small feature on Threads that quietly hints at a much bigger change.
Threads now has a feature called Dear Algo
To use it, you make a public post that starts with “Dear Algo” and add your request. For example, you might ask to see more about independent coffee shops or less about a show you haven’t started. Your feed changes for three days, then goes back to normal.
That’s all there is to it.
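Mechanically, you can picture the whole feature as a short-lived preference record. Here’s a minimal TypeScript sketch of how such a request might be modeled; every name and rule in it is my assumption, not anything Threads has published:

```typescript
// Hypothetical sketch only: none of these names or rules come from Threads.

type Direction = "more" | "less";

interface FeedRequest {
  userId: string;
  direction: Direction; // ask for more or less of something
  topic: string;        // e.g. "independent coffee shops"
  expiresAt: Date;      // the adjustment lapses after three days
}

const THREE_DAYS_MS = 3 * 24 * 60 * 60 * 1000;

function parseDearAlgo(userId: string, post: string): FeedRequest | null {
  // Only posts that open with the trigger phrase count.
  const text = post.trim();
  if (!text.toLowerCase().startsWith("dear algo")) return null;

  const body = text.slice("dear algo".length).replace(/^[,:.\s]+/, "");
  const direction: Direction = /\b(less|fewer|stop)\b/i.test(body)
    ? "less"
    : "more";

  return {
    userId,
    direction,
    topic: body, // a real system would extract topics with a model
    expiresAt: new Date(Date.now() + THREE_DAYS_MS),
  };
}
```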
Even so, this feature felt different from most launches. It wasn’t about technical achievement. For the first time, a major platform seemed to consider simply asking users what they want.
The silent bargain
For years, digital products have had an unspoken agreement with us.
You use the product. It watches what you do. Somewhere, a model updates, and what you see next changes, showing you more of whatever kept you on the screen a bit longer before.
Nobody spelled this out or asked if you agreed. You just scrolled, the system learned, and the feed became sharper, stranger, and harder to put down.
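Stripped of the mystique, the watching half of that bargain is just passive event logging. A minimal sketch of the kind of signal that piles up, with every field name an assumption of mine:

```typescript
// Hypothetical sketch: the kind of implicit signal a feed learns from.
// The user never writes any of this down; it's inferred from behavior alone.

interface ImplicitSignal {
  userId: string;
  itemId: string;
  dwellMs: number;    // how long the item stayed on screen
  completed: boolean; // e.g. a video watched to the end
  loggedAt: Date;
}

// No consent prompt, no explanation: signals simply accumulate,
// and somewhere downstream a model updates on them.
function logImpression(buffer: ImplicitSignal[], signal: ImplicitSignal): void {
  buffer.push(signal);
}
```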
The logic made sense, in a way. Behavioral signals are everywhere and easy to collect. They’re honest in a limited sense: people don’t lie with their thumbs the way they might on a survey, and watch time isn’t filtered through concern for public image.
But the model had a built-in problem from the start: it was one-sided.
The system knew everything, and the user knew nothing. You couldn’t see or change the signal. You couldn’t tell the feed you only watched that video at 2 am because you were in a strange mood. It didn’t reflect who you are or what you care about.
The algorithm was never broken. It just couldn’t listen, and you couldn’t talk back.
What changes when intent becomes explicit
Dear Algo is a small crack in that model, but the effects could be significant.
When a system asks what you want instead of guessing from your clicks, a few things change right away.
Signal quality gets better. Behavioral data is messy. A single explicit statement of what a user wants can be worth more than a thousand passive scrolls, and the errors that accumulate in recommendation systems, the ones that slowly drag your feed somewhere you never asked it to go, can finally be corrected.
The feedback loop gets shorter. When users can talk to the system and see a clear response, they can tell if it worked. This creates a very different relationship from the usual black box. It feels more like working together than being watched.
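To make both points concrete: an explicit request can enter the ranker as a first-class term instead of one more ambiguous click. Here’s a minimal TypeScript sketch; the shapes, weights, and exact-match topic logic are all illustrative assumptions of mine, not anything a platform has disclosed:

```typescript
// Hypothetical sketch: folding an explicit request into a behavioral score.
// The 0.6/0.4 split and the boost values are illustrative assumptions.

interface ActiveRequest {
  direction: "more" | "less";
  topic: string;
  expiresAt: Date;
}

interface Candidate {
  itemId: string;
  behavioralScore: number; // inferred from clicks, watch time, etc.
  topics: string[];
}

function rank(candidate: Candidate, requests: ActiveRequest[]): number {
  const now = Date.now();
  let intentBoost = 0;

  for (const req of requests) {
    if (req.expiresAt.getTime() < now) continue; // lapsed requests fall away
    // Naive exact match; a real system would use a topic model.
    if (!candidate.topics.includes(req.topic)) continue;
    // One explicit "more" or "less" outweighs a lot of behavioral noise.
    intentBoost += req.direction === "more" ? 1 : -1;
  }

  // While a request is active, stated intent gets a heavy hand.
  return 0.6 * candidate.behavioralScore + 0.4 * intentBoost;
}
```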
But here’s the harder part: trust has to come before people will take part.
If users distrust the system or fear their input will be misused, they won’t engage. This conversational model only works if people feel safe and believe their participation matters.
Trust isn’t just a nice-to-have—it’s the foundation.
The Growth Design Implication
From a growth design systems perspective, what stands out to me is how much this changes the design problem.
For years, we’ve built better inference engines, smarter models, richer behavioral data, and tighter feedback loops between what users do and what we show them next. The main goal has been attention: how do we keep people engaged long enough to learn what they want?
Conversational algorithms raise a different question: what does it mean to design for honesty instead of just attention?
That’s a big change. It affects what you build, what you measure, and which skills matter. Preference scaffolding, or designing ways to help users say what they want even if they’re unsure, becomes a key skill. Feedback visibility—ensuring users can see their input and understand its impact—builds the kind of trust that keeps people coming back. And trust, often seen as a soft metric, becomes the standard for everything else.
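What might feedback visibility look like in practice? Perhaps as little as letting users read back the preference record the system is acting on. A hypothetical TypeScript sketch, with every name an assumption of mine:

```typescript
// Hypothetical sketch: feedback visibility means the user can read back
// exactly what the system heard and when it stops applying.

interface StoredPreference {
  topic: string;
  direction: "more" | "less";
  expiresAt: Date;
  itemsAffected: number; // feed slots this preference has touched so far
}

function describePreference(pref: StoredPreference): string {
  const msLeft = pref.expiresAt.getTime() - Date.now();
  const daysLeft = Math.max(0, Math.ceil(msLeft / (24 * 60 * 60 * 1000)));
  return (
    `Showing ${pref.direction} about "${pref.topic}": ` +
    `${pref.itemsAffected} posts adjusted, ${daysLeft} day(s) remaining.`
  );
}
```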
This isn’t just about social media. Any system that learns from user behavior—like onboarding flows, recommendation engines, personalized dashboards, or adaptive content—will eventually face this same shift. The platforms are just showing us an early version of where things are going.
This is the territory of Growth Design Architecture, which is about designing not just for what users do, but for the systems that shape what they can do. It sits at the intersection of design strategy, behavioral systems, and business outcomes. Where traditional product design asks “how do we build this well?”, Growth Design Architecture asks “what kind of relationship does this system create, and is it one worth scaling?” The shift from attention to honesty isn’t a product decision. It’s an architectural one.
Designers who know how to build for what users say they want, not just what they do, will shape what comes next.