We Solved Creative Scale Once. Agentic AI Reopened the Problem
I remember working with a company during my Intuit days to exponentially increase creative output. At the time, it felt genuinely cutting edge. Crowdsourcing creative had just started to take off, and for the first time we could produce volume at a speed and scale traditional agencies simply couldn’t match. The breakthrough wasn’t just volume. It was velocity. More concepts, faster iteration, broader testing. For a brief moment, growth followed. Wish we had agentic AI back then!
Then the cracks showed.
Quality drifted. Feedback loops became noisy. Decision ownership blurred. Teams argued about what should ship and why. Learnings didn’t compound. The hardest problem wasn’t generating creative. It was operating the system around it.
Fast forward to today, and AI agents can now do much of what once required entire crowds. Creative generation, variation, localization, and optimization are no longer the bottleneck. Output is effectively infinite.
Yet the core challenge hasn’t changed. In some ways, it’s worse.
This is not a creativity problem. It’s not even a decision problem. It’s a decision execution problem.
When frameworks stop running
Most organizations already “use” RAPID (Recommend, Agree, Perform, Input, Decide), RACI (Responsible, Accountable, Consulted, Informed), or DACI (Driver, Approver, Contributors, Informed). These frameworks show up in onboarding decks, planning documents, and retrospectives. Leaders can usually explain them without much effort.
But if you sit in the meetings where decisions actually get made, those frameworks rarely operate as intended. Decisions stall. Accountability blurs. The same debates resurface quarter after quarter, often with more people and less clarity.
As organizations scale, the cost of this breakdown becomes measurable. Research and real-world operating experience consistently show that decision velocity degrades rapidly as complexity increases. Decision cycle time often slows by 30–50% as companies grow. Internally, 15–25% of launches, campaigns, or initiatives end up reworked because ownership was unclear or alignment came too late. That rework shows up directly in wasted spend, delayed revenue, and exhausted teams.
RAPID, RACI, and DACI were never meant to live as slides. They were meant to run. They describe how decisions should happen, but they were never given an execution layer to ensure they actually do.
What agentic AI actually changes
Agentic AI doesn’t replace judgment. It replaces the missing execution layer these frameworks never had.
The most useful way to think about agentic AI is not as a decision-maker, but as a decision participant. An agent observes workflows, executes defined roles, escalates when rules break, and learns from outcomes. When embedded properly, it turns static decision frameworks into operating systems.
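To make the “participant, not decision-maker” distinction concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption: the class names, fields, and the single escalation rule are invented for this post, not taken from any particular agent framework.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Decision:
    """A decision record the agent participates in but never owns."""
    question: str
    owner: str        # the human who holds the Decide role
    due: date
    assumptions: dict[str, float] = field(default_factory=dict)
    status: str = "open"

class DecisionParticipant:
    """Observes the workflow, executes one defined role, escalates on rule breaks."""

    def __init__(self, role: str):
        self.role = role   # e.g. "recommend", "input", "agree", "monitor"

    def observe(self, decision: Decision, today: date) -> None:
        # One concrete rule: an open decision past its date escalates to the
        # human owner instead of silently drifting.
        if decision.status == "open" and today > decision.due:
            self.escalate(decision, reason="decision deadline slipped")

    def escalate(self, decision: Decision, reason: str) -> None:
        print(f"[{self.role}] escalate to {decision.owner}: "
              f"'{decision.question}' ({reason})")

# The agent flags a stalled decision; the human still decides.
d = Decision("Ship the Q3 hero creative?", owner="CMO", due=date(2025, 7, 1))
DecisionParticipant("recommend").observe(d, today=date(2025, 7, 10))
```

Note what the class deliberately lacks: a decide() method.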
In a modern DTC (direct-to-consumer) or growth organization, this becomes very concrete.
An agent supports the Recommend role by assembling decision-ready briefs. It pulls from performance data, experimentation results, creative learnings, financial constraints, and operational metrics. What once took weeks of deck-building now takes hours, and tradeoffs become explicit instead of buried.
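A rough sketch of what that synthesis can look like, with stub functions standing in for real performance, experimentation, and finance systems. All names and numbers here are hypothetical:

```python
# Stub data sources; in practice these would be API calls into real systems.
def performance_data():
    return {"cac": 52.0, "roas": 2.1}

def experiment_results():
    return {"winning_variant": "B", "lift": 0.14}

def finance_constraints():
    return {"q3_budget_remaining": 120_000, "cac_target": 50.0}

def assemble_brief(question: str) -> dict:
    """Pull every input into one brief so tradeoffs are explicit, not buried."""
    perf, exp, fin = performance_data(), experiment_results(), finance_constraints()
    tradeoffs = []
    # Derive tradeoffs from the data instead of leaving them implicit in a deck.
    if perf["cac"] > fin["cac_target"]:
        tradeoffs.append(
            f"CAC {perf['cac']} already exceeds the {fin['cac_target']} target; "
            f"scaling variant {exp['winning_variant']} likely widens the gap"
        )
    return {"question": question, "performance": perf, "experiments": exp,
            "finance": fin, "tradeoffs": tradeoffs}

print(assemble_brief("Scale variant B to all markets?"))
```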
Another agent manages Input. Stakeholders still contribute, but the system normalizes feedback, highlights disagreement, and surfaces signal instead of noise. Decision-makers stop sorting raw opinions and start evaluating structured insight.
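A sketch of that normalization step, assuming stakeholder input arrives as structured (stakeholder, stance, note) records rather than a comment thread. The records and the disagreement rule are invented for illustration:

```python
from collections import Counter

# Hypothetical structured input gathered by the agent.
raw_input = [
    ("brand",   "oppose",  "off-tone for the rebrand"),
    ("growth",  "support", "best CAC we've seen this quarter"),
    ("finance", "support", "fits remaining Q3 budget"),
    ("legal",   "oppose",  "claim needs substantiation"),
]

def normalize(inputs):
    """Turn raw opinions into structured signal: stance counts, an explicit
    disagreement flag, and the objections the decision-maker must weigh."""
    stances = Counter(stance for _, stance, _ in inputs)
    # Flag disagreement when the minority view holds at least a quarter of votes.
    disagreement = len(stances) > 1 and min(stances.values()) / len(inputs) >= 0.25
    return {
        "stances": dict(stances),
        "disagreement": disagreement,
        "objections": [note for _, stance, note in inputs if stance == "oppose"],
    }

print(normalize(raw_input))
# {'stances': {'oppose': 2, 'support': 2}, 'disagreement': True, ...}
```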
Agreement stops being invisible. Agents compare assumptions against prior decisions and known failure patterns. Misalignment surfaces early, when it’s still cheap to resolve, rather than late, when it becomes political.
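One way to sketch that early-warning check: compare a new decision’s assumptions against a log of prior assumptions and their outcomes. A production system would match on semantics (embeddings, tags); naive keyword overlap keeps the sketch readable, and the log entries are invented:

```python
# Hypothetical log of prior assumptions and what actually happened.
decision_log = {
    "influencer whitelisting scales CAC-neutral": "failed: CAC rose 38% at scale",
    "holiday creative lifts all segments": "held in 2 of 3 tests",
}

def agreement_risks(new_assumptions: list[str]) -> list[str]:
    """Surface collisions with known failure patterns while they're cheap to fix."""
    risks = []
    for assumption in new_assumptions:
        for prior, outcome in decision_log.items():
            # Naive overlap check: do the two assumptions share enough words?
            shared = set(assumption.lower().split()) & set(prior.lower().split())
            if len(shared) >= 3 and outcome.startswith("failed"):
                risks.append(f"'{assumption}' resembles '{prior}' -> {outcome}")
    return risks

for risk in agreement_risks(["influencer whitelisting stays CAC-neutral at scale"]):
    print(risk)
```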
Execution no longer drifts quietly. Once a decision is made, agents monitor outcomes against the original assumptions. When reality diverges from intent, the signal shows up early, not after CAC spikes or a launch misses targets.
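That monitoring loop can be simple. A sketch, with the tolerance threshold and all numbers invented for illustration:

```python
# Assumptions captured at decision time vs. this week's actuals (hypothetical).
assumptions = {"cac": 48.0, "conversion_rate": 0.031}
actuals = {"cac": 61.0, "conversion_rate": 0.029}

def divergence_report(assumed, actual, tolerance=0.15):
    """Flag any metric drifting more than `tolerance` from the assumption the
    decision was made on, before the drift shows up in a quarterly review."""
    alerts = []
    for metric, expected in assumed.items():
        drift = (actual[metric] - expected) / expected
        if abs(drift) > tolerance:
            alerts.append(f"{metric}: assumed {expected}, "
                          f"actual {actual[metric]} ({drift:+.0%})")
    return alerts

print(divergence_report(assumptions, actuals))
# ['cac: assumed 48.0, actual 61.0 (+27%)']
```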
The Decide role remains human. It has to. But AI enforces decision hygiene by documenting rationale, preventing shadow decisions, and escalating when timelines slip.
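Decision hygiene is mostly bookkeeping that humans skip under pressure, which makes it easy to enforce in software. A sketch of a minimal decision ledger; the rules and names are illustrative:

```python
from datetime import date

ledger = {}  # decision_id -> the recorded owner, rationale, and deadline

def record(decision_id, owner, rationale, decide_by):
    """Humans decide; the system just refuses to lose the paper trail."""
    ledger[decision_id] = {"owner": owner, "rationale": rationale,
                           "decide_by": decide_by, "decided": False}

def execute(decision_id, today):
    entry = ledger.get(decision_id)
    if entry is None:
        # A shadow decision: work shipping with no recorded owner or rationale.
        raise RuntimeError(f"blocked: no recorded decision for '{decision_id}'")
    if not entry["decided"] and today > entry["decide_by"]:
        print(f"escalate to {entry['owner']}: "
              f"'{decision_id}' is past its decide-by date")

record("q3-hero-creative", owner="CMO",
       rationale="variant B won 3 of 3 tests; brand signed off",
       decide_by=date(2025, 6, 20))
execute("q3-hero-creative", today=date(2025, 6, 25))  # escalates: timeline slipped
try:
    execute("untracked-promo", today=date(2025, 6, 25))
except RuntimeError as err:
    print(err)                                        # blocked: shadow decision
```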
When this system is in place, something important changes. CEOs stop being bottlenecks. CMOs stop refereeing debates. Teams move faster with fewer reversals.
The measurable impact
The impact of operationalizing decision execution is not theoretical. Organizations that instrument decision workflows see preparation time for complex decisions drop by 40–60%. Decision cycle times compress by roughly 20–35%. Post-decision reversals decline materially because assumptions are explicit and tracked rather than implicit and forgotten.
Accountability starts to hold under scale, not because people suddenly behave better, but because systems enforce clarity.
This is where many AI initiatives go wrong. Teams jump straight to autonomy without fixing governance. They bolt AI onto broken decision rights and then act surprised when chaos accelerates.
AI does not fix broken operating models. It amplifies them.
How the best teams actually adopt this
The most effective organizations are not redesigning their entire operating model around AI. They start with a mature process that already works and instrument it.
They run RAPID, RACI, or DACI as-is. They measure baseline decision cycle time, rework rates, and outcome quality. Then they deliberately swap in agentic components one role at a time.
Recommendation synthesis comes first. Input normalization follows. Agreement risk detection comes next. Execution monitoring is layered in last.
Each change is measured, not assumed. If the process degrades, they roll it back. If it improves, they iterate. AI is treated as replaceable infrastructure, not a belief system.
This is not experimentation theater. It’s systems engineering.
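In code terms, the swap looks less like a platform migration and more like an A/B test on the process itself. A sketch, with stubbed components and made-up cycle times standing in for the real baseline measurement:

```python
import statistics

def human_brief_prep(item):   # baseline process (stub)
    return {"cycle_days": 9 + item % 3}

def agent_brief_prep(item):   # candidate agentic replacement (stub)
    return {"cycle_days": 4 + item % 3}

def trial(component, items):
    """Run one component over a batch of decisions; return median cycle time."""
    return statistics.median(component(i)["cycle_days"] for i in items)

baseline = trial(human_brief_prep, range(20))
candidate = trial(agent_brief_prep, range(20))

# Keep the swap only if the measured metric actually improves;
# otherwise roll back to the process that already works.
verdict = "keep agent" if candidate < baseline else "roll back"
print(f"baseline={baseline}d candidate={candidate}d -> {verdict}")
```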
The real question for DTC leaders
AI has made creative cheap. It has made output infinite. It has not made quality automatic.
The real question for CEOs and CMOs isn’t how much or how fast AI can produce creative. It’s whether the operating model can absorb that scale without breaking.
That was true in the era of crowdsourcing. It’s even more true in the era of agentic AI.
The teams that win won’t be the ones generating the most assets. They’ll be the ones whose decision systems can turn scale into learning, and learning into durable growth.
Sources & Further Reading
McKinsey & Company – How to make better decisions faster
https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/how-to-make-better-decisions-faster
McKinsey & Company – The case for behavioral decision-making at scale
https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/the-case-for-behavioral-decision-making-at-scale
Harvard Business Review – Why Good Leaders Make Bad Decisions
https://hbr.org/2013/02/why-good-leaders-make-bad-decisions
Harvard Business Review – A Smarter Way to Make Better Decisions
https://hbr.org/2011/11/a-smarter-way-to-make-better-decisions
Gartner – Improve Marketing Effectiveness Through Better Decision Making
https://www.gartner.com/en/marketing/insights/articles/improve-marketing-effectiveness-through-better-decision-making
Microsoft Work Trend Index – Will AI Fix Work?
https://www.microsoft.com/worklab/work-trend-index/will-ai-fix-work
OpenAI – AI and productivity research overview
https://openai.com/research
