The New Product Team
Most conversations about AI and product teams start with the existing roles and ask how to bolt AI onto them. I think that's backwards. A better frame: AI shifts where the value lives inside each job. The bounded execution work gets cheap. The parts that require judgment, taste, verification, and accountability get relatively more important. With the agent carrying the execution load, experts can validate more complex work and handle a wider variety of it than they could before.
If that's the shift, team design starts from the work itself. Strip off the titles, look at what a product team actually does, subtract the parts AI is good at, and see what kind of person fits the rest.
The work, before the tools
A product team handles roughly these activities: strategic framing and prioritization; user research, interaction design, and visual craft; architecture, production code, testing, and performance; SQL, dashboards, experiment analysis, and metric framing; causal inference, experimentation methodology, and proprietary ML modeling; pipelines, data modeling, semantic layers, and observability.
Every one of those is a broad bundle. Inside each, some activities are heavier on judgment and others are more bounded. Today most of the bounded work happens serially. PM writes a spec. Designer turns it into screens. Engineers turn those into code. Analyst instruments it and pulls numbers after it ships. Between each step is a translation, and each translation loses something. The PM explains what they want. The engineer explains what the system actually allows. The analyst explains what the data can and can't tell you. Most of the wall-clock time on any project gets eaten passing context between roles.
What the agent eats
The bounded parts are where AI is already competent. First-draft PRDs, code scaffolding, test generation, mockup variants, SQL writing, dashboard generation, pipeline boilerplate, experiment design, copy variants. These have clear inputs and outputs, and an expert can verify them faster than they could produce them from scratch. That asymmetry is the engine underneath everything else in this essay: where verification is cheap relative to production, the agent takes the floor.
None of it ships without verification. But the heavy lifting on most of it is no longer the bottleneck. What used to take a week of coordination takes an afternoon of prompting and review.
And the translation tax between roles collapses. The PM can get the code. The engineer can get the data. The designer can get the spec. Nobody has to wait for someone else to produce the intermediate artifact that used to be the handoff.
So if you take the bounded work out of each bundle, what's left?
- Judging whether the output is good enough to put in front of customers.
- Having the taste to know which hypothesis is worth testing in the first place.
- Encoding the business into a form the agent can actually reason about.
- Building the models, signals, and proprietary data the agent runs on, and proving what actually moved the metric.
These are the expert cores of the old roles, now freed from the overhead that used to surround them. They don't line up neatly with the existing titles. They have natural affinities that cut across the old role boundaries and suggest four archetypes.
1. Builders
Builders ship the product. The role merges the core crafts of engineering and design. With an agent in the loop, an engineer can prototype interactions and iterate on visual decisions. A designer can ship production code and reason about architecture. The overlap has expanded enough that one person can own the full loop from design through implementation.
The merge doesn't make everyone identical. Design-heavy builders lean toward interaction, frontend craft, and user experience. Engineering-heavy builders lean toward architecture, systems, and backend. Every builder can handle the full loop, but each has a center of gravity. Some technically fluent PMs sit here too.
The old defining skill was production speed, how fast you could turn an idea into working software. Every serious engineer has felt that shift over the last two years. What's scarce now is verification. Can you tell whether what the agent just produced is actually correct, actually accessible, actually safe, actually on-brand, actually consistent with the rest of the system it has to live in? It's harder than it sounds, and it gets harder as output volume goes up.
We're in the golden age of slop. Models are confident. Their output is coherent. It reads, it compiles, it passes smoke tests. Most of the time it's fine. Every so often it's quietly, dangerously wrong, in a way a fast review won't catch. The builder's job is to be the last line of defense between the agent and the customer. That's a harder and more important job than "someone who ships code."
You don't need as many of these people as a traditional team staffs across PM, design, and engineering combined. You need fewer, better ones, and you need to pay them like the liability is real. It is.
2. Product Strategy
This is the trickiest role and the most political. Product strategy brings together the analytical core of PM, BizOps, strategy, and product analytics into a single continuous loop: analyze the situation, develop a hypothesis about what will work, prototype something to test it (often with the agent), evaluate the results, iterate.
That cycle used to require a PM to frame the question, an analyst to pull the data, a strategist to form the recommendation, and an engineer to build the test. With agent support, one person with strong analytical judgment and strategic fluency can drive the whole loop. They ingest live data feeds, surface anomalies, generate candidate ideas, validate or reject them in hours instead of weeks. The constraint stops being cycle time and becomes the quality of the question and the fidelity of the world the agent is reasoning inside.
The role pairs tightly with builders. Strategic clarity is what turns building velocity into business velocity.
The people filling the seat are the strategic, analytically sharp PMs you already have a few of. The best business operations folks. The product analytics people who have always had the highest taste for what matters. I'm not fully settled on "product strategy" as the label, but the shape of the work is clear.
3. Context Engineers
This is the role that used to be treated as plumbing and now sets the ceiling on everything else. It brings together the representation core of analytics engineering and data engineering: semantic layer design, metric tree architecture, retrieval and eval infrastructure, data contracts. The old job was to get data out of production systems, clean it up, land it in Snowflake, and hand it to a BI tool or an analyst. The audience was humans reading dashboards.
The new audience is the agent. An agent asked "why did activation drop last Tuesday" can only answer well if the activation event is defined, canonical, and consistent across every place it shows up. An agent asked to propose a test can only propose a good one if the segments, metrics, and funnels are sitting there legibly. An agent asked to act, to roll out a change or flag an anomaly or adjust a dial, can only do it safely if the world it believes it's acting on is the world that actually exists.
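To make "defined, canonical, and consistent" concrete, here is a toy sketch of what a context engineer's metric registry might look like. Everything in it is hypothetical: the names (`MetricDef`, `resolve`, the `activation` definition) are invented for illustration, not taken from any particular semantic-layer tool.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MetricDef:
    """One canonical metric definition an agent can rely on."""
    name: str
    event: str        # the single event that counts toward this metric
    filter_sql: str   # the one agreed-on qualifier, not N local variants
    owner: str        # who is accountable for the definition


# The registry is the contract: every surface that mentions "activation"
# (dashboard, agent prompt, experiment readout) resolves to this one entry.
REGISTRY = {
    "activation": MetricDef(
        name="activation",
        event="first_key_action",
        filter_sql="account_age_days <= 7",
        owner="context-engineering",
    ),
}


def resolve(metric: str) -> MetricDef:
    """Fail loudly rather than let an agent guess at a definition."""
    if metric not in REGISTRY:
        raise KeyError(f"no canonical definition for {metric!r}")
    return REGISTRY[metric]
```

The design choice that matters is the failure mode: an undefined metric raises instead of returning a plausible guess, because a confident agent plus an ambiguous definition is exactly how the "quietly wrong" answers get made.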
The quality ceiling on every agent output a builder or product strategist trusts is set by this work. Strong context makes every agent interaction more reliable. That's what gives the role disproportionate leverage.
The second-order implication hasn't sunk in at most companies yet. Product development that leaves the production database in a mess, that requires heroics to extract clean state after the fact, used to be merely annoying. Now it's strategically expensive. Every shortcut a builder takes in how they model state is a tax the agent pays later, every day, forever. The mess compounds.
So the second bottleneck, after verification, is the fidelity of the business world-model inside the agent's context. The companies that keep treating this role as plumbing will be playing catch-up to the ones that treat it as strategic.
4. Data Science
Data science sits across the agent. They need to deeply understand how data is represented, modeled, and stored (the territory context engineers work in) and how to drive business value from it (ranking, recommendations, experimentation, causal inference). That dual depth is what makes them distinct. They connect representation to value.
Start with measurement. Most routine A/B testing should be self-serve: an experimentation platform, treatment-effect tooling, pre-built dashboards, guardrails. No human statistician in the loop for a standard lift test, any more than a routine query needs a DBA. Data science builds that platform and gets out of the way. What they keep is the custom work: causal inference where there's no clean experiment, long-horizon effects, switchback tests, diagnosing results that are too good to be true or too inconvenient for someone to acknowledge. When building is cheap and the agent is ripping through ideas, the danger is shipping a hundred things and not knowing whether any of them helped. Measurement, specifically the causal kind, is what keeps the team honest.
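A "standard lift test" is nothing exotic, which is exactly why it can be self-serve. A minimal sketch of the statistics underneath one, using the normal approximation for a two-proportion z-test; the counts here are made up for illustration:

```python
import math


def lift_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: control (a) vs treatment (b) conversion."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF via the error function.
    p_value = 1 - math.erf(abs(z) / math.sqrt(2))
    return p_b - p_a, z, p_value


# Hypothetical numbers: 4.8% vs 5.4% conversion on 10k users per arm.
lift, z, p = lift_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
```

With these made-up counts the observed lift looks meaningful but the p-value hovers just above 0.05, which is the point: the platform's job is to apply this mechanically and flag ambiguity, so the humans only get pulled in for the cases that genuinely need judgment.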
The second half of the role is where defensibility lives. Recommendation engines, ranking, search relevance, pricing, personalization, fraud. Anything where the product gets smarter as more customers use it. Foundation models commoditize fast. What doesn't commoditize is the model trained on what actually happens inside your product, on data competitors don't have.
And data science owns the substrate those models are trained on. What instrumentation captures the behavior competitors can't easily replicate? What proprietary dataset are you quietly building with every session? What signals are you gathering today that compound into a moat by year three? The data assets built now are the moat that stays once foundation models and agent frameworks are undifferentiated commodities.
Put it together: measure what moved, build the models the product runs on, and gather the data that makes those models better than anyone else's. Call it data science, with a sharper definition than the role used to have. This is where the defensibility of the business actually lives.
The picture
Strip the work a product team does down to its pieces. Cross out what the agent eats. Map what's left to the archetype that fits it.
None of these archetypes is strictly new. What's changed is the weight on each, how tight the loop between them has gotten, and the fact that the agent in the middle isn't owned by anyone. Everyone feeds it. Everyone consumes from it. All at once.
An illustrative team
To make this concrete: a traditional product pod is roughly one PM, one designer, ten engineers, one data scientist, plus fractional slices of a data engineer, bizops, and content. Ten to fifteen heads.
The version I'm describing runs more like four builders and four product strategists per pod, with context engineers and data science shared across several pods. That lands around eight dedicated heads plus a thin shared slice on top, call it ten in total. Roughly twenty percent fewer people, covering the same product surface, moving faster.
Numbers are illustrative. The shape is the point: fewer builders, more product strategists, context engineering promoted out of plumbing, sharper data science.
Talent caliber goes up with this shape. Fewer seats means every seat has to be filled by someone who can stretch across boundaries. The designer who can also write code. The engineer who can ship with design judgment intact. The product strategist who combines execution instinct, strategic framing, data chops, and the ability to prototype something real in an afternoon. These composite profiles have always existed. The new shape rewards them where the old one would have pigeonholed them.
And expertise matters more than ever. When the agent can produce anything, the person who can tell whether the thing is actually worth shipping is the one holding the keys. Deep domain knowledge and hard-won taste used to be a nice-to-have. Now it's what you're paying for.
What I'm still working out
The four-archetype shape is what I keep landing on when I decompose the work from first principles, and what I'm watching form in real time at AI-forward companies. I'm less sure how fast it spreads to companies that aren't, and I'm not confident the names will stick.
The underlying logic I do feel confident about. AI compresses bounded execution. Expertise gets more valuable, because experts can do more and verify more. Teams reorganize around the activities that still require human judgment. Experts with agent support are more valuable than the same experts were before.
What it adds up to
A team built this way is smaller, faster, and more opinionated than the one it replaces. The translation tax collapses, because there are fewer roles and the agent absorbs most of the handoff. Verification gets more expensive, because output volume is higher. So does measurement, because without it the whole thing is an illusion.
I'm pretty sure this is closer to the right shape than a traditional PM-designer-engineer-analyst pod is, and I think the teams that get there first will make the teams still staffing the old shape look slow in a way that's obvious within a quarter and embarrassing within a year.