AI, the Death of Taylorism, and the Rise of Context
If the Twentieth Century was the Taylorist Century, the AI Century will be the Context Century. Where product risk once lurked in the execution of individual tasks, in the Age of AI it lurks in hidden global context. The PMs of the AI era, whether human or AI, will be decreasingly assigning work, and increasingly finding and supplying context.
NB: For a good, albeit risky, drinking game, take a drink every time you encounter the word "Taylorism" in this post.
In 1911, Frederick Winslow Taylor published The Principles of Scientific Management, and the modern workplace was born. His insight was simple but revolutionary: break complex work into discrete, measurable tasks. Time each task. Optimize each task. Train workers to perform their specific tasks with maximum efficiency.
Taylorism gave us the assembly line, the org chart, the job description, and the task management system. It worked brilliantly for manufacturing. And when knowledge work emerged, we simply applied the same framework: break the work into tasks, assign the tasks, track the tasks, complete the tasks. And for a time, it was good.
Taylorism fit the zeitgeist of Peak Software Engineering well, especially the ZIRP Era of Software Engineering: hire expensive software engineers, and optimize, through product analysis, planning, and Taylorist work decomposition, the sets of tasks those engineers should work on. If software engineering is one of the paramount costs in a software organization, then it follows that engineers working on the wrong task is the biggest avoidable cost such an organization can encounter.
What happens, though, to all of this metering when "intelligence becomes too cheap to meter?"
I'll cut to the chase: we'll still need task decomposition, prioritization, and metering, but it would be the height of folly to persist in human task decompositions for machine workers, waterfall constraints in the age of instant software, and time-to-completion estimates for one-shotting LLM agents.
I don't have a crystal ball, nor do you, but I increasingly think the future of software product management looks less like "discretize into tasks and select the core subset of tasks, in order to optimize time-to-delivery", and more like "find and supply the right product context, in order to optimize accurate delivery". (If I'm being candid, I think PMs were already doing this unsung "glue" work to keep the corpse of Agile moving along.)
Now AI is arriving, and everyone's first instinct is the same: which tasks can AI do? We build AI tools that write emails, summarize documents, schedule meetings: discrete, atomized tasks that fit neatly into a Taylor-shaped hole.
This is exactly backwards.
The Taylorist Trap
The problem with task-based AI isn't that AI can't do tasks. It can, often better than humans. The problem is that task decomposition destroys the very thing that makes knowledge work valuable: context.
Consider what happens when you break "product management" into tasks:
- Write PRDs
- Prioritize backlog
- Run meetings
- Talk to customers
- Analyze metrics
- Coordinate with engineering
Each task, isolated, can be automated or assisted. But product management isn't a collection of tasks; it's the synthesis of information across all those activities into coherent judgment. The PM who just came from a customer call brings that context into the prioritization meeting. The insight from the metrics informs the PRD. The engineering constraints shape the customer conversation.
When you atomize the work, you atomize the context. And context is where the value lives.
This is Taylorism's fundamental assumption: that work can be decomposed without loss. That the whole equals the sum of the parts. For physical labor moving pig iron (Taylor's favorite example), this is approximately true. For much of the knowledge work we'd seek to apply AI to, it's catastrophically false.
The Context Revolution
As an AI builder, the criticism of LLM-based AI that resonates with me most is its data inefficiency relative to human learning. LLMs, in pretraining, consume vastly more data than children do over the course of their language acquisition. I think this generalizes, very roughly, to LLMs performing at superhuman levels on many large-context tasks and subhuman levels on many shallow-context tasks. (As an empiricist from the ML/NLP tradition who relishes evaluating machine performance on simple tasks on hold-out datasets, I find this all painful to say.)
So here's what I suspect is actually happening at companies struggling with AI adoption: because we've overly Taylorized human work (and its associated units of context), a lot of the actually useful work is that kind of "glue" work that PMs often do, and it is off the books, so to speak. The official, Taylorist-approved tasks that live in Jira and the like, and that give rise to AI POCs, are often too de-dimensionalized, too context-reduced, for LLM-based AI to do well and usefully.
The task, in other words, is an artifact of human limitation: our inability to hold enough context in working memory, our need for checklists and procedures, our finite attention.
AI doesn't have those limitations. So why are we forcing it into task-shaped boxes?
If I extrapolate my hypothesis further: just as early 20th century factory managers unlocked gains from electricity only when they redesigned the factory floor with electricity in mind, the firms that redesign their data and worker infrastructure (or design it for the first time) with deep context in mind will win the lion's share of gains from AI. (Apologies if I sound like a sales rep circa 2023 for Big Vector Store.)
Cutline, an AI Product Context Manager
I built Cutline to embody this "deep context" view of mitigating product risk. While there are definitely artifacts of "Taylorism" (MoSCoW priority analysis, for instance), Cutline repurposes such tools as "context communication devices", especially when they are plugged into coding agents via the Cutline MCP server. For example, the purpose of sending MoSCoW priorities to such an agent is less to instruct what to work on in which order, and more to inform the efficient frontier of possible all-at-once solutions.
Because Cutline establishes product definition and injects it via MCP into your favorite coding agent, the coding agent has direct visibility into what you are trying to build: the product context. That product context can help your coding agent decide the best technical approach for a given task, because the values assigned to trade-off criteria (call it "product sense") are made explicit in the product context.
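To make "priorities as context" concrete, here is a minimal, entirely hypothetical Python sketch of what such a payload might look like: MoSCoW priorities and explicit trade-off weights bundled into one structure a coding agent could read whole, rather than a queue of tasks to execute in order. The function name, field names, and values are all illustrative assumptions, not Cutline's actual API or MCP schema.

```python
import json

# Hypothetical sketch: MoSCoW priorities reframed as product context for a
# coding agent. All names here are illustrative, not a real API.

def build_product_context(product_goal, moscow, tradeoffs):
    """Bundle product definition into a single context payload.

    moscow:    dict mapping 'must'/'should'/'could'/'wont' -> feature lists
    tradeoffs: dict mapping criterion -> weight, making "product sense"
               explicit instead of leaving it implicit in a task queue
    """
    return {
        "goal": product_goal,
        # Priorities communicate the shape of acceptable solutions,
        # not a sequence of tasks to be completed one by one.
        "priorities": moscow,
        "tradeoff_weights": tradeoffs,
    }

context = build_product_context(
    product_goal="Self-serve onboarding for small teams",
    moscow={
        "must": ["email signup", "workspace creation"],
        "should": ["SSO"],
        "could": ["custom branding"],
        "wont": ["on-prem deployment"],
    },
    tradeoffs={"time_to_first_value": 0.6, "extensibility": 0.4},
)

# An MCP server would return this as a tool result; here we just print it.
print(json.dumps(context, indent=2))
```

The point of the shape: the agent sees the "wont" list and the trade-off weights alongside the "must" list, so it can pick a technical approach on the efficient frontier of all-at-once solutions instead of optimizing one ticket at a time.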
Cutline is built on this principle: not task automation, but context synthesis. We don't complete product management tasksâwe maintain comprehensive context and generate integrated judgment. Try a pre-mortem and see what outcome-oriented AI looks like.