AI, the Death of Taylorism, and the Rise of Context

If the Twentieth Century was the Taylorist Century, the AI Century will be the Context Century. Where product risk once lurked in the execution of individual tasks, in the Age of AI it lurks in hidden global context. The PMs of the AI era, whether human or AI, will be decreasingly assigning work, and increasingly finding and supplying context.

NB: For a good, albeit risky, drinking game, take a drink every time you encounter the word ‘Taylorism’ in this post.

In 1911, Frederick Winslow Taylor published The Principles of Scientific Management, and the modern workplace was born. His insight was simple but revolutionary: break complex work into discrete, measurable tasks. Time each task. Optimize each task. Train workers to perform their specific tasks with maximum efficiency.

Taylorism gave us the assembly line, the org chart, the job description, and the task management system. It worked brilliantly for manufacturing. And when knowledge work emerged, we simply applied the same framework: break the work into tasks, assign the tasks, track the tasks, complete the tasks. And for a time, it was good.

Taylorism fit the zeitgeist of Peak Software Engineering well, especially the ZIRP Era of Software Engineering: hire expensive software engineers, then optimize, through product analysis, planning, and Taylorist work decomposition, the set of tasks those engineers should work on. If software engineering is one of the paramount costs in a software organization, it follows that engineers working on the wrong task is the biggest avoidable cost such an organization can incur.

What happens though to all of this metering, when ‘intelligence becomes too cheap to meter?’

I’ll cut to the chase: we’ll still need task decomposition, prioritization, and metering, but it would be the height of folly to persist in human task decompositions for machine workers, waterfall constraints in the age of instant software, and time-to-completion estimates for one-shotting LLM agents.

I don’t have a crystal ball, nor do you, but I increasingly think the future of software product management looks less like “discretize into tasks and select the core subset of tasks, in order to optimize time-to-delivery”, and more like “find and supply the right product context, in order to optimize accurate delivery”. (If I’m being candid, I think PMs were already doing this unsung ‘glue’ work to keep the corpse of Agile moving along).

Now AI is arriving, and everyone's first instinct is the same: which tasks can AI do? We build AI tools that write emails, summarize documents, schedule meetings—discrete, atomized tasks that fit neatly into a Taylor-shaped hole.

This is exactly backwards.

The Taylorist Trap

The problem with task-based AI isn't that AI can't do tasks. It can, often better than humans. The problem is that task decomposition destroys the very thing that makes knowledge work valuable: context.

Consider what happens when you break "product management" into tasks:

  • Write PRDs
  • Prioritize backlog
  • Run meetings
  • Talk to customers
  • Analyze metrics
  • Coordinate with engineering

Each task, isolated, can be automated or assisted. But product management isn't a collection of tasks—it's the synthesis of information across all those activities into coherent judgment. The PM who just came from a customer call brings that context into the prioritization meeting. The insight from the metrics informs the PRD. The engineering constraints shape the customer conversation.

When you atomize the work, you atomize the context. And context is where the value lives.

This is Taylorism's fundamental assumption: that work can be decomposed without loss. That the whole equals the sum of the parts. For physical labor moving pig iron (Taylor's favorite example), this is approximately true. For much of the knowledge work we’d seek to apply AI to, it's catastrophically false.

The Context Revolution

As an AI builder, the criticism of LLM-based AI that resonates with me most is its data inefficiency relative to human learning: in pretraining, LLMs consume vastly more data than a child encounters over the entire course of language acquisition. I think this generalizes, very roughly, to LLMs performing at superhuman levels on many large-context tasks and at subhuman levels on many shallow-context tasks. (It pains me to say all this, as an empiricist from the ML/NLP tradition who relishes evaluating machine performance on simple tasks against hold-out datasets.)

So here's what I suspect is actually happening at companies struggling with AI adoption: because we've overly Taylorized human work (and its associated units of context), much of the genuinely useful work is that kind of 'glue' work PMs often do, which is off the books, so to speak. The official, Taylorist-approved tasks that live in JIRA and the like, and that give rise to AI POCs, are often too de-dimensionalized, too context-reduced, for LLM-based AI to do well and usefully.

The task, then, is an artifact of human limitation: our inability to hold enough context in working memory, our need for checklists and procedures, our finite attention.

AI doesn't have those limitations. So why are we forcing it into task-shaped boxes?

If I extrapolate my hypothesis further: just like early 20th century factory managers unlocked gains from electricity only when they redesigned the factory floor with electricity in mind, the firms that redesign their data and worker infrastructure (or design it for the first time) with deep context in mind, will win the lion’s share of gains from AI. (Apologies if I sound like a sales rep circa 2023 for Big Vector Store).

Cutline, an AI Product Context Manager

I built Cutline to embody this ‘deep context’ view of mitigating product risk. While there are definitely artifacts of ‘Taylorism’ (MoSCoW priority analysis, for instance), Cutline repurposes such tools as ‘context communication devices’, especially when they are plugged into coding agents via the Cutline MCP server. For example, the purpose of sending MoSCoW priorities to such an agent is less to instruct what to work on in which order, and more to inform the efficient frontier of possible all-at-once solutions.

Because Cutline establishes product definition and injects it via MCP into your favorite coding agent, the coding agent has direct visibility into what you are trying to build: the product context. That product context can help your coding agent decide the best technical approach for a given task, because the values assigned to trade-off criteria (call it 'product sense') are made explicit in the product context.
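To make the idea concrete, here is a minimal sketch of what "priorities as context, not as a task queue" could look like. All names and the schema are illustrative assumptions for this post, not Cutline's actual API or MCP payload format:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    priority: str   # "must" | "should" | "could" | "wont"
    rationale: str  # the *why* behind the priority -- the context an agent can weigh

def to_agent_context(reqs: list[Requirement]) -> str:
    """Render MoSCoW priorities as a single context block the agent sees
    all at once, rather than an ordered to-do list it executes one by one.
    (Hypothetical format, not Cutline's real schema.)"""
    rank = {"must": 0, "should": 1, "could": 2, "wont": 3}
    lines = ["Product context (MoSCoW):"]
    for r in sorted(reqs, key=lambda r: rank[r.priority]):
        lines.append(f"- [{r.priority.upper()}] {r.name}: {r.rationale}")
    return "\n".join(lines)

reqs = [
    Requirement("offline mode", "could", "nice-to-have for field users"),
    Requirement("export to CSV", "must", "blocker for the pilot customer"),
]
print(to_agent_context(reqs))  # MUST line renders before the COULD line
```

The point of the sketch: nothing here tells the agent *what to do next*. It communicates the shape of the trade-off space, so the agent can pick an all-at-once solution that sits on the efficient frontier.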

Cutline is built on this principle: not task automation, but context synthesis. We don't complete product management tasks—we maintain comprehensive context and generate integrated judgment. Try a pre-mortem and see what outcome-oriented AI looks like.

