Citation: Dell'Acqua, F., McFowland III, E., Mollick, E., Lifshitz, H., Kellogg, K. C., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2026). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of Artificial Intelligence on Knowledge Worker Productivity and Quality. Organization Science. https://doi.org/10.1287/orsc.2025.21838
PDF: dell-acqua-et-al-2026-navigating-the-jagged-technological-frontier.pdf · CC-BY 4.0 · Published online 11 March 2026
Why this is in our research: This is the paper that introduced the "jagged technological frontier" concept we borrow for the title of pitch-and-overview.md ("The Other Jagged Frontier"). We owe the source a credit and a clear statement of what we mean by other. The paper is also a foundational empirical reference for any claim about AI's heterogeneous effects on human workers — useful well beyond the title nod.
## What the paper shows
Preregistered field experiment with 758 BCG knowledge workers assigned to one of three conditions: no AI, GPT-4 access, or GPT-4 + a prompt-engineering overview. Across 18 realistic management-consulting tasks spanning creative-to-analytical work:
- Inside the AI capability frontier: subjects with AI completed 12.2% more tasks, 25.1% faster, and produced significantly higher-quality solutions than the no-AI control.
- Outside the frontier (one task deliberately chosen to sit in AI's weak zone): subjects with AI were 19 percentage points less likely to produce a correct answer than the no-AI control.
The "jagged frontier" is the metaphor for this asymmetry: AI capability is not a smooth gradient. Some tasks are inside the frontier and AI augments performance dramatically; some tasks superficially look identical but are outside it, and AI use actively degrades performance. Workers cannot reliably tell which is which from the task surface.
The paper also identifies two adaptation patterns:
- Centaurs — humans who divide work between themselves and AI by task, using AI inside the frontier and not outside it.
- Cyborgs — humans who interleave AI use deeply into their own work moves, regardless of frontier position.
Both patterns can be effective; both fail in characteristic ways when applied to the wrong side of the frontier.
## How we use it
The title. We borrow "jagged frontier" because the metaphor maps onto the population our pitch is about, but inverted:
- Their jagged frontier: knowledge workers above the digital-fluency floor, who must learn which tasks AI can vs. cannot handle.
- Our "other jagged frontier": the third to half of US adults who never reach that floor in the first place. For them the frontier is not which AI tasks succeed — it is whether they can operate the cognitive scaffold that AI use requires at all.
Both populations face frontier-recognition problems. Theirs is which task is inside AI's capability. Ours is which adults are inside the cognitive prerequisite for any AI task. Naming our pitch "the other jagged frontier" is a deliberate echo, not a coincidence.
The empirical claim about AI heterogeneity. When we say "AI's productivity gains are real and growing, but they are accruing only to people already above this line," the Dell'Acqua paper is part of why we can say real and growing without hedging — the +12% / +25% / quality effects on the right side of the frontier are unusually clean evidence in the AI-and-work literature.
The metacognition theme. The paper's "centaur vs. cyborg" framing maps directly onto our pedagogy's emphasis on metacognitive monitoring. A worker who cannot tell when AI's output is wrong is operating below the frontier whether or not they know it — exactly the third capacity in our ladder ("evaluate AI output against the real task").
## Citation strategy
- Title footnote in pitch-and-overview.md: credit Dell'Acqua et al. for the original metaphor and briefly explain what other means.
- We do not need to cite the paper for the broader "AI augments knowledge work" claim; there is a richer literature for that (Eloundou et al., Brynjolfsson et al., Noy & Zhang). The Dell'Acqua paper is specifically the source of the jagged-frontier framing.
## Open threads
- The paper's "outside the frontier" finding (AI use → 19% worse) has implications for our curriculum design: teaching adults to recognize when AI is wrong is harder than teaching them to use AI when it's right. This belongs in pedagogy.md if it's not already there.
- Frontier identification as a teachable skill is underexplored in the paper itself: the authors document the asymmetry but do not train workers to recognize it. That's an opening our project's metacognitive design could address.