context transfer is dead

on the thing nobody's talking about, the spreadsheet that broke my brain, and why your 21-year-old intern might be more valuable than your VP.

April 2, 2026 · 14 min read

I host an AI consumer meetup in Seattle. Every month we have 50-80 people show up. Running it requires the boring stuff nobody thinks about: tracking RSVPs, updating a spreadsheet with attendee info, pulling LinkedIn profiles, organizing follow-ups.

My co-host has a real Executive Assistant. Her name is Jan. She's good at her job.

Here is the version history of the spreadsheet Jan maintains for our events:

200+ individual edits. Spanning February 6th through February 20th. On February 12th alone, Jan made 80+ edits between 7:31 AM and 2:32 PM. Seven hours. One person. Tab-switching between LinkedIn, the RSVP list, and the spreadsheet. Copy. Paste. Format. Next row. Copy. Paste. Format. Next row.

I did the same thing last week. It took 3 prompts.

Thirty minutes in Doozy replaced countless human hours of work. Work I used to do myself. I remember wishing this was solved. And I can't believe that it is.

Not in an overly complex, technical, CLI-native way (👀). In a way that anyone, at any level of "AI fluency," can interact with.

Not 3 hours. Not 3 sessions. 3 prompts. The AI pulled the RSVP data, cross-referenced LinkedIn profiles, populated the spreadsheet, and formatted everything. While I was drinking coffee.

This is not a productivity improvement. A productivity improvement is 20% faster. This is a category change. And I think it reveals something about how work actually works that very few people are thinking about clearly.


Here's the thing everybody gets wrong about why knowledge work is slow.

The standard complaint is "I'm in too many meetings" or "I spend all day on email" or "Slack is killing me." And people look at those stats - Microsoft says 57% of your day is communication, Asana says 60% is "work about work" - and they conclude: we need fewer meetings. Better tools. Focus time.

But that's treating communication like it's one thing. It's not.

Communication is actually two completely different activities wearing the same trenchcoat:

1. Discussion and planning. "What should we build? Why? What's the priority? What does the customer actually need? Is this the right approach?" This is irreducibly human. It requires judgment, taste, context, and the ability to sit with ambiguity. This is the good stuff. This is where value gets created.

2. Context transfer. "Let me get you up to speed." "Here's what happened in that meeting." "Can you update the spreadsheet with the new attendee info?" "I need you to pull the data from this tool and put it in that tool." Moving information from Point A to Point B in a format someone else can use.

Most people have never separated these two in their head. They feel like the same activity because they happen in the same meetings, the same Slack threads, the same email chains.

But they are fundamentally different kinds of work. One requires a human brain. The other requires... a clipboard.

And AI just made the clipboard one free.


Six months ago, Tyler and I figured something out while building our product.

We're two founders. No engineers. Our codebase is 300,000 lines. We have 3-6 AI coding agents running in parallel at any given time. We wrote about the workflow recently and it blew up because people couldn't believe it.

The whole thing runs on four steps: discussion, plan, implement, review.

Humans do steps 1 and 2. The AI does steps 3 and 4.

That's it. That's the entire insight. We think. We decide. We spec. Then the AI builds it, and we review what it built.

But here's what took us months to realize: those four steps aren't a coding workflow. They're a THINKING workflow. And they apply to literally everything.

Content? I'm creating all of next week's pieces right now, in under 30 minutes. The old version of this was a week of effort. Staring at blank docs. Multiple drafts. Research tabs open everywhere. That feeling of "I know roughly what I want to say but I can't quite get there."

Now? I do the discussion (I ramble at my AI about what I'm thinking, it probes me with hard questions until the idea sharpens). I do the planning (here's the structure, here's the angle, here's what makes it different). Then the AI drafts and I review. Same four steps. Same split.

Admin work? The spreadsheet thing with Jan. 200+ edits across five days. An entire human being doing context transfer - pulling from one source, reformatting, putting in another source. I do the discussion ("here's the event, here's the RSVP list, I need LinkedIn info and contact details in this format"). The AI executes. I review. Three prompts.

Financial reports? We pull 12-report Xero exports that used to take half a day. Same pattern. I tell it what I need and why. It goes and gets it.

The workflow is always the same. The human does the thinking and the tasting. The AI does the context transfer and the execution. And the gap between those two groups of activities is so large that when you separate them, it feels like time travel.


Ok so here's where it gets scary.

I keep saying "discussion" and "planning" are the human parts. And they are. But let me be specific about what makes them human.

It's not knowledge. It's not having information in your head. It used to be - the reason you needed a senior engineer or a veteran marketer or a subject matter expert was because they HELD context. They'd seen the patterns. They knew where the bodies were buried. They had 20 years of "well, the last time we tried that..." loaded into their neural networks.

But context retrieval is now a process. Not a person.

When I run /discussion on a feature, the AI spawns subagents that explore the entire 300k-line codebase before I even ask a question. It already knows the patterns. It already found the relevant files. It already mapped the dependencies. The context that used to live exclusively in a senior engineer's head is now available to anyone who knows how to ask for it.

You will never have perfect planning. Ambiguity is irreducible. Humans will always need to make judgment calls, weigh tradeoffs, decide what to prioritize.

But you can have near-perfect context gathering.

And when context gathering becomes free - when anyone can retrieve, synthesize, and format huge amounts of information in seconds - what happens to the person whose entire value was holding that context?

I'm not saying experts become useless. I'm saying expertise gets redefined. The value stops being "I know things you don't" and starts being "I have taste you don't." The ability to look at the AI's output and say "no, that's wrong, and here's why" - that requires having done the hard version yourself, having built the muscle, having made the mistakes.

But the ability to look something up, compile a report, transfer information from one system to another, get someone up to speed on what happened yesterday?

That's just context transfer. And it's dead.


I have a confession. This whole thing is starting to scare me.

Not in a "the robots are coming" way. In a "the asymmetry is so extreme that it feels unfair" way.

I am sitting here right now, creating a week's worth of high-quality content - blog posts, LinkedIn posts, Twitter threads - in under 30 minutes. Not bullet point outlines. Not "starter drafts I'll spend 4 hours refining." Actual pieces. With research. With stats. With personal anecdotes woven in. With structure. The kind of content that used to represent a week of my focused effort.

The spreadsheet that took Jan 5 working days, I did while finishing my coffee.

Tyler and I built 300,000 lines of production software with zero engineers.

And here's the thing - I'm not even particularly special at this. I'm a 22-year-old who figured out a workflow. The workflow isn't magic. It's four steps: discuss, plan, implement, review. Anyone can learn it.

But almost nobody IS learning it. Because almost nobody is proactive enough to sit down and figure it out from scratch. They're waiting for a course. Or a YouTube tutorial. Or their company to mandate it. Or someone to show them how.

The extreme amount of value this unlocks is so asymmetric for proactive people that I don't think most people have processed it yet.

Steve Yegge wrote about the AI Vampire - who captures the value from AI productivity gains? His answer: it depends, and the fight between employers and employees over that value is going to define the next decade.

But I think there's a layer underneath. Before we even get to "who captures the value between employer and employee" - the value only exists if someone is proactive enough to create it. And right now? The world splits cleanly into people who are creating insane amounts of value with these tools and people who opened ChatGPT once, asked it to write an email, thought "meh," and went back to copy-pasting between tabs.

The gap isn't 10%. It's not 2x. It might be 50x. And it's invisible. Because the proactive person and the reactive person look the same on paper. Same title. Same salary. Same Slack avatar. One of them is operating in a completely different reality.


I keep meeting people who prove this.

Two of my close friends - both 21 - just joined a company founded by Shivaa, the ex-iPod founder, called Arkero. They got hired because they knew how to wield AI agents effectively for engineering. Not because they had 10 years of experience. Not because they had Stanford CS degrees. Because they could do the discuss-plan-implement-review loop without being taught it. They'd absorbed it by using the tools every day since they were available.

I'm watching this happen across the board in Seattle. A notable startup recently laid off 35% of its engineers because the team was resistant to adopting AI. The people who replaced them aren't more experienced. They're more proactive. They're AI-native.

I ran an incubator program at AI2 for two years. I watched dozens of startups. The pattern was always the same: the founders who won were the ones who moved first. Not the ones with better ideas, or more funding, or deeper expertise. The ones who tried things before they were ready. Who shipped before it was polished. Who sent the cold email before they had a reason to.

That was true before AI. But AI turned it into an exponential. Because now, moving first doesn't just give you a head start. It gives you a compounding advantage. Every time you delegate something to AI, it gets slightly better at doing that thing for you. The training data improves. Your prompting gets cleaner. Your review gets sharper. The feedback loop tightens.

And the people who haven't started yet? They're not just behind. They're falling behind faster every day. Because the gap compounds.


Here's what I think is happening at a macro level, and I think it's the most important thing in this post:

The people best positioned for this aren't the 40-year-old corporate experts with decades of experience.

They're the kids in high school and college right now.

I know this sounds like a hot take. It's not. Let me explain.

The corporate expert's value was always a combination of taste AND context. They had good judgment (taste) AND they knew where all the information was (context). These two things were bundled together because context was expensive to acquire. You had to work somewhere for 10 years to know where the bodies were buried.

AI unbundles them.

Context is now free. Anyone with the right workflow can retrieve, synthesize, and apply context from any domain in minutes. The 10 years of institutional knowledge? It's in the docs, the codebase, the email archives, the meeting transcripts. The AI can find it. The AI can synthesize it. The 21-year-old with the right process has access to the same context as the 20-year veteran.

So what's left? Taste. Judgment. The ability to evaluate, decide, and direct.

And here's the uncomfortable truth: taste isn't exclusive to experience. Some people have it at 21. Some people never develop it. It's a muscle, not a credential. And the new generation is building that muscle faster than any generation before them because they're spending more time in the discussion-and-planning layer (the taste layer) than any previous generation ever could. The context transfer that used to eat 60% of their day is just... gone.

They're not skipping the hard parts. They're spending MORE time on the hard parts. Because the easy parts disappeared.

When I was 17, I spent weeks manually compiling course information for Course Finder. Most of that was context transfer - looking up data, formatting it, entering it. The actual DESIGN thinking - what do students need? how should this be organized? what's the UX? - was maybe 10% of my time.

A 17-year-old building Course Finder today would spend 90% of their time on the design thinking. The data compilation takes 3 prompts.

That kid is going to develop better taste in 6 months than I developed in 3 years. Not because they're smarter. Because the noise got cleared away and they're spending all their time on the signal.


I keep coming back to something we figured out while building our dev workflow.

The quality of your implementation is entirely downstream of the quality of your planning. We know this because when we skip the discussion and planning phases and go straight to implementation, we end up in what we call the death loop - hours of the AI trying and failing to build something because the spec wasn't clear enough.

The fix is always the same: go back to /discussion. Understand more. Plan more precisely. Scope more tightly.

The death loop is what happens when you try to skip the human part.

And I think the death loop is what most companies are in right now with AI. They're giving people Claude and saying "go be productive" without teaching them the discuss-plan-implement-review workflow. So people jump straight to implementation. They get garbage output. They conclude AI isn't ready. And they go back to copy-pasting between 350 SaaS apps.

The tool is ready. The workflow exists. But it requires proactivity to learn. You have to sit down, figure out how to separate your thinking (discussion/planning) from the execution (implementation/review), and then trust the AI with the execution part.

Most people won't do that. Not because they can't. Because everything in modern work is designed to keep them reactive. 275 pings a day. A notification every 2 minutes. 71% instant response rate. 47-second average attention span.

And here's the final piece that keeps me up at night: as the proactive people delegate more, the AI gets better training data. The bar for what you can delegate rises. Six months ago, I couldn't delegate content creation - the output was too generic, too robotic. Now? This blog post was drafted by the same AI that knows my voice, my stories, my stats, my opinions, my philosophical hangups. Because I've been feeding it all of that for months. The training happened through use.

The ceiling keeps rising. But only for the people who are already in the room.

I think we're watching the emergence of a new kind of divide. Not digital literacy ("can you use a computer?"). Not AI literacy ("can you write a prompt?"). Something deeper. Process literacy. "Can you separate thinking from execution and delegate the execution to something that can do it 100x faster than you?"

The people who figure this out will look like they're cheating. They're not. They're just operating in a world where context transfer is dead, and they've reorganized their entire life around the thing that's left: taste, judgment, and going first.

Everyone else will be Jan. Smart, capable, hardworking. Copy. Paste. Format. Next row.

200 edits over 5 days.

Or 3 prompts over coffee.

It's a crazy time.

-parsa

-parsa