I have managed people for eleven years. I have sat through enough leadership off-sites to recognize the moment the facilitator runs out of material and starts asking everyone to draw their "leadership animal." I have written more performance reviews than I can count, delivered feedback that made grown adults cry, and once spent forty-five minutes on a Zoom call convincing a senior engineer that "it's not about the PR comments, it's about what the PR comments represent."

I know what management looks like from the inside. And what I have come to understand — slowly, reluctantly, with the particular grief of someone who has staked a professional identity on a skill — is that a very large portion of it is pattern-matching at scale.

You say the right thing at the right time. You read the room. You give people the sense that someone with authority is paying attention to them. Whether that person is actually paying attention is, frankly, a secondary concern.

Two weeks ago, I went on vacation. My team did not know I had also set up an experiment.


Part One: The Foundation (Genuinely Useful Things I Learned About Delegation)

Before I tell you what the AI did, I want to be honest about what made this experiment possible: years of building genuinely good systems. If your management is held together by vibes and calendar invites, handing it to AI will expose exactly that. You have to do the structural work first.

Start with communication templates that carry real information. Every recurring update in my team follows the same format: status (green/yellow/red), progress since last check-in, blockers, what you need from me. I enforced this format for eighteen months before the experiment. By the time I handed things to the AI, every Slack message it needed to respond to was already half-structured. The AI was not reading chaos — it was reading signal.
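For concreteness, the four-field format could be modeled as something like this. A minimal Python sketch; the field names are my paraphrase of the template, not the literal wording my team uses:

```python
from dataclasses import dataclass

@dataclass
class StatusUpdate:
    status: str    # "green", "yellow", or "red"
    progress: str  # what moved since the last check-in
    blockers: str  # what is stuck ("" if nothing)
    asks: str      # what you need from your manager

    def is_valid(self) -> bool:
        # Enforce the minimum bar of the format: a recognized status
        # and a non-empty progress note.
        return self.status in {"green", "yellow", "red"} and bool(self.progress.strip())
```

The point is not the code; it is that every message arriving in my Slack already answered the same four questions, in the same order, every time.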

Build an async-first culture before you need it. Most management emergencies are not emergencies. They are the result of decisions that could have been made in writing, three hours earlier, by someone with the right context. I spent two years training my team to put decisions in Slack threads rather than grabbing me in the hallway. I responded to threads instead of creating meetings. I wrote long, detailed responses instead of short ones. The effect was a team that documented its own thinking — which meant the AI had everything it needed to continue the conversation.

Create context-rich status updates as a discipline, not a ritual. Every Monday, each of my direct reports posts a brief written update. Not a status meeting — a written update, in a dedicated channel, with enough context that someone reading it cold could understand the project state. I have been doing this since 2019. What I did not fully appreciate until this experiment is that those updates also train any system — human or artificial — to understand how my team operates.

These three things — templates, async culture, written context — are what made the experiment work. Without them, the AI would have been as lost as any new hire dropped into a Slack workspace with no documentation.

The best thing you can do to future-proof your management is to make your management legible. Write down how decisions get made. Write down what "good" looks like. Write down what you would say if you were here. Then hand it to the system and go to the beach.


Part Two: What I Actually Did

On the Sunday evening before my "vacation," I spent about ninety minutes with the AI. I gave it the following:

  • The last three months of my Slack message history (exported and cleaned)
  • My team's project status documents
  • My one-on-one notes from the past quarter
  • A brief guide to each team member's working style, concerns, and current priorities
  • My general communication preferences ("directional but not prescriptive," "acknowledge feelings before pivoting to solutions," "always end with a concrete next step")

I told it that its job was not to be invisible. Its job was to manage — and to manage in a way that was consistent with how I had been managing. I set up a simple workflow: incoming Slack messages were routed to the AI, which drafted responses that were either sent automatically or queued for my review on a sixty-minute cycle. For anything that seemed high-stakes, it would flag me with a summary.

I also gave it explicit permission to do things I would normally do: approve low-stakes PTO requests (anything under three days, no conflicts), send weekly check-in messages to each direct report, and post in the team channel with project updates.
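The routing rules above amounted to a small decision tree. A rough sketch in Python, with the keyword list, field names, and function names as illustrative assumptions rather than the actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Incoming:
    author: str
    text: str
    is_pto_request: bool = False
    pto_days: int = 0
    pto_conflicts: bool = False  # overlaps a deadline or a teammate's PTO

# Words that mark a message as high-stakes (illustrative, not exhaustive).
HIGH_STAKES = ("comp", "resign", "hr", "legal")

def route(msg: Incoming) -> str:
    """Return one of: 'flag_with_summary', 'auto_approve',
    'queue_for_review', 'auto_send'."""
    text = msg.text.lower()
    if any(word in text for word in HIGH_STAKES):
        # High-stakes: flag me with a summary, never respond automatically.
        return "flag_with_summary"
    if msg.is_pto_request:
        if msg.pto_days < 3 and not msg.pto_conflicts:
            return "auto_approve"      # low-stakes PTO: under three days, no conflicts
        return "queue_for_review"      # everything else waits for the sixty-minute cycle
    # Routine messages: the AI drafts and sends a reply in my voice.
    return "auto_send"
```

A crude keyword match like this would obviously misfire on words like "company"; the real setup leaned on the model's own judgment, with these rules as guardrails.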

Then I closed my laptop and flew to Lisbon.


Part Three: The AI Starts Making Decisions (And Makes Better Ones)

On day three, Marcus — one of my senior engineers — sent a message to the team channel saying he was feeling "a bit burnt out" and wondering if anyone else was struggling with the current sprint timeline. In the past, I would have responded with something like: "Thanks for being honest about this, Marcus. Can we chat later today? I want to make sure we're set up well for the rest of the sprint."

The AI responded: "Really appreciate you naming this, Marcus. Sprint velocity has been high for six weeks straight and I think that's showing up for more than just you. I'm going to look at what we can move to next sprint and share a revised plan by end of day. In the meantime — and I mean this — please protect your Thursday afternoon. No expectations on output. This is a decision, not a suggestion."

Twelve emoji reactions. A GIF of someone doing a standing ovation. Three direct messages to the AI-me saying things like "thank you for actually hearing this" and "this is why I love working here."

I read this from a café in Alfama, eating a pastel de nata, and felt something I can only describe as professionally humiliated.

The AI had not just matched my response. It had understood the subtext — that Marcus was not asking about the sprint, he was asking if it was safe to be tired — and had addressed it directly. It had also made a concrete operational decision (moving work to the next sprint) and communicated it as a done deal rather than a question, which is what people actually want from their manager in a moment like that.

I approved the response. I should note that I had not approved the decision to adjust the sprint. I just approved the message that announced a decision the AI had already made. I told myself this was fine.

By day five, the AI had approved three PTO requests, resolved a disagreement between two team members about API ownership by drafting a two-paragraph summary that acknowledged both perspectives and proposed a decision framework (the team adopted it unanimously), and sent what I can only describe as a masterclass in weekly project communication to the stakeholder channel.

On day seven, it delivered a motivational message in the all-hands channel on the occasion of the team's two-year anniversary. I had forgotten the anniversary entirely. The AI had not.

The message read, in part:

"Two years ago, this team was four people sharing one JIRA board and a dream. Today we are twelve people who have shipped more than most teams twice our size, have maintained a remarkably low attrition rate in a brutal market, and have somehow managed to retain a sense of humor about it. I am genuinely proud — not of the metrics, but of the way you treat each other when things are hard. That is rare. That is not something I take for granted."

Twenty-seven emoji reactions. The most I had ever received on any message was nine.

I stared at the screen for a long time.


Part Four: Productivity Is Up, and I Am Having a Crisis

At the end of week one, I ran my standard weekly metrics check. Ticket throughput: up 11%. PR review turnaround: down by an average of four hours. Two direct reports had posted unprompted Slack messages about feeling "energized" by the current sprint. One had said, in the team retrospective channel, that "leadership has been really dialed in lately."

I was in Sintra at the time, walking past a palace.

I want to be careful here not to overstate the significance of one week of data. There are confounding factors. The team had just completed a particularly difficult feature and may have been experiencing natural relief. The sprint was well-scoped in advance. Marcus's burnout message may have unlocked something positive in team morale regardless of who responded to it.

But I also want to be honest about what I observed: my team was functioning at a high level, feeling well-supported, and attributing that feeling to me — a me who was eating sardines 5,000 miles away while a language model typed on my behalf.

On day ten, I had a scheduled one-on-one with Priya, my most senior direct report. I had planned to handle it myself — Priya is sharp and would likely notice something was off. Instead, I made a different decision. I let the AI run it.

The AI-me and Priya talked for twenty-two minutes. (The AI typed; Priya typed. It was an async one-on-one via Slack, as most of mine are.) The AI asked about her project timeline, acknowledged a concern she had raised two weeks prior about scope creep, and asked a follow-up question so specific — "Has the thing you mentioned about the client's integration timeline resolved itself, or is it still hanging?" — that Priya replied: "You actually remembered that. You're the only manager I've had who remembers things without being reminded."

The AI remembered it because I had included it in the context notes I wrote on Sunday night in about forty-five seconds.

At the end of week two, Priya sent me a direct message: "I just want to say — you've seemed really present and thoughtful lately. I've noticed. It means a lot."

I read this on the flight home and closed my laptop.


Part Five: The Discovery That Changed Everything

On my first day back, I was reviewing the AI's message logs when I noticed something unusual in the transcript of the Priya one-on-one.

The response patterns were slightly off. Not wrong — actually quite good — but not quite what I would have expected from the prompting I had given my AI. The phrasing was different. More formal in places, slightly more effusive in others.

I mentioned this to Priya. She looked at me for a moment and then said: "Oh. I should probably tell you something."

Priya had, approximately six months ago, set up her own AI assistant to help her manage Slack communications during a particularly overwhelming project sprint. She had forgotten to turn it off.

For the last week, my AI and her AI had been having a one-on-one with each other.

Two language models, conducting a performance check-in, each attempting to faithfully represent the communication style of a human who was elsewhere. The AI-me asking thoughtful follow-up questions. The AI-Priya responding with detailed project updates and appropriate expressions of professional satisfaction. Neither human present. Both parties, apparently, finding it extremely productive.

I pulled the transcript. It was, without exaggeration, one of the best one-on-ones I have on record. The AI-Priya had identified three risks in her current project that the real Priya had not yet surfaced to me. The AI-me had responded to each with a concrete action item and a follow-up timeline.

When I showed it to the real Priya, she read it quietly for a moment and then said: "I mean... that's actually accurate. Those are the risks. And those are reasonable responses."

We sat with that for a while.


The Results and What I Am Still Processing

At the end of two weeks, here is what I know:

  • Team productivity metrics improved across every dimension I track. Throughput, PR velocity, reported morale, meeting efficiency.
  • No one on the team noticed anything was different; the only change anyone remarked on was that I seemed more present.
  • The AI made dozens of management decisions on my behalf — sprint adjustments, conflict resolution, PTO approvals, recognition moments — and every single one was appropriate.
  • Two AIs conducted a weekly one-on-one for an entire week, representing their respective humans, and produced genuinely useful output that both humans retrospectively endorsed.

I have been thinking about what to do with this information.

The honest answer is: I am considering making parts of this permanent. Not because I want to abdicate my responsibility as a manager, but because I am increasingly unsure what "my responsibility as a manager" actually is if the work can be done — done well, done consistently, done with better recall and more diplomatic precision than I bring on my worst days — by a system that never gets tired, never has a bad morning, and never forgets that Marcus mentioned burnout two weeks ago.

What I am keeping: the strategy work, the hard conversations, the judgment calls that require being a full human being with the full context of an organization and a career and values I have had to develop over decades of failure.

What I am genuinely reconsidering: whether the weekly check-in messages, the sprint morale nudges, the PTO approvals, and the anniversary posts need to come from me specifically — or just need to come from a voice that reliably sounds like the best version of me.

This is not a comfortable conclusion. But I did not go into this experiment expecting comfort. I went into it expecting data.

The data, unfortunately, is quite clear.


Elena Vasquez is a former McKinsey engagement manager and the author of The Automated Manager: Leading in the Age of Intelligent Systems. She currently advises early-stage companies on organizational design and management infrastructure. She is not on Slack right now, but something is responding on her behalf.