Dear Operator

Breakthroughs Aren't Sensible

The best startup ideas probably sounded insane when people first heard them. Imagine someone telling you people would pay to sleep in your bed when you're not home, or that you could hop into a stranger's car for a ride. Breakthroughs rarely begin with the most sensible idea.

But something subtle may be changing as founders increasingly rely on AI to generate ideas, strategies, and decisions. Modern AI systems like LLMs are extremely good at producing answers that are coherent, reasonable, and grounded in patterns that have worked before.

And that creates a new tension, because breakthroughs rarely start with the most sensible idea.

AI systems generate sensible answers

Modern AI systems like LLMs are probabilistic. They generate answers based on patterns learned from enormous datasets, producing outputs that are statistically plausible given what they have seen before. You can push them toward more unusual answers by sampling more widely, but those answers still come from the same underlying distribution.
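The sampling mechanics above can be sketched in a few lines. This is a toy illustration, not a real model: the logits are made up, and `temperature_probs` / `sample_with_temperature` are hypothetical helper names. Raising the temperature flattens the distribution so unusual options get picked more often, but nothing outside the learned candidates ever appears:

```python
import math
import random

def temperature_probs(logits, temperature=1.0):
    """Softmax over temperature-scaled logits.

    Higher temperature flattens the distribution; lower
    temperature concentrates it on the most likely option.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample an index from the temperature-scaled distribution.

    Whatever the temperature, the sample always comes from the
    same fixed set of candidates -- the model's learned options.
    """
    probs = temperature_probs(logits, temperature)
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy "next idea" logits: one sensible option dominates.
idea_logits = [4.0, 1.0, 0.5]  # sensible, unusual, very unusual
```

Even at a high temperature, a fourth, out-of-distribution idea can never be sampled; temperature only reweights what the model already knows.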

LLMs work a little like a weather forecast. A forecast model analyzes enormous amounts of historical data and predicts the range of outcomes that are most likely.

If tomorrow's forecast says cloudy, you might get clouds, maybe some drizzle, or even a short storm. But it would be very unlikely for a hurricane to suddenly appear.

That's because a hurricane sits far outside the probable range of possibilities.

Meteorologists don't predict the future with certainty. They analyze enormous amounts of historical data and current conditions, then estimate the range of probable outcomes. AI works in a similar way. It generates answers based on patterns it has learned from enormous amounts of past data. That makes it extremely good at producing ideas that are probable, sensible, and coherent.

Ask it for a marketing campaign, a product roadmap, or a growth strategy and the answers will usually be sensible. They resemble the kinds of ideas that have worked before and they fit known patterns. That makes AI an extraordinary tool for improving execution.

But breakthroughs rarely begin with the most probable idea, no matter how sensible.

And sensible ideas all start to look the same

As more founders rely on AI systems to generate ideas and decisions, something subtle may begin to happen. Companies' strategies will start to converge. If thousands of startups are asking similar models similar questions, the answers will often cluster around the same set of sensible possibilities.

In many ways, that's good. Execution improves, best practices spread faster, and teams can move quickly without reinventing everything from scratch. But iconic companies are not built on systematic execution alone.

Great companies have secret sauce in all parts of the organization. It shows up not just in what things get done but also how things get done.

Secret sauce lives in improbable places

Great companies rarely win only through sensible execution. They have secret sauce: unique ideas in places that don't initially look strategic.

Sometimes it lives in the product. Sometimes it lives in the culture. Often it lives in the way a company decides to do something slightly differently from everyone else.

Take Stripe. Payments infrastructure had existed for decades. Companies in the space typically competed on contracts, pricing, and enterprise relationships. But Stripe made a different bet: they obsessed over the developer experience. Their APIs were clean. Their documentation was beautiful. Integrating Stripe felt dramatically easier than the alternatives. At the time, this focus might have looked aesthetic or even unnecessary. But it became a massive strategic advantage.

Developers began pulling Stripe into companies organically. Startups adopted Stripe because it felt like the easiest way to build payments into software. The breakthrough wasn't a radically new product (not that the product isn't awesome). It was a different way of doing something that already existed.

Secret sauce comes from the choices that feel slightly unusual at first but compound over time.

Breakthroughs rarely begin with the sensible idea

Most breakthrough ideas don't start as the most reasonable option. They often begin as something intuitive or maybe even odd. A signal that doesn't quite fit the prevailing pattern. A behavior that looks small or irrational at first but hints at something larger. These kinds of ideas start in one of a few places.

An anomaly

Sometimes a breakthrough begins with something that doesn't fit the data. An atypical customer behavior. A product being used in an unexpected way. A result that contradicts the prevailing assumption about how something works.

Slack emerged when the founders noticed that the internal messaging tool they built while developing a failed game was far more valuable than the game itself.

Most organizations treat anomalies as noise. But occasionally they reveal something important. Many new product categories begin this way. They start when someone notices behavior that looks unusual and asks why.

A collision between ideas

Breakthroughs often appear when ideas from different domains collide. These collisions can initially feel messy or niche. But when the pieces fit together in the right way, they unlock entirely new ways of working.

Figma emerged from the unlikely collision of professional design software and multiplayer collaboration in the browser.

Many of the most important companies of the past decade emerged from these kinds of unexpected combinations.

An intuition about a pattern that hasn't fully formed yet

Sometimes the signal is not data at all; it's intuition. A founder senses that the world is changing in a way that isn't fully visible yet. Maybe the technology is shifting, behaviors are evolving, or a constraint that once mattered no longer does.

Shopify began with the intuition that millions of small merchants would want to run their own online stores, not just sell through large marketplaces.

Intuitive ideas often look speculative at first. But founders who act on them early sometimes discover the next wave before it becomes obvious to everyone else.

An atypical way of operating

Breakthroughs are not always product ideas. Sometimes they emerge from unusual choices about how a company runs. It might be treating customers differently, building a different developer experience, or structuring teams and work in unusual ways.

Atlassian built a multibillion-dollar enterprise software company with no traditional sales team, relying instead on product-led growth (PLG) long before the term existed.

At first these choices may look inefficient or unconventional. Over time they become the company's secret sauce.

The common thread is that none of these ideas begin as the most sensible option. They begin as signals at the edges of what people expect. And those edges are often where the most interesting opportunities live.

AI makes execution faster and ideas more similar

Execution has historically been expensive and valuable inside organizations. It takes coordination, persistence, and hard work to move projects forward. When products ship, campaigns launch, and metrics move, everyone feels it. That momentum creates a kind of dopamine hit for founders and teams.

Today a lot of execution can be done by AI. It can generate campaigns, write code, analyze customer feedback, reply to customer questions and more. Things that once took days or weeks can now happen in hours.

Execution is becoming easier and in many cases it's becoming table stakes. Yet many organizations are still wired to reward the feeling of faster execution.

The risk is that we mistake that momentum for progress while the underlying ideas remain predictable. And if thousands of founders are asking similar AI systems similar questions, the answers will often look surprisingly alike. The models were trained on the same data, they recognize similar patterns and they generate the ideas that are most likely to work.

If you're addicted to execution while using AI, you may end up running your company on the same outputs as every other startup using AI effectively.

  • Strategies start to resemble each other
  • Marketing feels formulaic
  • Product ideas converge around the same set of sensible possibilities

Everyone is moving faster but all in the same direction. Execution accelerates and differentiation becomes rare.

Execution is becoming table stakes

Execution still matters, but AI is changing the equation. Many of the things that once required entire teams can now be done faster and more easily with AI. In many cases, execution is becoming table stakes.

And because AI systems tend to generate similar sensible answers, AI-augmented execution can quietly pull organizations toward similar strategies. Everyone is moving faster and all in the same direction. This changes where founders need to focus their attention.

Build an idea engine, not just an execution engine

If execution is becoming easier and more predictable, founders may need to rethink what they reward inside their organizations. Execution still matters, but in an AI-augmented world, it's no longer the primary differentiator. Many companies can now ship faster, launch campaigns quickly, and move projects forward with similar tools.

When every team has access to the same execution engine, advantage may come from something else: the ideas.

That means founders may need to build organizations that surface unusual signals earlier and explore them more seriously. Here are some ways to do that.

Reset what you reward

Most organizations reward output: shipping projects, launching campaigns, moving roadmaps forward. Those things matter, but they are increasingly the baseline. If everyone can execute quickly, execution alone won't differentiate your company.

Founders should also reward the earlier signals that lead to breakthroughs.

  • Recognize people who surface unusual observations about customers or markets
  • Encourage teams to raise half-formed ideas without needing a full plan
  • Celebrate questions that challenge assumptions, not just projects that ship

If people believe they will only be rewarded for sensible execution, they will stop raising unusual ideas. And unusual ideas are where many breakthroughs begin.

Create rituals for anomalies

Some of the most valuable signals inside a company are anomalies: a customer using the product in a novel way, a growth channel behaving differently than expected, or a pattern that contradicts prevailing assumptions. Don't treat these as noise. Instead, create rituals that explore them.

  • Invite conversation about strange data points
  • Create spaces where teams bring forward unusual customer stories
  • Ask not just "how do we fix this?" but "what might this mean?"

Many anomalies lead nowhere, but occasionally one reveals an entirely new opportunity.

Encourage collisions between ideas

Breakthroughs often appear when ideas from different domains collide. Founders can design environments where those collisions are more likely and where ideas move freely across functional boundaries.

  • Bring people from different functions into the same conversations
  • Create channels where half-formed ideas can be shared freely
  • Host brainstorming sessions that explore emerging technologies or market shifts

Ideas are messy, and sometimes they feel like they are going nowhere. Be patient and let people explore, because occasionally unrelated insights combine into something powerful.

Reward intuition and early signals

Not every important idea begins with data. Sometimes someone senses that something in the world is changing. A technology becoming possible. A shift in customer behavior. A constraint that no longer matters. These signals often arrive before the data fully confirms them.

  • Create room for people to explore those intuitions
  • Encourage teams to raise hypotheses before the evidence is complete
  • Ask "what might be changing?" not just "what do the metrics say?"
  • Create space for discussing emerging patterns in technology and markets

Many intuitions will prove wrong, but occasionally one will point toward the future before it becomes obvious.

Protect weird experiments

Breakthrough ideas often begin as experiments that don't look efficient. A side project based on a hunch might yield gold. A strange prototype or an unusual approach to solving a problem might turn into a game-changing move.

Forward-thinking founders and operators protect this kind of work.

  • Encourage small experiments that explore unusual ideas
  • Allow time for side projects that come from curiosity rather than roadmaps
  • Protect early experiments before they are forced into traditional metrics

Your next big idea might come from someone's mad science side quest. Let that happen.

Win the era by creating room for improbable ideas

AI is extremely good at exploring the probable. It generates sensible strategies based on patterns that already exist and that's super valuable. But as AI becomes part of how startups plan, build, and execute, something subtle may begin to happen.

Ideas drift toward the center of the distribution and innovation suffers. Founders and operators who want to win will be more intentional about creating space for the kinds of thinking that lead to breakthroughs.

That means building cultures that notice anomalies, explore weird ideas, and create rituals and space for experimentation. AI will help everyone move faster, but breakthroughs rarely begin with the most probable or most sensible idea.

We're All Operators

For decades, software has done more than make our lives easier. It has created one of the largest waves of job creation in the modern economy. We've needed engineers to build the systems, knowledge workers inside companies to run them, and entire ecosystems to implement, service, and support them.

Over the last thirty years, that expansion created tens of millions of jobs globally. And more software created more knowledge work, and more knowledge work created more jobs. But that's about to change.

AI is now taking on knowledge work

AI systems are beginning to perform tasks that were once the domain of knowledge workers. Writing, research, analysis, coding, summarization, and customer interaction are increasingly handled by AI. And to say this is progressing quickly is an understatement.

And it may soon take on much more

Anthropic's recent report examines how AI is being used across the labor market and what kinds of work it may be able to perform over time. The findings are striking.

In some occupations like programming and customer support, the share of tasks that AI could theoretically assist with or perform climbs above 70%.

It is easy to see why numbers like this capture attention. Many people read them and immediately wonder what this will mean for their careers. Some jobs will likely disappear. But many will remain.

The real question is who remains

If AI begins to handle a large share of knowledge work, the more interesting question becomes which people will remain valuable inside companies. And why.

We're all operators now

As AI begins to do more of the work, people's roles start to shift. They spend less time doing individual tasks and more time overseeing the AI systems that produce the work. In that sense, we will all become operators.

Not operators in the traditional sense of an operations department, but operators of AI systems. And the operators who shape this next era will tend to share a certain blend of characteristics: qualities that define the people who build the next generation of technology.

These qualities are not new. Systems thinking, founder thinking, and craft have always mattered inside great companies.

What is changing is who needs them. As more work becomes automated and people spend more time operating AI systems, these qualities start to matter across many roles. Founder thinking is no longer limited to founders. Craft is not only the domain of designers. Systems thinking becomes valuable far beyond operations or leadership roles.

In the AI era, many more people inside a company will need to develop this blend of capabilities. Let's look at them.

Systems thinking

AI is extremely good at performing individual tasks. But those tools still need orchestration. Great operators are systems thinkers. They design workflows, connect tools, and understand how the system should evolve. In the AI era, leverage comes less from doing the work and more from designing the system that produces it.

Most companies have rewarded people who:

  • Focus on a single function or specialty
  • Stay within clearly defined roles
  • Execute tasks assigned by a manager
  • Optimize their part of the organization rather than the whole system

The operators shaping this next era will:

  • Understand how different functions connect across the business
  • Design workflows instead of simply executing tasks
  • Identify constraints in the system and improve them
  • Think about outcomes across the entire company

Founder thinking

Founder thinking is expansive. It pushes beyond the boundaries of what the company already does. It asks what might be possible, even when the answer is not obvious or the path forward is uncertain. Great founders often pursue ideas that initially look improbable. They test assumptions, question the limits of the market, and explore opportunities others overlook.

For many years, companies have rewarded people who:

  • Execute the strategy leadership defines
  • Stay focused on their specific role or department
  • Improve existing processes rather than question them
  • Wait for direction

The operators shaping this next era will:

  • Look for opportunities others have not yet noticed
  • Question assumptions about how the company should grow
  • Start new initiatives rather than waiting to be assigned one
  • Imagine new products, markets, or capabilities

AI systems are probabilistic by nature. They are extremely good at recognizing patterns in what already exists. Founder thinking moves in the opposite direction. It pushes toward ideas that do not yet fit the existing pattern. In the AI era, the people who stand out will often be the ones willing to explore what seems unlikely.

Craft thinking

Craft is the application of taste, judgment, and experience to create something that resonates with people. It shapes how customers experience a product, a brand, or a company. As more work becomes automated, someone inside the company still has to think about the human side of the equation. Craft is important because not everything that matters can be measured or optimized. Some things have to be felt.

For many years, companies have rewarded people who:

  • Focus on producing more output as quickly as possible
  • Prioritize efficiency and scale above all else
  • Follow established templates or formulas
  • Treat communication and design as secondary concerns

The operators shaping this next era will:

  • Apply taste and judgment to how things are built
  • Care deeply about clarity, design, and experience
  • Shape how the company is understood by the people it serves
  • Bring a human sensibility to work that increasingly involves machines

AI can generate enormous amounts of content and output. But it does not possess taste. In the AI era, the people who stand out will often be the ones who shape meaning, experience, and quality in a world increasingly filled with automated work.

Why this matters now

Right now, much of the conversation around AI work focuses on tools and workflows.

People share clever prompts, creative automations, and impressive demonstrations of what AI systems can do. Entire communities have formed around mastering these techniques. And that work is valuable. Learning how to use these systems well is an important skill. But it is only the beginning.

AI capabilities are improving extremely quickly. Many of the techniques that feel novel today will soon become common practice. Building workflows and orchestrating tools will move from being a rare skill to commonplace.

The advantage will not come from simply knowing how to use AI (that will be table stakes).

The operators who stand out in this environment will be the ones who combine technical fluency with deeper capabilities. The ability to design systems, to imagine new opportunities, and to apply taste and judgment to what gets built.

For operators, this means something important. Mastering the tools matters, but it is not the whole game. The real leverage comes from developing the qualities that allow you to shape the systems those tools create.

For founders, the implication is just as significant.

The people who will help build the next generation of technology are not only the ones who can use AI. They are the ones who think like systems designers, founders, and craftspeople. Those are the people worth hiring and holding onto.

Because in the AI era, we are all becoming operators.

The Agent Era Advantage

For decades, software was built for humans.

People logged in, clicked through workflows, and decided when to send the email, approve the expense, update the record, or close the deal.

Those same people noticed when something felt off and fixed it. They handled ambiguous decisions with judgment because their job was not simply to operate software. It was to drive responsible outcomes.

We are now entering a world where agents will begin to use software, and most software was not built for that. Agents don't use software the way humans do, and that puts new demands on how software is built.

We are entering software's third era

Software is moving through three distinct eras. Each one changes how work moves forward.

The Human-Led Era

In the first era, humans operated the system. Software stored data, enforced workflows, and surfaced information. People interpreted context, made decisions, and moved work forward. The system provided structure. The human carried responsibility.

The AI-Assisted Era

In the second era, humans operated the system with AI assistance. Software generated drafts, surfaced recommendations, and made predictions. People reviewed the output, applied judgment, and decided what actually happened. AI increased leverage. The human still carried responsibility.

The Agent-Led Era

We are now entering the third era. Agents increasingly operate the system. Software no longer waits for a person to click. Agents update records, trigger workflows, route decisions, and take action across tools.

Agents are driving work

When humans were the ones moving work forward, software could rely on them to catch mistakes, resolve ambiguity, and slow down when something felt risky. Many important decisions live in people's heads and are never written down.

As agents increasingly execute directly, there may be no human reviewing each step. What used to be absorbed by people now has to be made explicit.

And that's challenging our design assumptions

Most software was designed for humans. Humans are responsible for their decisions. They know when something feels off and bring judgment to ambiguity.

Agents operate differently. They can reason over information and improve rapidly, but they execute within the instructions, data, and constraints they are given. Systems built for humans rely on human judgment to fill the gaps. In the agent era, those gaps have to be handled by design.

Software built for agents has different needs

Software built for agents doesn't look or behave like software for humans. Agents don't care about a beautiful UI. They care about access, information, and rules.

At a minimum, agent software needs:

Capability

The AI must be able to interpret instructions, make decisions, and produce useful results.

Context

The agent must have access to the right information to do the job well. And that information must be unified and consistent enough for agents to reason over safely.

Connectivity

The agent must be able to connect to the system and take real action.

Governance

The system must define what the agent is allowed to do, how its actions are monitored, and when a human steps in.

The first three expand what agents can do. Right now, most innovation is accelerating those layers. Governance is not always as central.

And governance is critical

Now, let's step back. One human, even working a 996 schedule (9am to 9pm, six days a week), can only do so much. Agents can run 24/7 and act at a scale and pace no individual could. And scale changes the stakes.

When humans drive execution, responsibility is personal and visible. When agents drive execution, responsibility has to be structured into the system itself.

Without strong governance:

  • An agent can inherit permissions that were designed for human discretion
  • Actions can compound quickly across systems
  • Decisions can become difficult to trace
  • Accountability can blur when something goes wrong

None of this requires malice. It only requires speed and autonomy. Governance defines thresholds, encodes policy, and determines when a human needs to step in.

Models are becoming more capable every day. They can reason over complex information, learn regulations, and infer patterns from internal data. However, intelligence alone does not determine authority. It does not define who is allowed to act, what level of risk is acceptable, when escalation is required, or how decisions are audited. Those are business decisions.
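As a sketch of what encoding those business decisions into the system might look like, here is a minimal, hypothetical gate for agent actions. The names (`Action`, `Decision`, `govern`) and the risk thresholds are illustrative assumptions, not any real framework's API:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    risk: float        # 0.0 (harmless) .. 1.0 (severe) -- assigned by policy
    reversible: bool

@dataclass
class Decision:
    allowed: bool
    needs_human: bool
    reason: str        # every decision carries a reason, keeping a trail auditable

def govern(action: Action, auto_limit: float = 0.3,
           escalate_limit: float = 0.7) -> Decision:
    """Gate an agent action against explicit policy thresholds.

    Low-risk, reversible actions run autonomously; medium-risk
    actions escalate to a human; anything above the hard limit
    is refused outright.
    """
    if action.risk <= auto_limit and action.reversible:
        return Decision(True, False, f"auto-approved: {action.name}")
    if action.risk <= escalate_limit:
        return Decision(False, True, f"escalated to human: {action.name}")
    return Decision(False, False, f"refused by policy: {action.name}")
```

The thresholds here are business decisions, not model outputs: a more capable model does not change where `auto_limit` sits, only how well the agent performs within it.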

Capability makes agents impressive (dazzling even), but governance makes them trustworthy.

The unfair advantage

When humans ran the system, judgment came built-in. When agents run the system, judgment has to be architected and governance becomes foundational.

Founders who see this early won't treat governance as a checklist item on a roadmap. They will treat it as the backbone of how their product works.

In the agent era, that's an unfair advantage.

Inside the Startup AI Org Chart

You can learn a lot about a company by looking at its org chart. Not just the official version, but the one revealed through who they choose to hire.

Recently I spent time looking at the open roles across a group of well-backed AI startups operating at the frontier of this shift, including Cursor, Decagon, ElevenLabs, Harvey, LangChain, Lovable, and others. The sample is directional rather than exhaustive, but it was enough to spot clear patterns.

Across these AI startups, the hiring patterns pointed to a different way of structuring the company. Hiring reveals what a company is investing in and where it expects value to be created. The companies building at the leading edge of AI are signaling how this technology is being built and delivered.

Hiring reveals your real priorities

Hiring is one of the clearest signals a company sends. The roles you open and the capabilities you fund show what you are actually building toward. Over time, those choices shape the organization more than any slide or memo. AI roles are showing up beyond engineering.

AI-specific roles appear across product and design, go-to-market teams including sales and support, and even in legal, operations, and finance.

Across these roles, the word "agents" appears repeatedly. It signals a shift toward agentic systems as a core product primitive. That shift is visible not just in the technology, but in how these companies are organized.

Engineering and product

Software teams are accustomed to shipping deterministic features. AI products introduce probabilistic systems. Agents, applied AI, orchestration layers, and evaluation frameworks introduce systems that behave rather than simply execute. They require explicit product and engineering ownership.

  • Applied AI Engineer
  • Engineering Manager, Agent Orchestration
  • Software Engineer, Agents
  • Product Manager, Agent Platform

Go-to-market

Buying the product is not the same as getting value from it. In AI use cases, that gap is often wider. AI products require configuration, integration, tuning, and iteration inside customer environments. That reality shows up in titles such as Forward Deployed Engineer and GTM Manager, AI Deployment, where implementation depth is built directly into the revenue motion.

  • GTM Manager, AI Deployment
  • Forward Deployed Engineer
  • Field Engineer
  • Customer Engineer, Agent Builder

General and administrative

AI introduces model risk, regulatory exposure, and compute-intensive cost structures. At companies building at the frontier, those risks are tightly coupled to the product, which means legal, operations, and finance teams must be as well.

  • AI Ops Engineer, Finance
  • Privacy and AI Counsel
  • AI Public Policy Manager

Hiring cannot run on autopilot

What these AI startups show is that even core roles evolve when the underlying technology shifts. When the product changes, the market changes with it. And when the market changes, the role should change as well.

If you are not periodically auditing your core roles to ensure they reflect the current market, you risk hiring for yesterday's problems.

AI startups are building different companies

AI is not just another feature. It reshapes how a company is built. The companies building at the leading edge are not only building different products. They are building different companies. Their hiring patterns make that visible.

If you want an inside view of where AI is going, look at how these companies are structuring themselves. The org chart tells you what the roadmap does not.

Building an Equity Pitch

In startups, equity is often discussed as if everyone already knows how it works. Candidates nod along, and companies move forward without pausing to check whether there is genuine shared understanding. But that quiet assumption does both sides a disservice.

Equity isn't a course we all take in school. Most of us piece it together over years, across different companies and funding rounds. And when equity isn't understood, it doesn't just create confusion. It creates real business consequences.

Startups lose candidates. Employees undervalue what they have. Retention becomes more fragile than it needs to be.

Startup equity isn't widely understood

Most people are never formally taught how equity works. They learn it gradually, often after signing offers.

When equity is misunderstood, the business bears the cost

Candidates discount equity they don't understand, negotiate in the dark, or regret decisions later. That friction affects hiring and retention.

That's why every startup needs a disciplined equity pitch

A disciplined equity pitch replaces improvisation with structure.

A well-designed equity pitch does three things

  1. Explains the mechanics (e.g., equity types, vesting, exercise mechanics)
  2. Provides company context (e.g., stage, valuation, revenue trajectory)
  3. Highlights competitive design choices (e.g., extended post-termination exercise periods (PTEP), vesting structure variations, liquidity programs)

It must be built with guardrails

Use NDAs when sharing sensitive information. Avoid projections, promises, or speculative math.

The delivery must be trained

Run mock sessions. Teach presenters how to handle tough questions and when to defer to Finance or Legal.

The knowledge must be documented

Maintain a living equity FAQ. Capture real candidate and employee questions and keep the language consistent.

If you want people to value equity, you have to teach it

When equity is taught clearly and consistently, negotiations improve, retention stabilizes, and the company signals maturity and partnership.

Dear Operator

AI is reshaping more than products. It is reshaping how companies are built and scaled. What once felt impossible is being built, and the companies emerging from this period will look fundamentally different from those that came before.

At the frontier of this new and uncharted era, the nature of work inside a company begins to change as well. The people closest to this shift are no longer simply practitioners of functions like marketing, finance, or sales. They are operators of the systems that run those functions.

We are all operators.

Dear Operator is written from inside the making of this era, alongside founders and operators building venture-backed AI companies. If you are inside an AI startup and feel the magnitude of what is unfolding, this is a place to think it through.