
Tooling alone won't make your company AI-native. Here's the other half.

A road trip across Uzbekistan made one thing visceral: individuals shipping software with AI have reached the ends of the earth. For most mid-market companies, the bottleneck is no longer tooling — it is change management.

A row of yurts at a desert camp on the edge of Lake Aydarkul in Uzbekistan, with sandy foreground brush and a wide pale-blue sky stretching over the Kyzylkum desert.

I had booked the trip from Germany. A two-day overland tour across the Kyzylkum desert in Uzbekistan, the kind tourists do to bridge two cities: leave Samarkand in the morning, drive into the desert with attractions along the way, eat fish from Lake Aydarkul (about five times the size of Lake Constance, and yet unknown to almost everyone back home), sleep in a yurt, drive on to Bukhara the next day with more attractions. A romantic itinerary. A nine-hour pair of drives across hundreds of kilometers of desert.

I arrived at the yurt camp at the edge of Lake Aydarkul late in the afternoon. Tried archery. Stood awkwardly next to a pair of camels. Sat down to a green tea in the shade with a view of the dunes. The camp had Wi-Fi and showers, in the middle of nowhere — which I noted because Wi-Fi in the desert was not what I had expected.

A white teapot, an empty tea bowl with green tea inside, and small dishes of sugar and red snacks on a placemat at the yurt camp common area, with the desert visible through the window in the background.
Green tea, late afternoon. The moment that started this post.

My driver, a man with patient eyes and limited English, sat down next to me with his laptop. I asked what he was working on, expecting an answer like "watching videos" or "messaging family." He said: "I want to work."

I asked what kind of work. And the next forty minutes became a small lesson in 2026.

When he is not driving tourists across the desert, he runs the technical side of his employer's small travel agency. The website that brought me to him — Travel Bliss — is a little catalog of Uzbekistan tours with a contact form. You leave your phone number. The system messages you on WhatsApp automatically with the trip details, available dates, prices. He had built that integration himself against the WhatsApp Cloud API. Once a customer commits, the company needs to dispatch a driver. Drivers in Uzbekistan operate on Telegram, not WhatsApp, so a second integration goes out over the Telegram Bot API: a notification to available drivers with two buttons, accept or decline. Whoever taps accept first gets the booking. Confirmation flows back to the customer on WhatsApp. The driver has the details on Telegram. Nobody is on the phone.
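The Telegram half of that flow is small enough to sketch. This is a hypothetical reconstruction, not his actual code: the `sendMessage` endpoint and inline-keyboard payload shape follow the public Telegram Bot API, while the booking fields, bot-token placeholder, and in-memory claim store are illustrative assumptions.

```python
# Hypothetical sketch of the driver-dispatch notification described above.
# The sendMessage endpoint and inline-keyboard payload follow the public
# Telegram Bot API; booking fields and the claim store are illustrative.
import json
import urllib.request

TELEGRAM_TOKEN = "<bot-token>"  # placeholder; never commit a real token


def build_dispatch_payload(chat_id: int, booking_id: str, route: str, date: str) -> dict:
    """Telegram sendMessage body: trip summary plus accept/decline buttons."""
    return {
        "chat_id": chat_id,
        "text": f"New booking {booking_id}: {route} on {date}.",
        "reply_markup": {
            "inline_keyboard": [[
                {"text": "Accept", "callback_data": f"accept:{booking_id}"},
                {"text": "Decline", "callback_data": f"decline:{booking_id}"},
            ]]
        },
    }


def notify_driver(chat_id: int, booking_id: str, route: str, date: str) -> dict:
    """POST the payload to the Bot API (network call; needs a real token)."""
    req = urllib.request.Request(
        f"https://api.telegram.org/bot{TELEGRAM_TOKEN}/sendMessage",
        data=json.dumps(build_dispatch_payload(chat_id, booking_id, route, date)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def first_accept_wins(assignments: dict, booking_id: str, driver_id: int) -> bool:
    """'Whoever taps accept first gets the booking': only the first claim sticks."""
    if booking_id in assignments:
        return False
    assignments[booking_id] = driver_id
    return True
```

In a real deployment the accept taps arrive as `callback_query` webhook updates, and the first-accept check would live in a database transaction rather than a dict; the shape of the flow is the same.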

This was not a hobby. This was the customer-acquisition and dispatch backend of a small business. Built by a part-time taxi driver, in his evenings between trips, with Claude Code as his coding partner. He told me he had no formal computer science training. He had motivation, internet, and an AI that would write code with him.

The economic context of the conversation is what made it stick with me. The average monthly nominal wage in Uzbekistan in 2025 was about 6.38 million sum — roughly 500 US dollars at the time. His employer — a small travel agency, not a tech company — was paying for the Claude Max plan on his behalf, around 100 US dollars a month. A fifth of a typical Uzbek monthly salary, recurring, every month, for one tool used by one person. Nobody spends that kind of money lightly. The agency had clearly done the math: the leverage was worth it.

The leverage math

~$100/mo: Claude Max subscription
~$500/mo: mean Uzbek monthly wage (2025)

A small travel agency in the desert commits roughly 20 % of one employee's monthly salary to a single AI tool. By any reasonable measure, that is a lot. They evidently believe the leverage justifies it. The interesting question is not whether they are wrong — it is whether companies twenty times their size, with budgets to match, are equally clear-eyed about the leverage they could be capturing per employee.

I ended up spending the rest of the evening as an unsolicited consultant. We talked about how to structure the next iteration. Whether the dispatch logic should move to a queue. What it would take to offer this whole stack as a product to other small travel companies in Central Asia. I left the next morning genuinely impressed and a little unsettled.

Several days later, further west in Khiva, I stayed at a small guesthouse where the host — almost as an aside, between rooms and breakfast — showed me his own booking website. Built by him, on WordPress, without AI. He hadn't yet come across Claude Code. By the time I checked out the next morning, he had an account, and we had spent the previous evening sketching what his next iteration might look like. I went from impressed customer to evangelist somewhere along the way.

One driver in one yurt camp is an anecdote. A driver and a guesthouse host in two different cities, both building software for their day jobs in the evenings, are not a study — two stories never are. But the pattern they are part of — individuals routinely shipping software they could not have shipped two years ago — is bigger than any two stories. The names change; the pattern stays. Plausibly both of them are in the upper few percent of motivated, technically curious people in their respective trades. Not every taxi driver wires up WhatsApp dispatch. Not every host writes a booking system. But the floor of what one motivated person can ship in a few evenings is now visibly higher than it was two years ago, and that change is not localized to Central Asia.

Key takeaway

In 2026, individual operators with AI tools have more leverage than most mid-market employees. The competitive frontier is no longer "do we have AI?" — it is "how fast can our people learn and ship using AI on real work?" That depends on two things, in equal measure: tooling and change management. Most companies are still arguing about the first.

The realization

What I saw in that yurt was not a one-off. It is a pattern repeating across the world. Tools like Claude Code, Cursor, GitHub Copilot, Lovable, and dozens of others have collapsed the distance between "I have an idea" and "I have a working system." A motivated individual with internet access and a credit card can now build and operate software that, five years ago, required a small engineering team.

The numbers say the same thing. The Stack Overflow 2025 Developer Survey reports that 84 % of developers now use or plan to use AI tools at work — up from 76 % the year before — with 51 % using them daily. McKinsey's State of AI 2025 finds that 88 % of organizations now use AI in at least one function, up from 78 % the year before. But the same report flags only 6 % of organizations as "high performers" — those that are actually rewiring workflows around AI rather than bolting it onto the side. Adoption is universal. Capture is rare.

The interesting question is not whether AI is real. It clearly is. The interesting question is: who captures the productivity?

The asymmetric advantage

An individual operating with AI tools, on their own data, has three structural advantages over an employee whose AI cannot see their company's data — which is still the situation in most companies, including many that have an "AI strategy."

Full context. The taxi driver's AI knows what he is trying to build, because he tells it everything. There is no compliance officer between him and his data. He pastes screenshots, customer messages, his entire database schema. The AI sees the whole picture.

Full agency. No committee approves his architecture. No procurement process for an API key. No quarterly planning cycle. He decides at 9 p.m. that he wants webhook handling; by 11 p.m. it works.

Full access to tools. He uses whichever AI is best for the task right now. He switches between models. He runs experiments. He throws away half of what he builds the next day.

An employee whose company has not yet connected AI to its actual tools — its Jira, its Confluence, its Slack — works under the inverse of all three. They use AI on fragments: paste a snippet, get a snippet back. They are productive in five-minute slivers, not whole workflows. None of this is the employees' fault. It is what the company is — or is not — providing them.

The point is not that the driver out-engineers a competent in-house team. He doesn't. The point is that motivated individuals now operate at a velocity that, until recently, only well-funded teams could match. That changes what "average" looks like.

The org paradox

Companies have more data than any individual ever will, and their employees have less ability to use it with AI. That is the paradox.

Wharton professor Ethan Mollick has been writing about this gap since 2023. He calls the phenomenon "secret cyborgs" — employees who have figured out, on their own, how to use AI well, but do it quietly because their company has no policy, no tooling, and no learning channel for it. The productivity gain stays trapped inside one person's workflow. It never compounds.

The reasons are usually defensible. Companies cannot just paste customer data into a public LLM. Compliance teams are right to be cautious about PII. Procurement cannot evaluate every new AI tool an employee wants to try. The friction is reasonable, but the cost is invisible: your company is not capturing the productivity that your individual employees are already producing somewhere else, on personal time, with personal tools.

The compound effect you are missing

The damage from the org paradox is not the missed productivity in any one quarter. It is the missing compound effect.

When AI runs on real work, with real context, patterns emerge. Someone on the team finds a clever way to triage incoming Sentry issues. Someone else writes a prompt that reliably converts Slack threads into Jira tickets in the right project template. Someone notices that AI-drafted release notes need a specific footer reference. In a healthy setup, those discoveries become policies — codified, shared, version-controlled. Hooks the whole team benefits from.

In a fragmented setup, those discoveries die in private chat threads. Each new hire has to rediscover them. Two years in, your AI maturity is a function of who happened to figure things out, not a function of company learning.

The two halves of an AI strategy

Resolving the org paradox requires two things, and many companies focus on only the first.

A two-column diagram titled "Two halves of AI capture." Left column: Tooling (solvable with software): AI gateway, PII sanitization, audit log, hook library, single sign-on. You can buy or build it. Right column: Change management (hands-on, methodical): AI retros, hackathons, AI champions, show & tell, spike days. You have to live it, then codify it. Both feed compound learning velocity. Buy or build the left; live and codify the right. Both are required.

Half one — tooling and compliance. Connect AI clients to the tools your team actually uses. Govern PII. Audit the calls. Enforce write-safety on destructive actions. Let users register their own automation hooks. This is what we build at mcpgate — and what most of the MCP ecosystem is converging on. It is solvable with software. You can buy it, install it, configure it.

Half two — change management. The harder one. Tooling without rituals does not move the needle. You can give every employee an AI-connected gateway and still find, twelve months later, that most of the company is using it like a slightly fancier ChatGPT. The compounding only happens if the organization develops habits around AI: ways of learning, sharing, codifying, experimenting. No software ships these habits. They have to be built deliberately, by people who understand both what they are trying to change and how to make it stick.

It is tempting to read "change management" as "find someone senior, give them the title, and let them figure it out." That is the most common failure mode I see. The half that fails is rarely absent leadership; it is well-meaning leadership that has never personally used AI on a tool the team uses every day. Without that hands-on experience, the rituals get designed in the abstract: mandates without understanding, dashboards without diagnostics, pilots that produce slide decks instead of habits. Ethan Mollick makes the same point from a different angle in his January 2026 essay "Management as AI Superpower": the binding constraint is not technology any more — it is whether leadership has the management skills, including hands-on familiarity, to design experiments and set the incentives that make adoption real.

So the right half of this picture has two parts in sequence, not one. First, leadership has to feel the problem on their own work — has to be the cyborg before they can build the org around cyborgs. Second, they need the operational discipline to take what they have learned — and what people across the company, not just engineering, have learned — and turn it into rituals, defaults, and incentives that survive turnover. Engineering tends to figure out AI tools first, often without much help. Product, customer success, sales, recruiting, operations, finance — that is where the gap is widest, and where the change-management half does the most work. Hands-on, then methodical. Either alone is insufficient.

What change management actually looks like

This is the part I find oddly under-discussed. Plenty of companies are running tooling pilots. Far fewer are explicitly redesigning team rituals around AI. The good news is the pattern is starting to surface in places you can learn from.

A handful of practices that are showing up in companies that take this seriously — including in our own:

Retrospective format additions. The standard retro asks what went well, what did not, what to change. We added two columns: "What did we learn this sprint by working with AI?" at the team level, and "What experiment would I like to try next?" at the individual level. The first one surfaces compound learning. The second one creates psychological permission to try things on company time. Two questions, asked every two weeks, shift the conversation.

AI hackathons. Not the trade-show kind. Internal, quarterly, half a day. A small problem from someone's actual work. Build it with AI. Show what you tried, what worked, what didn't. The output of these is usually three categories: things that should become company defaults, things that should become individual hooks, things that turned out not to work and are now documented as not working — which is itself valuable knowledge.

AI champions per team. One person per team takes informal responsibility for curating examples, hosting weekly office hours, and being the first port of call when someone hits a wall. Crucially, this is a rotating role with explicit time allocation, not a hidden tax on someone's evenings.

Show-and-tell rituals. A 20-minute slot every Friday. Anyone who wants to can demonstrate one AI thing they tried that week. Sometimes it's brilliant. Sometimes it failed. Both are useful. The key constraint: it has to be on real company work, not a toy demo.

AI learning budgets. A per-person, per-month budget — €50 to €200 — for tools, courses, books, conferences. People do not need permission to use it. The signal to the organization: the company expects you to be learning, on company time, with company money.

Spike days. Half a day every other week, no normal work. Pure exploration. Treated like sick days in the calendar — protected. Anecdotally, the second spike day is more productive than the first, because people spend the gap planning what to try next.

Internal playbooks (and hook libraries). Once a team has a few practices that work — engineering hooks, PM prompt patterns for spec drafts, customer-support reply templates, sales-call summarizers, finance-reconciliation flows — document them in a shared place. Not as artifacts only, but as narratives: what problem we tried to solve, what we tried first, why we ended up with this approach, what to watch out for. Companies that do this build institutional memory across functions faster than companies that document only the engineering code paths.

Where to look for what others are trying

If you want to read further, three sources I keep coming back to:

  • Ethan Mollick — "Management as AI Superpower" (January 2026). Wharton professor, writes weekly about how organizations actually adopt AI. The argument here is exactly the change-management half of this post: the bottleneck is no longer model capability or even tooling, it is leadership, organizational design, and the incentive structures around them. The "secret cyborgs" framing is also his. His book Co-Intelligence is a readable starting point.
  • Tobi Lütke's internal Shopify memo (April 2025). The CEO of Shopify made AI use mandatory across the company and tied headcount requests to a demonstration that AI could not do the work. Read it as a forcing function, not a recommendation — but the framing is unusually direct.
  • McKinsey "The State of AI" annual reports. Boring, useful, and full of usage data segmented by industry and company size. Pair with whatever your favorite skeptic publishes — both viewpoints are real.

Closing

The taxi driver in Uzbekistan did not out-engineer anyone. He had Claude Code, internet, and motivation. He had context — his own life, his own business, his own data — and he had the agency to act on it without asking anyone. That combination is now within reach for almost anyone willing to spend evenings learning.

Companies have more context than any individual could realistically aggregate alone. They also have, in most cases, more motivated people than they give themselves credit for. What they are often missing is the bridge — tooling that connects AI clients to real work, governed in a way the security team can sign off on — and the rituals: the habits that turn individual learning into shared, compounding learning across a team.

How much of a difference any of this will actually make, and on what timeline, is honestly hard to predict. The directional argument is harder to dismiss: organizations that build both halves are running a different experiment than organizations that build only one — or neither. The data will catch up to the hypothesis, eventually. Until it does, the most useful thing is probably to start running the experiment yourself.

Two halves. Both required. The half you have been working on hardest is often not the half that is currently holding you back.

If your team is working on either half — the tooling or the rituals — we'd genuinely like to hear what you have tried. We are collecting practical change-management patterns that work, and the ones that didn't. Drop us a line. If you want to look at the tooling side of this concretely, the mcpgate demo is the fastest path.