Henrick F.
April 29, 2026 · GTM

AI Should Never Choose For You: A Buyer-Builder Case Against Agentic Hype

Most B2B buyers in 2026 don’t want AI to choose for them. They want help choosing better. Here’s why I bought AI as augmentation at Bedu, why I build it as augmentation at MatchWise, and what that means for any AI-native startup’s GTM.

TL;DR

Most B2B buyers in 2026 don’t want AI to choose for them. They want it to help them choose better. The fatigue is real. Buyers have sat through enough agentic demos to know that autonomy in the demo doesn’t mean reliability in production. The companies winning in 2026 aren’t selling AI; they’re selling outcomes that AI happens to power. My GTM thesis after sitting on both sides of the buy/build decision: AI should never choose for you. It should help you choose better. That’s not just a product principle. It’s the positioning that survives the next two years of category fatigue.


I sold credit lines and corporate cards across LATAM as Head of Sales at Jeeves through the early 2020s. After I left, the team started building crypto rails for international payments, infrastructure that grew out of research I’d started. None of it was sold as AI. As Director of Strategy & Operations at Bedu (Lottus Education), I owned a P&L that included six- and seven-figure SaaS contracts. Now, building MatchWise (an AI-native applicant tracking system), I’m on the build side of the same conversation.

Three vantage points, one converging conclusion: B2B buyers in 2026 don’t want AI in the driver’s seat. They want a tool. They want help. They want their team to make better calls faster. They do not want to outsource judgment to a model.

This is the GTM argument that follows.

The fatigue is real

The B2B buyers I talk to in 2026 have the same reflex. The vendor opens the demo with “we use AI to…” and the buyer’s eyes glaze over. Not because they don’t believe AI works. They do. They use ChatGPT every day, they’ve seen Claude write production code, they know the technology is real. The fatigue isn’t about the technology. It’s about three specific things the past three years have produced.

The first is pricing pushback. Microsoft Copilot’s $30/user/month launch in late 2023 priced AI as if it were a 30%+ productivity multiplier on top of a Microsoft 365 seat. The buyers I know spent 2024-25 measuring that promise and decided it didn’t pencil out. Marginal productivity gains existed, but they didn’t justify a 30-50% cost lift on already-expensive seats. By 2026, “Copilot expansion” is a discount-negotiation lever, not a budget line.

The second is the agentic-demo gap. Every quarter from 2024 onward produced another autonomous-agent launch: browsing agents, coding agents, sales agents. The demos are impressive: the model books the meeting, writes the email, deploys the code. Production deployments are consistently less impressive: the agent stalls partway through a workflow, asks for help, and the buyer’s team spends more time auditing the agent’s work than they would have spent doing it. Buyers learned that demo autonomy and production reliability are different categories.

The third is category fatigue. When every vendor at the trade show calls itself “AI-powered,” the label stops differentiating. The default-skeptical buyer in 2026, the one asking “okay, but what does it actually do?”, punishes vendors who lead with AI and rewards vendors who lead with outcomes.

This is the buyer environment any AI-native B2B startup is selling into in 2026. It’s the environment that makes “we do X faster” beat “we use AI to do X” almost every time.

How I bought AI at Bedu

When I owned strategy and operations at Bedu, I gave four corporate OpenAI seats to four people: strategy and operations analysts and managers in my area. Not the broader org. Not “AI for everyone.” Four seats, four operators.

The reason was specific. Those four people were the bottleneck on cross-functional problem-solving. They were the ones called in when a database query was slow, when a vendor integration was breaking, when a finance team needed an ad-hoc report. The work was 40% diagnosis and 60% glue: writing queries, editing legacy code, drafting one-off scripts, debugging app errors.

AI as an assistant fit that work shape exactly. With ChatGPT in their hands, the four operators could write database queries faster, edit pieces of legacy code without escalating to engineering, and debug app issues by pasting the error and getting a workable hypothesis. Build, test, and ship cycles for the operations side of the business got materially faster.

None of it replaced anyone. Engineering still owned engineering. Finance still owned finance. The four seats meant the strategy/ops team could stop blocking on engineering for five-line problems.

This is the buyer-side proof of the thesis. AI worked at Bedu because it stayed in the assistant role. The operators kept the decisions. The model accelerated the work between decisions.

If I’d bought autonomous agents that “independently solved cross-functional ops problems,” I’d have spent the next year auditing what the agents did and unwinding the half-finished decisions they’d made on their own authority. The agency boundary was the entire point.

How I build AI at MatchWise

The same boundary is the entire point on the build side.

MatchWise is an AI-native ATS, applicant tracking software for recruiters. The AI does the work everyone hates: reading 400 CVs per requisition, scoring them against the job description, summarizing each candidate, extracting structured data. Screening time per requisition drops by 80-90%.

But here’s the design choice that defines the product: the recruiter never talks to the AI. They don’t write prompts. They don’t see model output formatted as chat. They don’t audit the model’s reasoning chain.

What they see is a normal recruiting workflow with cleaner inputs. A scored shortlist instead of a pile of CVs. A structured candidate profile instead of unstructured prose. A summary block instead of a four-page resume. The AI is the engine; the UI is just a recruiter’s tool.
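The artifact the recruiter sees can be sketched in a few lines. This is a hypothetical data shape, not MatchWise’s actual schema: the model’s scores and summaries get folded into an ordinary ranked list that a human reviews.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateProfile:
    """Structured output extracted from one CV (hypothetical fields)."""
    name: str
    score: float  # model's fit score against the job description, 0-100
    summary: str  # short recap replacing the four-page resume
    skills: list[str] = field(default_factory=list)

def shortlist(profiles: list[CandidateProfile],
              top_n: int = 10) -> list[CandidateProfile]:
    """Rank candidates by score and return the top N for human review.
    The system orders the list; it never advances or rejects anyone."""
    return sorted(profiles, key=lambda p: p.score, reverse=True)[:top_n]
```

The point of the sketch is the boundary: the model’s output ends at a ranked list, and every action past that list belongs to the recruiter.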

This is a deliberate product choice. The recruiter is the buyer; the recruiter has a job to do; the AI exists to make that job easier, not to take it over. Hiring decisions belong to humans, full stop. The model contributes by removing the busywork between decisions.

This is the build-side proof of the same thesis. AI should never choose for you. It should help you choose better.

The Jeeves arc

Here’s a third data point from a different angle.

When I left Jeeves, the team’s next major investment was crypto rails for international payments: infrastructure under research I’d started. Not AI agents that approved loans. Not autonomous credit underwriting. Infrastructure.

This is worth pausing on, because it’s a deliberate choice by a serious B2B fintech in 2025-26 about where the durable value sits. Jeeves had every option to bolt AI onto the pitch. The team is sophisticated; the technology is available; the venture pressure to “be an AI company” was real across all of fintech.

They didn’t. They built rails. The pitch to corporate clients stayed about cost, speed, and reliability of cross-border payments, not about AI in the underwriting layer.

The general pattern: the B2B fintech teams I watch making bets in 2025-26 are betting on infrastructure layers (rails, settlement, compliance plumbing) and on workflow layers (better tools for ops teams). They are not betting on agentic AI as the product wedge. The buyers don’t want it; the seller margin doesn’t justify the autonomy risk; the tech doesn’t reliably ship at production grade for finance workflows.

Three different vantage points (Bedu buy-side, MatchWise build-side, Jeeves arc) converging on the same conclusion. That’s not coincidence. That’s a market signal.

The agency principle

The thesis under all three data points is one sentence:

AI should never choose for you. It should help you choose better.

That’s the agency boundary. It’s the line buyers want held. Crossing it costs you the deal in 2026, even when the underlying technology is good.

Three concrete tests for whether your product is on the right side of the line:

  1. Who owns the consequence of the decision? If the AI’s output triggers an action that affects revenue, headcount, customer experience, or money movement, and a human can’t realistically audit every action before it ships, you’ve crossed the line. The buyer is being asked to delegate accountability to a system they can’t fully observe.

  2. What happens if the AI is confidently wrong? Confident-wrong is the dangerous failure mode for assistive AI; for agentic AI it’s the same failure mode plus an action that has already been taken. The cost of a wrong recommendation is small. The cost of a wrong autonomous action (a sent email, an approved transaction, an extended offer) can be unrecoverable.

  3. Could this be marketed as “X faster” instead of “AI X”? If yes, you’re on the assistive side and the GTM is straightforward. If no, if the only way to describe what the product does requires “the AI does X autonomously,” you’re selling agency, and you’d better have a buyer specifically asking for it.

Most B2B workflows fail the third test gracefully: recruiting can be marketed as “screening 400 candidates in minutes,” sales prospecting can be marketed as “researching 50 accounts in an hour,” ops automation can be marketed as “shipping integration scripts in a morning.” The agentic positioning is almost always optional. In 2026, it’s almost always wrong.

What this means for B2B GTM in 2026

If you’re building or marketing an AI-native B2B product, here’s the playbook the agency principle implies.

Lead with the outcome, not the technology. “We screen 400 candidates in 6 minutes” is a sentence a buyer can react to. “We use AI to screen candidates” is a sentence a buyer has heard a hundred times. The technology is implementation; the outcome is product. Make the headline of your homepage describe what the user gets, not what model you call.

Frame AI as the engine, not the driver. When you do mention AI on the site or in a sales motion, frame it as behind-the-scenes: the workflow’s plumbing, not its star. The buyer should feel that the human (their team) is in control. The AI is doing what tools are supposed to do: making the human faster.

Stop putting “AI” in the product name. The 2024-25 era of “Acme AI” naming is closing. Buyers in 2026 read “AI” in the name and assume the product is shallow tooling around a model that the founders didn’t actually build deep into the workflow. Naming the product after the outcome (or the workflow) is a cleaner long-term move.

Sell to the operator, not to the AI buyer. Most B2B accounts in 2026 don’t have an “AI buyer.” They have functional buyers (head of recruiting, head of sales, head of finance) who have specific problems. Sell to those problems. The AI is how you solve them; it’s not the reason they’re buying.

Be ready to defend the agency boundary on the demo call. Buyers will ask, increasingly often, “does this make decisions on its own, or does it help us decide?” Have the answer ready. “It helps you decide” is the answer that wins in 2026.

What I’d watch for

This thesis can break, and I’d watch for two specific signals.

The first: a high-profile production deployment of agentic AI that ships reliably and saves a buyer real money on a workflow they don’t have the headcount to do otherwise. Most likely candidates: 24/7 security alert triage, large-scale fraud detection, customer support tier-one resolution. If one of those produces a public ROI story buyers can defend in their boardroom, the agentic positioning gets a second chance, but only in those workflows.

The second: a generation of model releases that crosses the reliability threshold buyers actually care about, which is “wrong less than 1% of the time on the specific workflow we’re paying for.” Today’s frontier models cross that threshold for some workflows (structured extraction, summarization, simple classification) and don’t cross it for many others (multi-step planning, novel reasoning, unstructured judgment). When the threshold gets crossed for harder categories, the agency principle relaxes, but buyers will need to see it crossed in production, not in demos.

Until those signals fire, the playbook is the one above. AI as advisor, never as owner. Outcomes in the headline, technology in the footnotes. The buyer in the driver’s seat, every time.