From bespoke deployments to vertical solutions to horizontal AGI—here’s where we are and where we’re going
This article is part of our AI Thought Leader Series, which features expert voices in the field of AI.
Should you spend months customizing AI for your enterprise, or just wait for AGI (artificial general intelligence that can handle most knowledge work across domains) to arrive and handle everything?
Every CIO is asking this question. And the answer will shape your competitive position for the next decade.
The latest AI models can do almost anything—but getting them to do one specific thing in your organization requires months of custom engineering. Meanwhile, every AI lab promises AGI is just around the corner.
Here’s something interesting: OpenAI, Anthropic, and Cohere are all aggressively hiring “Forward-Deployed Engineers” (FDEs). These aren’t customer success people. They’re engineers who embed at client sites, wire data pipelines, build custom guardrails, and ensure seamless integration.
Why? If these models are so powerful and general, why do they need custom integration for every deployment?
I think we’re seeing a predictable three-stage evolution in Enterprise AI. Stage 1: Bespoke deployments with FDEs. Stage 2: Vertical solutions for entire industries. Stage 3: Horizontal AGI. The FDE hiring surge? That’s Stage 1 at scale. But Stage 2 is already here.
A caveat on “AGI”: Full AGI may never arrive. By AGI here, I mean AI that’s good enough to handle most enterprise knowledge work across domains—not necessarily human-level general intelligence. Think “good enough to close your books automatically” rather than “consciousness in a box.”

Let me show you the evidence for each stage.
Stage 1: The Forward-Deployed Reality
The AI everyone’s excited about requires humans to make it work. Not just any humans—specialized forward-deployed engineers who connect proprietary data sources, build custom evaluation frameworks, handle change management, wire tools and APIs, and set up domain-specific guardrails.
Palantir pioneered the FDE operating model two decades ago. Now OpenAI, Scale, and Anthropic list FDE roles explicitly. The numbers tell the story: McKinsey’s 2025 AI survey shows fewer than 10% of enterprise functions have scaled AI agents, and the survey highlights that most deployments still require custom integration work.
Why? Because general intelligence has to meet specific systems. The model has broad capabilities, but your enterprise runs on established platforms like Oracle Financials and ServiceNow, with 20 years of institutional knowledge and very specific compliance requirements. Someone has to wire all that together.
This creates an obvious problem. You can’t hire enough FDEs to customize every deployment. So what’s the natural evolution? Package the learnings from bespoke deployments. Build vertical solutions that work across an entire industry.
Stage 2: Vertical Solutions Are Already Winning
Forward-thinking companies aren’t waiting. They’re building domain-specific solutions now, and raising enormous rounds to scale them.
Healthcare is the clearest example. Abridge just raised a $300M Series E at a $5.3B valuation. They project 50M medical conversations in 2025. What do they do? One thing really well: ambient clinical documentation. Doctors talk, Abridge listens and writes structured notes.
Why is this worth billions? It’s not a general AI assistant. It’s HIPAA compliant from day one, integrated with EHR workflows, and trained on millions of actual doctor-patient conversations. It solves a specific, expensive problem: clinical documentation consumes 2+ hours of a doctor’s day. Hippocratic AI (patient-facing clinical agents) is at a $3.5B valuation for the same reasons.
Legal is following the same pattern. Harvey is partnering with A&O Shearman and PwC for M&A workflows. Thomson Reuters’ CoCounsel does “Deep Research” with trusted legal content. Not “chat with your contracts”—actual workflow integration.
The big validation? ServiceNow is acquiring Moveworks for $2.85 billion—a classic vertical play: front-end assistant plus system of record.
Why is Stage 2 happening now? Three things matter. Governance isn’t optional—regulated industries need tailored controls, audit trails, and safety testing. ROI requires workflow integration—“close the books” beats “chat with your data.” And domain data is the moat—proprietary industry data plus models equals defensible value.
But wait. Not everyone wants the packaged vertical solution. Large, technically sophisticated healthcare organizations, such as the Mayo Clinic, Cleveland Clinic, and Johns Hopkins, view AI as a competitive differentiator. They want clinical documentation workflows tailored to their unique institutional knowledge, proprietary research protocols, and specialized care models.
These organizations will stay in Stage 1. They’ll hire their own FDEs, build their own models, and maintain bespoke deployments. Some organizations prioritize differentiation through custom AI, while others optimize for speed-to-value with standard vertical solutions. Both choices make sense for different strategic goals.
Stage 3: Horizontal AGI Is Coming (But Not How You Think)
So what about AGI? The technology is certainly moving in that direction. Computer-use agents are advancing rapidly. OpenAI’s Computer-Using Agent operates GUIs the way humans do. Anthropic’s Computer Use navigates software the way a person would. If this hardens, one agent could use SAP, Oracle, and Epic without custom integration.
OS-level assistants are shipping. Apple Intelligence handles system-wide actions with Private Cloud Compute. Google’s Project Astra and Gemini 2.0 are live, multimodal agents. Microsoft has its suite-wide Copilot vision.
Infrastructure is standardizing, too. NVIDIA NIM microservices make deployment trivial. Reasoning models keep improving. All of this favors a portable intelligence layer with thin vertical wrappers.
However, what people often overlook is that even if AGI arrives tomorrow, it still needs to integrate with vertical systems. Why? Governance and audit live in domain systems. ROI comes from workflow integration, not general chat. Enterprises buy outcomes, not intelligence.
Here’s what is really happening. Today’s “reasoning breakthroughs” aren’t AGI. They’re better deliberation—and the path to horizontal AI still runs through vertical deployments first.
How We Get from Here to There
Stage 1 teaches us what integration actually requires. FDEs figure out the patterns—governance, evaluation, change management, data wiring.
Stage 2 packages those patterns into vertical solutions. Abridge doesn’t need FDEs at every hospital because they’ve productized the integration.
Stage 3 emerges when vertical solutions have standardized sufficiently that a horizontal layer becomes feasible. But the verticals don’t disappear—they become the interface to domain systems and compliance.
The CRM Precedent
Given the unique nature of AI, first-principles reasoning is most appropriate here. But reasoning by analogy can help. CRM underwent a similar 20-year evolution: from heavy, on-premise implementations with systems integrators (1990s–2005) to vertical solutions for specific industries (2005–2020), and then to platforms that enabled multiple verticals. But given the nature of that market, it never converged on a “general-purpose brain.”
“The future might be one brain. But the path there runs through many specialized bodies.”
Computer-use agents could compress Stages 1 and 2, allowing enterprises to skip directly to the horizontal stage. Enterprises are learning quickly—iterating on their AI strategies as the technology continues to mature.
Should you wait for AGI or deploy vertical solutions now? The answer is already showing up in the market. Even AGI believers are shipping vertical solutions—because that’s what enterprises can buy today.
The debate isn’t AGI vs. vertical AI. It’s deployment reality vs. research possibility. And reality is shipping now.
Your turn: Where is your organization in this three-stage evolution? Are you betting on bespoke differentiation, vertical speed-to-value, or waiting for horizontal AGI?
About the Author
Onil Gunawardana is an Enterprise AI product management executive who has led product and data teams at Snowflake, Google, eBay, and LiveRamp. He has created eight major software products that have generated more than $2 billion in incremental revenue. He holds degrees from Harvard Business School, Stanford University, and Yale University, and writes about the intersection of data, product leadership, and productivity on his personal blog.