When we started Orbit AI in 2019, the conventional wisdom in the venture community was that the real money in AI would flow to the application layer -- the companies building AI-powered products in specific verticals like healthcare, legal, and financial services. Infrastructure was considered too commoditized, too competitive, and too capital-intensive for seed-stage investors to generate meaningful returns.
We disagreed. And six years of data from our portfolio, combined with the broader trajectory of the AI industry, has reinforced our conviction that the infrastructure layer creates more durable value, more defensible competitive positions, and ultimately more substantial returns than the application layer in the early stages of a technology transition. This essay explains why, and describes how we think about this thesis as we continue to build our portfolio.
The Historical Pattern Is Clear
Every major technology transition of the past four decades has followed a predictable pattern: the infrastructure that enables the new paradigm accumulates the most durable value over time, even when application layer companies appear to capture more attention and surface-level enthusiasm in the early years of the transition.
Consider the internet era. The companies that built the routers, domain name infrastructure, and fundamental plumbing of the internet created extraordinary value. So did the companies that built the cloud infrastructure layer -- Amazon Web Services, Microsoft Azure, Google Cloud -- that made it possible for millions of application developers to build products without managing their own data centers. The application layer produced brilliant companies, but it also produced extraordinary churn: thousands of companies that appeared valuable during the dot-com boom proved to have shallow competitive advantages once the market normalized.
The mobile era followed the same pattern. The companies that built developer tooling, app distribution infrastructure, payment processing, push notification systems, and mobile analytics accumulated value that proved more durable than most of the consumer application companies that dominated headlines during the iPhone's early years.
We believe the AI era will follow the same trajectory -- and that we are currently in the early infrastructure-building phase, where the durable winners of the next decade are being established.
What Makes AI Infrastructure Defensible
AI infrastructure companies build defensibility through mechanisms that application companies struggle to replicate. Let us be specific about what those mechanisms are and how we evaluate them during diligence.
Data Network Effects. AI infrastructure companies that sit in the data flow of many customers accumulate training signal that improves their core product over time in ways competitors cannot observe. A vector database company that serves thousands of production AI applications sees query patterns, performance bottlenecks, and usage behaviors that no single company building a proprietary solution could ever collect, and those observations inform its architectural decisions. This data advantage compounds over time and creates a qualitative gap that pure technical replication cannot close.
Engineering Switching Costs. Infrastructure products that become embedded in production workflows create switching costs that are qualitatively different from the switching costs in application software. When a company's entire AI evaluation pipeline is built on a specific infrastructure product, migrating to a competitor requires not just a software replacement but a rearchitecting of workflows, retraining of engineers, and revalidation of every system that depends on the infrastructure. These migration projects are expensive and risky enough that customers tend to stay with infrastructure providers that work adequately, even when alternatives emerge.
Ecosystem Gravity. The most successful infrastructure companies build ecosystems of third-party integrations, partner relationships, and community resources that create network effects at the ecosystem level, not just at the data level. When an infrastructure product has thousands of open-source connectors, extensive community documentation, and partnerships with major cloud providers, it becomes the default choice for new developers -- not because the technology is necessarily superior but because the cost of choosing a different path is higher.
The Current AI Infrastructure Landscape
We look at four major layers of AI infrastructure when evaluating investment opportunities:
The compute and storage layer is largely dominated by incumbent cloud providers and established hardware companies. This is not a fruitful area for seed-stage investing -- the capital requirements and competitive dynamics favor large incumbents. We pass on almost every opportunity we see in pure compute or raw storage infrastructure.
The model and training layer presents more interesting opportunities, particularly in fine-tuning, model evaluation, and training efficiency. The major foundation model providers have not yet solved the enterprise problem of making their models production-reliable and cost-effective to customize for specific domains. Companies that build tooling to bridge that gap are addressing a genuine and growing pain point.
The orchestration and deployment layer is where we see some of the most compelling seed-stage opportunities. As the number of models, APIs, and AI tools in enterprise stacks multiplies, the complexity of orchestrating them into reliable production workflows creates significant demand for purpose-built infrastructure. Companies building AI workflow orchestration, API gateway products, and production monitoring tools are addressing problems that every enterprise AI team encounters within months of deploying their first production system.
The data and evaluation layer is perhaps the most underinvested area in the current landscape. AI applications are only as good as the data they are trained on and the evaluation frameworks used to validate their behavior. Companies building data quality infrastructure, synthetic data generation platforms, and AI evaluation suites are addressing foundational bottlenecks that will only grow more acute as enterprise AI deployment scales.
What We Look For in Infrastructure Investments
Our diligence process for AI infrastructure investments focuses on five key questions. These questions are different from what we ask for application layer companies, and understanding the distinction helps explain why operator background is particularly valuable for evaluating infrastructure businesses.
Who is the person at the customer who feels this pain? The best infrastructure businesses are built by founders who have a specific, named person at the target customer in mind -- not a generalized "AI engineer" persona but a concrete individual with a specific job title, a specific set of daily frustrations, and a specific budget to spend on solving them. When founders can describe the persona with that specificity, it is a signal that they have done the customer development work to understand the market deeply.
What does the data flywheel look like? We want to understand specifically how the product improves as it sees more usage. Not in a hand-wavy "network effects" sense, but concretely: what data does the product collect, how does that data inform product decisions or model training, and what is the specific mechanism by which more customers make the product measurably better for all customers?
What is the migration story for the second customer? Infrastructure companies often land their first customer through founder relationships and sheer force of effort. The second customer is the test of whether there is a real market. We want to understand specifically what the sales motion looks like beyond the founder's personal network and how the company will build a repeatable path to enterprise adoption.
Where does this fit in the enterprise AI stack two years from now? The AI tooling landscape is evolving rapidly, and infrastructure products that seem essential today may be commoditized or absorbed into larger platforms in 24 months. We think carefully about where a product sits in relation to the major cloud providers' roadmaps and whether the founders have a credible plan for maintaining relevance as the platform landscape evolves.
What is the pricing model logic? Infrastructure businesses that price on usage create virtuous revenue dynamics -- as customers' AI workloads grow, revenue grows automatically without additional sales effort. We are skeptical of infrastructure businesses with pure seat-based pricing because it decouples revenue growth from the most important leading indicator of customer success, which is actual production usage.
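The revenue dynamic described above can be made concrete with a toy model. All the numbers here are illustrative, not drawn from any portfolio company:

```python
# Toy comparison of usage-based vs. seat-based pricing as a customer's
# AI workload grows. All prices and volumes are hypothetical.

def usage_revenue(monthly_tokens: float, price_per_million: float) -> float:
    """Revenue scales directly with production usage."""
    return monthly_tokens / 1_000_000 * price_per_million

def seat_revenue(seats: int, price_per_seat: float) -> float:
    """Revenue moves only when the customer buys more seats."""
    return seats * price_per_seat

# A customer whose workload grows 10x over two years while headcount
# stays flat: usage-based revenue grows 10x with no new sales effort,
# while seat-based revenue does not move at all.
year0_usage = usage_revenue(50_000_000, price_per_million=2.0)   # 100.0
year2_usage = usage_revenue(500_000_000, price_per_million=2.0)  # 1000.0
year0_seats = seat_revenue(10, price_per_seat=50.0)              # 500.0
year2_seats = seat_revenue(10, price_per_seat=50.0)              # 500.0

assert year2_usage / year0_usage == 10.0
assert year2_seats == year0_seats
```

The asymmetry is the point: under usage pricing, the vendor's growth is coupled to the leading indicator of customer success rather than to the customer's procurement cycle.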
Three Archetypes We Are Excited About
Without naming specific portfolio companies, we want to describe three archetypes of AI infrastructure investments that we find compelling at the current moment. Each represents a different part of the infrastructure stack, but all three share the characteristics we described above: clear data flywheels, genuine switching costs, and pricing models that align with customer value creation.
The first archetype is the production monitoring platform for AI systems -- a product that watches production AI workloads in real time, detects anomalous behavior, measures drift from evaluation benchmarks, and alerts engineering teams before small problems become catastrophic failures. The market for this product is every company that has deployed AI in production, which is increasingly every major enterprise. The data flywheel comes from observing behavioral patterns across many different production systems, which trains anomaly detection models that become more accurate over time.
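To make the drift-detection mechanism concrete, here is a minimal sketch of the kind of statistic such a platform might compute, comparing a live window of a production metric against an evaluation baseline via a population stability index. The bucket count and alert threshold are conventional defaults, not any specific product's design:

```python
# Sketch of a drift check: population stability index (PSI) between a
# baseline sample and a live window of the same metric (e.g. model
# confidence scores). Higher values mean larger drift; ~0.25 is a
# commonly used alert threshold. Illustrative, not a product's API.
import math
from collections import Counter

def psi(baseline: list[float], live: list[float], buckets: int = 10) -> float:
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / buckets or 1.0

    def bucket_fractions(xs: list[float]) -> list[float]:
        counts = Counter(min(int((x - lo) / width), buckets - 1) for x in xs)
        n = len(xs)
        # Smooth empty buckets so the log term stays finite.
        return [max(counts.get(b, 0) / n, 1e-4) for b in range(buckets)]

    base, curr = bucket_fractions(baseline), bucket_fractions(live)
    return sum((c - b) * math.log(c / b) for b, c in zip(base, curr))

# Identical distributions score near zero; a shifted one trips the alert.
baseline = [i / 100 for i in range(100)]        # uniform on [0, 1)
stable   = [i / 100 for i in range(100)]
shifted  = [0.5 + i / 200 for i in range(100)]  # mass pushed to [0.5, 1)

assert psi(baseline, stable) < 0.01
assert psi(baseline, shifted) > 0.25
```

A real platform would run checks like this continuously across many metrics and many customers' workloads, which is exactly where the cross-customer data flywheel described above comes from.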
The second archetype is the domain-specific fine-tuning platform -- a system that allows enterprises to customize foundation models on proprietary data without requiring the deep ML expertise that building this infrastructure from scratch would demand. As enterprises accumulate proprietary data assets and seek to leverage those assets for competitive advantage, the demand for accessible fine-tuning infrastructure will grow substantially. The switching cost comes from the fine-tuning workflows themselves, which become deeply embedded in data pipelines and product release processes.
The third archetype is the AI evaluation and testing framework -- a product that makes it possible for engineering teams to define custom metrics, build adversarial test suites, and validate AI behavior against business requirements before deploying to production. The evaluation problem is underappreciated as a standalone market, but every enterprise that has had an AI deployment fail in production understands viscerally why robust evaluation infrastructure matters. The pricing model naturally scales with the scope of testing, which scales with the number of AI systems a customer is managing.
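One way to picture the core of such a framework is as a pre-deployment gate: custom metrics are scored against a suite of cases, and a release is blocked when any metric misses its threshold. This is a sketch under assumed names, not any specific product's API:

```python
# Minimal sketch of a pre-deployment evaluation gate. Custom metrics are
# plain functions scored against a suite of cases; deployment is blocked
# when any metric falls below its threshold. All names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    expected: str

@dataclass
class Metric:
    name: str
    score: Callable[[str, str], float]  # (output, expected) -> 0.0..1.0
    threshold: float                    # minimum average score to pass

def run_suite(model: Callable[[str], str],
              cases: list[EvalCase],
              metrics: list[Metric]) -> dict[str, float]:
    """Run every case through the model and average each metric."""
    outputs = [(model(c.prompt), c.expected) for c in cases]
    return {m.name: sum(m.score(o, e) for o, e in outputs) / len(outputs)
            for m in metrics}

def gate(results: dict[str, float], metrics: list[Metric]) -> bool:
    """Deployment gate: every metric must meet its threshold."""
    thresholds = {m.name: m.threshold for m in metrics}
    return all(score >= thresholds[name] for name, score in results.items())

# Toy model and metric for demonstration.
def toy_model(prompt: str) -> str:
    return "refund policy: 30 days" if "refund" in prompt else "unknown"

exact = Metric("exact_match", lambda o, e: float(o == e), threshold=0.5)
cases = [EvalCase("what is the refund policy?", "refund policy: 30 days"),
         EvalCase("what is the warranty?", "90 days")]

results = run_suite(toy_model, cases, metrics=[exact])
assert results["exact_match"] == 0.5
assert gate(results, [exact]) is True
```

The pricing observation in the essay falls out naturally: the number of cases and metrics grows with the number of AI systems under test, so usage and value scale together.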
The Investment Case, Summarized
We believe the AI infrastructure market will follow the same trajectory as cloud infrastructure and mobile development tooling: a long period of underappreciation by generalist investors, followed by a recognition that the infrastructure companies are generating more durable revenue, better unit economics, and more defensible competitive positions than many of the application layer companies that received more attention early on.
We are investing in this thesis actively. If you are building in AI infrastructure -- evaluation, orchestration, fine-tuning, data quality, or production monitoring -- we want to hear from you. Bring us a concrete problem that enterprise AI teams face, a credible theory about why your approach is better than what they would build themselves, and a founding team that has the technical depth to build something genuinely difficult. We will respond within five business days.
Marcus Chen is the Managing Partner of Orbit AI. He previously served as CEO of DataStream, which was acquired in 2018. He writes about AI infrastructure investing, enterprise AI adoption, and the operational challenges of scaling technology companies. This article represents his personal views and should not be construed as investment advice.
