Most pharmaceutical companies are treating AI like a software upgrade. Install the tools. Train the teams. Wait for productivity gains. AI-native pharma startups built something else entirely.
They rebuilt drug discovery around artificial intelligence rather than adding it to existing processes. The difference is structural, economic, and, increasingly, existential for traditional players.
Consider the numbers. Recursion produces 136 optimized drug candidates annually, at a cost per compound 10-50x lower than traditional pharma, which turns out 500-1,000 compounds a year.
Artificial intelligence (AI) is transforming the pharmaceutical industry, optimizing every stage of drug discovery and development from molecular prediction to clinical trials.

Exhibit 1: The 7 Lessons to Learn from AI-native Pharma Start-Ups
The pharma industry is watching. Some are racing to transform. Others are cautiously experimenting. Most are still figuring out what “AI-native” actually means beyond the buzzwords.
In this article, we document 7 lessons and the 23 most promising AI-native start-ups reshaping the pharmaceutical sector, impacting drug discovery, clinical trials, and market growth worldwide.
Is being AI-native a goal for the biotech industry?
It should be. But not because AI is trendy.
AI-native architectures solve the core economic problem of drug discovery: massive capital requirements meeting abysmal success rates. Traditional drug development burns $2.5 billion per approved drug. Clinical trials fail 90% of the time. R&D timelines stretch past a decade.
AI-native companies are rewriting these equations. They’re achieving 80-90% Phase I success rates compared to 40-65% for traditional approaches. Development timelines compress from 10+ years to 3-6 years. The economics shift from “spray and pray” to targeted precision: faster identification of promising candidates, shorter time to market, and a discovery and development process that is cheaper and more efficient end to end.
But here’s the catch: you can’t get there by adding AI models to existing drug discovery processes. The transformation requires rethinking data infrastructure, talent composition, decision-making frameworks, and organizational design, impacting the entire drug discovery process.
Comparing AI-native vs traditional pharma operations
The operational differences reveal themselves in three dimensions: data, decisions, and timelines.
Data architecture: Traditional pharma generates data to answer specific questions. AI-native systems generate data to train models that answer thousands of questions. That’s data-first thinking: proprietary training sets as a competitive moat.
A distributed data infrastructure supports this approach, enabling scalable data collection and efficient model training across central and edge locations.
Decision velocity: Traditional drug development moves through committee consensus. AI-native operations use AI algorithms to make go/no-go calls in days, not months. Drug developers use these tools to make faster, better-informed choices. Exscientia’s AI platform evaluates drug candidates against multiple parameters simultaneously, eliminating sequential bottlenecks that plague conventional approaches.
Integration depth: Traditional pharma runs AI as a separate function. AI-native companies weave computational and experimental work into closed loops. Design. Make. Test. Analyze. Repeat.
Relay Therapeutics integrates protein motion modeling directly into medicinal chemistry workflows, creating feedback cycles that accelerate optimization.
The difference between AI-enabled and AI-native architecture
AI-enabled means using AI tools within existing frameworks. AI-native means the framework itself is computational.
Think of it like digital transformation in media. AI-enabled is a newspaper with a website. AI-native is a platform built for digital distribution from day one. The underlying architecture is different.
In drug discovery, AI-enabled pharmaceutical companies use machine learning for target identification or compound screening. Some companies enhance specific stages of the value chain by adding an AI-based component or implementing AI-based solutions, enabling greater automation and efficiency. Valuable, but contained.
AI-native systems integrate AI technologies from target discovery through clinical trial design. The AI native architecture determines how data flows, how decisions get made, and how knowledge compounds over time.
Insilico Medicine’s Pharma.AI platform exemplifies this. AI is the operating system for drug discovery. Target identification, molecule generation, and clinical trial prediction, all connected through integrated AI models that learn from each stage.
Why bolting artificial intelligence onto legacy processes fails
Legacy pharmaceutical research evolved around human intuition and sequential experimentation. The data isn’t structured for AI consumption. Traditional labs generate results for human interpretation, not machine learning training. Retrofitting data pipelines is expensive and incomplete.
Effective AI model lifecycle management is crucial in this context to ensure that AI models remain effective and up-to-date within legacy systems. You get models trained on sparse, inconsistent data that can’t match purpose-built systems.
The organizational design resists computational thinking. When AI sits in a separate function, it becomes a service provider, not a core capability. Scientists request predictions. AI teams deliver outputs. Integration is shallow. Feedback loops are slow.
Traditional drug discovery processes optimize for regulatory compliance and risk management. In AI-driven drug development, drug safety requires robust safety protocols, transparency, and explainability to ensure compliance and manage risks effectively. AI-native approaches optimize for learning velocity and prediction accuracy. Trying to do both simultaneously creates organizational gridlock.
This is why companies launching “AI labs” often see disappointing results. The lab might be cutting-edge. But it’s plugged into a system designed for a different era.
Three archetypes of AI-native pharma today
The AI-native landscape has evolved into three distinct models, each with different risk profiles and economics.
Platform-first companies like Recursion and Insilico built end-to-end AI systems before selecting specific drug targets.
- High upfront investment.
- Long path to revenue.
- But the platform becomes an asset that generates multiple programs.
Their bet: proprietary AI infrastructure creates lasting competitive advantage. These platforms can also offer AI services to external partners, turning their infrastructure into a foundation for innovation through AI-as-a-Service (AIaaS) offerings.
Modality specialists like Generate Biomedicines focus on specific therapeutic approaches. They go deep on one AI application rather than broad across drug discovery. For example, some modality specialists may focus on advanced therapeutic modalities such as cell therapies.
- Lower initial capital requirements.
- Faster path to clinical validation.
- The risk: being category-dependent.
Hybrid adopters like Relay Therapeutics combine AI-native capabilities in specific domains (protein motion analysis) with more traditional approaches elsewhere. They’re not purely AI-native, but they’ve rebuilt core technical functions around computational methods. This model appeals to pharma companies with existing pipelines trying to transform selectively.
Each archetype works. Each faces different scaling challenges. But all three share fundamental principles that traditional pharma struggles to implement:
- data as infrastructure,
- algorithms as decision-makers, and
- platforms as products.
Across all three models, the common thread is using AI and machine learning to develop novel therapies and open up new treatment options.
23 AI pharma start-ups, and what they reveal about AI-native maturity
AI-native maturity is a spectrum, revealed by how deeply artificial intelligence penetrates a company’s operating model.

Exhibit 2: 23 Most Promising AI Pharma Start-Ups (view interactive version)
True AI native maturity emerges when companies architect their entire drug development process around machine learning and AI systems from inception. The most revealing examples come from startups founded after 2019, when foundation models and generative AI became viable for drug development.
Platform Leaders (End-to-End AI Integration):
- Xaira Therapeutics (founded 2024) emerged with $1 billion in funding - the largest initial commitment in biotech history. Co-founded by Nobel laureate David Baker, Xaira combines machine learning, data generation, and therapeutic development into a unified platform, integrating protein design models like RFdiffusion with functional genomics and proteomics capabilities.
- Isomorphic Labs (founded 2021, Alphabet spinout) raised $600M in early 2025, building on DeepMind’s Nobel Prize-winning AlphaFold technology. Their platform extends protein structure prediction to DNA, RNA, and small molecule interactions, with major partnerships at Eli Lilly and Novartis demonstrating pharma confidence in AI-first approaches.
- Genesis Therapeutics (founded 2019, now Genesis Molecular AI) has raised over $300M and recently unveiled Pearl, a foundation model that outperforms AlphaFold 3 by up to 40% on key benchmarks. With partnerships at Eli Lilly, Genentech, and Incyte, Genesis exemplifies the shift from computational biology to true molecular AI, completing design-make-test cycles weekly. Their platform leverages advanced computational techniques for early drug discovery, enabling rapid identification and optimization of promising molecules in the initial phases of drug development.
Discovery Engines (Pipeline-Focused AI Innovation):
- Insilico Medicine (founded 2014) achieved a major milestone as the first company to advance a fully AI-discovered and AI-designed drug (INS018_055 for idiopathic pulmonary fibrosis) into Phase 2 trials, reaching that stage in under 30 months. Their Pharma.AI suite integrates target discovery (PandaOmics), molecular generation (Chemistry42), and clinical trial prediction (InClinico).
- Recursion Pharmaceuticals (founded 2013) combines phenomics with machine learning, recently acquiring Exscientia for $688M to create the most comprehensive AI drug discovery platform. Their approach analyzes cellular responses at a massive scale, with partnerships spanning Bayer, Roche, and Sanofi totaling ~$20B in potential economics.
Specialized Platform Innovators:
- BigHat Biosciences (founded 2019) automated antibody design through AI-guided wet lab integration. Their platform combines machine learning predictions with high-throughput experimentation, creating closed-loop optimization that generates promising drug candidates in weeks rather than years.
- Chai Discovery (founded recently) and other emerging players represent the post-AlphaFold generation, applying transformer architectures, the same AI systems behind ChatGPT, to protein and gene editing design. Early results suggest these biological foundation models can generate novel therapeutics with improved precision, representing the next wave of AI-native maturity.
AI-native architecture is becoming the default for new drug development ventures.
The maturity gap between these AI-native pharma companies and established pharmaceutical industry players is expected to widen. These startups move from founding to clinical trials in 3-4 years. Traditional pharma takes 10+.
Lesson 1: Build proprietary data infrastructure from day one
Why Recursion generates 8 billion images before finding targets
Most pharma companies pick a disease, find a target, and then look for drugs. Recursion flips this entirely. They generate data first. Massive amounts of it.
Their AI native system captures eight billion cellular images showing how cells react to different interventions: blocking specific genes, adding compounds, mimicking disease states. Each image is training data for AI models.
This reverses the traditional approach. Instead of running experiments to answer specific questions, they create comprehensive datasets that let AI drug discovery systems discover patterns humans would never spot.
The AI models identify drug candidates by recognizing cellular signatures that indicate disease reversal. Identifying and characterizing target proteins is a crucial part of this process, as understanding the role of target proteins enables more effective target validation and structure-based drug discovery.
This only works because the entire data infrastructure was built for machine learning from the start. Traditional pharmaceutical companies generate far less data, and it wasn’t designed for AI technologies. You can’t retrofit decades of lab notebooks into effective training sets.
The economics of owning your training data
Proprietary data creates a competitive advantage that’s nearly impossible to replicate.
AI models themselves? Increasingly commoditized. Open-source frameworks are free. Computational talent is expensive but accessible. What competitors can’t copy is your unique, high-quality biological data generated at scale.
Insilico Medicine didn’t just license public datasets when building its AI drug discovery platform. They generated proprietary data across thousands of disease states. The AI models trained on that data make predictions no competitor can reproduce. These predictions enable the identification and prioritization of potential drug candidates that others cannot find. The predictions aren’t just accurate—they’re distinctive.
Companies licensing third-party AI platforms get the same predictions as everyone else. Same AI native system plus same public data equals same drug candidates. That’s why Recursion’s $150M partnership with Roche wasn’t about AI models. It was about accessing proprietary biological data and the systems trained on it.
Multimodal data integration as a competitive moat
AI-native companies don’t just generate more data. They generate different types designed to work together.
Insitro combines imaging data, gene expression profiles, protein measurements, molecular structures, and patient outcomes in integrated AI models. The AI native system learns how molecular changes connect to cellular behavior and clinical results. By representing and analyzing molecular structures alongside other data types, these models can uncover relationships critical for drug discovery and property prediction. That integration reveals insights impossible from any single data source.
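For the mechanically minded, here is a minimal sketch of what multimodal integration looks like at the data layer, assuming hypothetical tables and column names (not Insitro's actual schema): each modality is aligned on a shared sample identifier so a single model can train across all of them.

```python
import pandas as pd

# Hypothetical per-sample tables; the column names are illustrative only.
imaging = pd.DataFrame({"sample_id": [1, 2], "cell_morphology_score": [0.8, 0.3]})
expression = pd.DataFrame({"sample_id": [1, 2], "geneA_tpm": [12.5, 98.1]})
proteomics = pd.DataFrame({"sample_id": [1, 2], "proteinX_abundance": [0.9, 1.7]})
outcomes = pd.DataFrame({"sample_id": [1, 2], "responded": [1, 0]})

# Integration step: align every modality on the same sample so one model can
# learn cross-modal relationships instead of modality-specific ones.
training_table = (
    imaging
    .merge(expression, on="sample_id")
    .merge(proteomics, on="sample_id")
    .merge(outcomes, on="sample_id")
)

X = training_table.drop(columns=["sample_id", "responded"])
y = training_table["responded"]
print(X.shape, y.tolist())
```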
Traditional pharmaceutical companies store different data types in separate systems built decades apart. Connecting them requires massive engineering work that AI-native startups avoided by designing integration from the beginning.
The advantage compounds. As AI models train on more proprietary data, predictions improve. Better predictions lead to better experiments. Better experiments generate more valuable data. The cycle accelerates.
The data infrastructure isn’t overhead. It’s the product that makes drug discovery platform economics work.
Lesson 2: Platform thinking beats single-asset bets
How Insilico's Pharma.AI platform scales across indications
Insilico Medicine built an AI native system to discover hundreds of drugs.
Their Pharma.AI platform connects three modules: PandaOmics finds targets, Chemistry42 designs molecules, and InClinico predicts clinical outcomes. Each module feeds the next. More importantly, each gets smarter as it processes more programs.
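As a rough sketch of that staged architecture, consider the toy pipeline below, with hypothetical stand-in functions rather than the real PandaOmics, Chemistry42, or InClinico interfaces: each module consumes the previous module's output, so retraining one stage benefits every program that flows through afterwards.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class ProgramState:
    """Accumulates outputs as a program moves through the pipeline stages."""
    disease: str
    targets: List[str] = field(default_factory=list)
    molecules: List[str] = field(default_factory=list)
    trial_risk: Optional[float] = None

def find_targets(state: ProgramState) -> ProgramState:
    # Stand-in for a target-discovery module (omics-based ranking, for example).
    state.targets = [f"{state.disease}-target-{i}" for i in range(3)]
    return state

def design_molecules(state: ProgramState) -> ProgramState:
    # Stand-in for a generative-chemistry module working off the chosen targets.
    state.molecules = [f"{t}-mol" for t in state.targets]
    return state

def predict_trial_outcome(state: ProgramState) -> ProgramState:
    # Stand-in for a clinical-outcome prediction module; the score is a placeholder.
    state.trial_risk = 0.35
    return state

# Each stage consumes the previous stage's output; swapping or retraining one
# module upgrades every program that runs through the pipeline afterwards.
PIPELINE: List[Callable[[ProgramState], ProgramState]] = [
    find_targets,
    design_molecules,
    predict_trial_outcome,
]

def run(disease: str) -> ProgramState:
    state = ProgramState(disease)
    for stage in PIPELINE:
        state = stage(state)
    return state

print(run("fibrosis"))
```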
When Insilico identified a novel target for lung fibrosis and designed a clinical candidate in 18 months, they weren’t starting from scratch. The AI models had already learned from dozens of programs across different diseases, including autoimmune diseases. Insights from cancer programs improved predictions for fibrosis. Knowledge compounds across therapeutic areas.
Traditional drug discovery treats each program independently. Teams start fresh every time. AI-native platforms flip this. The AI native system learns from everything, making each subsequent program faster and cheaper.
This is why platform economics favor AI drug discovery at scale. Recursion can evaluate new disease hypotheses at a fraction of their initial costs because the infrastructure and trained models already exist.
Reusable models vs one-off predictions
Most pharmaceutical companies use AI for isolated questions. Will this compound bind? What's the toxicity? Valuable, but these one-off queries don't build lasting capability.
AI-native platforms build reusable models that improve with use. Generate Biomedicines' protein design platform isn't trained to predict one antibody; it learns fundamental principles of protein folding that apply across thousands of programs. Each new drug candidate generates training data that improves future predictions.
One-off predictions treat AI as a tool. Platform thinking treats AI as infrastructure that accumulates value. The AI native system becomes the asset, not individual predictions.
This explains why drug development platform companies attract different valuations than single-asset biotechs. Investors bet on systems that generate multiple successful programs with improving economics over time.
When platform economics justify the upfront investment
Building an AI platform requires massive upfront investment. Data infrastructure. Computational resources. Specialized talent. Years before the first drug candidate reaches clinical trials.
Traditional pharmaceutical industry thinking struggles with this. Management wants near-term milestones. Platform development feels like expensive infrastructure delaying revenue.
AI-native companies make a different bet. They accept extended timelines and higher initial costs because platform economics become favorable at scale.
Once operational, incremental programs cost a fraction of traditional drug discovery. Small-molecule discovery shows this most clearly: AI-driven methods rapidly identify and optimize new therapeutic compounds, cutting cost and time in early development.
The break-even point arrives when platform leverage exceeds infrastructure investment. Recursion’s first 10-15 programs cost more per program than traditional approaches. But programs 50-100 cost dramatically less while moving faster through drug development cycles.
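The break-even logic is simple arithmetic. Here is an illustrative sketch with assumed, not reported, cost figures: a platform trades a large fixed investment for a much lower marginal cost per program, and the crossover point is the fixed cost divided by the per-program savings.

```python
# Illustrative break-even sketch: all numbers are assumptions, not reported figures.
PLATFORM_BUILD_COST = 500.0      # one-time infrastructure investment ($M)
PLATFORM_PER_PROGRAM = 5.0       # marginal cost per program once the platform exists ($M)
TRADITIONAL_PER_PROGRAM = 30.0   # cost per program without a platform ($M)

def cumulative_cost(n_programs: int, per_program: float, fixed: float = 0.0) -> float:
    """Total spend after n programs under a simple linear cost model."""
    return fixed + n_programs * per_program

for n in (10, 20, 50, 100):
    platform = cumulative_cost(n, PLATFORM_PER_PROGRAM, PLATFORM_BUILD_COST)
    traditional = cumulative_cost(n, TRADITIONAL_PER_PROGRAM)
    print(f"{n:>3} programs  platform ${platform:>6.0f}M  traditional ${traditional:>6.0f}M")

# Crossover: fixed cost divided by the per-program savings.
break_even = PLATFORM_BUILD_COST / (TRADITIONAL_PER_PROGRAM - PLATFORM_PER_PROGRAM)
print(f"Break-even at roughly {break_even:.0f} programs under these assumptions")
```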
When Sanofi paid Insilico $1.2B, they weren’t licensing AI models for one program. They were accessing platform economics that would take years and hundreds of millions to replicate internally.
The pharmaceutical industry is recognizing that AI-native platforms aren’t just faster tools. They’re different business models. Companies optimizing for single-asset efficiency will struggle against competitors optimizing for platform leverage.
Lesson 3: Seamless computational-experimental integration
Relay Therapeutics and the closed-loop DMTA cycle
Relay Therapeutics recognized something fundamental: proteins aren’t frozen structures. They’re dynamic molecules that shift between shapes, revealing drug binding sites that static analysis misses.
Their platform runs tight computational-experimental loops. Design. Make. Test. Analyze. DMTA. Repeat.
Computational models predict which molecules will bind specific protein shapes. Analyzing and optimizing chemical structures is central to this process, as detailed molecular characterization enables more accurate predictions and efficient compound optimization in the DMTA cycle. Chemists synthesize them. Assays test binding. Results immediately update the AI models for the next round.
This isn’t sequential handoffs between teams. It’s continuous integration where experimental data updates AI models in real-time, and refreshed models guide the next synthesis immediately.
Traditional pharmaceutical workflows run these DMTA cycles in months. Relay’s integrated AI native architecture compresses them to weeks or days. That tempo shift changes what’s possible in lead optimization.
The key is architectural. Computational and experimental work aren’t separate departments coordinating through meetings. They’re unified workflows where data flows automatically between prediction and validation.
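As a toy sketch of what a closed DMTA loop means operationally, the snippet below uses simulated stand-ins for the design, synthesis, and assay steps rather than any real platform's components; the defining feature is that assay results update the model before the next design round begins.

```python
import random

def design(model_estimate: float, n: int = 50) -> list:
    """Propose candidate 'designs' biased by the current model (toy stand-in)."""
    return [random.gauss(model_estimate, 1.0) for _ in range(n)]

def make_and_test(designs: list) -> list:
    """Simulate synthesis plus assay: measured potency is design quality plus noise."""
    return [(d, d + random.gauss(0, 0.5)) for d in sorted(designs, reverse=True)[:10]]

def analyze(results: list, model_estimate: float) -> float:
    """Update the 'model' toward whatever the assay says actually worked."""
    best_measured = max(measured for _, measured in results)
    return 0.7 * model_estimate + 0.3 * best_measured

# Closed DMTA loop: assay results refresh the model before the next design round,
# instead of being archived for a separate, later analysis step.
model_estimate = 0.0
for cycle in range(5):
    candidates = design(model_estimate)            # Design
    results = make_and_test(candidates)            # Make + Test
    model_estimate = analyze(results, model_estimate)  # Analyze feeds the next Design
    print(f"cycle {cycle}: model estimate -> {model_estimate:.2f}")
```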
Automated lab systems that feed algorithms in real-time
BigHat Biosciences took this further: robotic labs directly coupled to machine learning.
Their platform designs antibody variants computationally, synthesizes them robotically, tests binding automatically, and feeds results back to AI models without human data transfer. The AI native system operates as a closed loop where experimental outcomes become training data within hours. This integration allows AI and robotics to rapidly identify promising drug candidates and optimal synthesis routes, accelerating the discovery process.
This requires purpose-built infrastructure. Lab automation must generate machine-readable data in formats AI models consume directly. Quality control is algorithmic. Data pipelines handle high-throughput results continuously.
Traditional pharmaceutical companies have computational and experimental groups as separate functions with separate systems. Data moves between them through manual exports and formatting. That friction breaks the closed-loop dynamics that make AI drug discovery systems learn efficiently.
Valo Health’s Opal platform eliminated this friction. Data from automated experiments flows directly into the AI models training on it. No handoff. No reformatting. The system learns from its own results continuously.
Breaking down the wet lab-dry lab divide
The traditional pharmaceutical industry separates computational and experimental scientists. Different training. Different careers. Different hierarchies.
AI-native companies blur these boundaries deliberately. Computational biologists work alongside bench scientists on shared problems. Both understand enough of each other’s work to collaborate directly.
Insitro exemplifies this. Their teams combine machine learning expertise with deep disease biology knowledge. Scientists design experiments specifically to generate better AI training data. They engineer assay systems optimized for machine learning, not just run standard protocols.
Cultural integration matters as much as technical integration. When scientists share objectives, they optimize for system performance rather than departmental efficiency. Integrated teams are especially well positioned to tackle complex diseases, where addressing challenges like Alzheimer's and cancer requires close collaboration between computational and experimental approaches. The goal is the tightest feedback loops between them.
Traditional pharma companies face deep organizational resistance here. Career structures and incentives reinforce the wet lab-dry lab divide. AI-native startups avoided this by hiring for collaboration from day one.
The lesson: seamless integration isn’t primarily a technology challenge. It’s an organizational design challenge that requires rethinking team structures and cultural norms from first principles.
Lesson 4: Hire for collaboration, not siloes
Talent composition at AI-native companies: 40-60% computational
Traditional pharma R&D: 5-10% computational scientists supporting 90% bench scientists. AI-native companies flip this: 40-60% computational talent working alongside experimental biologists.
This isn't replacing scientists with algorithms. It's different operating models requiring different skill mixes.
Recursion employs computational biologists, data engineers, machine learning specialists, and software developers in roughly equal numbers to wet lab scientists. Generating massive phenotypic datasets requires as much computational infrastructure as experimental capacity.
Insilico Medicine skews even more computational because its drug discovery starts in silico. Target identification, molecule generation, and property prediction happen computationally before synthesis.
Traditional pharmaceutical companies can't match this without wholesale reorganization. Adding computational talent to existing structures creates service functions, not integrated teams. AI-native startups designed organizations around computational-experimental fusion from day one.
Cross-training wet lab scientists in data literacy
Generate Biomedicines cross-trains computational and experimental scientists. When experimental scientists understand how machine learning models train, they design experiments to generate better training data. When computational scientists understand assay limitations, they build more realistic predictive models.
Valo Health teaches wet lab scientists Python and statistical modeling to enable direct collaboration without translation layers.
The pharmaceutical industry maintains rigid role boundaries. Collaboration happens through formal requests. Information loss occurs at every handoff.
AI-native companies build overlapping expertise. Scientists need enough literacy to collaborate directly, not full expertise in both domains.
Why software engineers belong in pharma R&D
At Insitro, software engineers are embedded in research teams, treating biological questions as engineering problems. How do we process terabytes of imaging data? How do we version control experimental protocols?
Modern drug discovery is increasingly a software problem. Managing complex biological data and building human-in-the-loop AI systems requires software engineering expertise.
Traditional pharma treats software as infrastructure supporting research. AI-native companies treat software as research itself. When engineers understand biological objectives and collaborate directly with scientists, they build better tools.
Relay Therapeutics integrates software engineers into medicinal chemistry teams. Engineers build custom tools for specific challenges, iterating based on direct feedback.
The lesson: AI-native success requires reimagining pharma R&D talent. It's computational biologists, data engineers, machine learning specialists, and software developers working as integrated teams. Companies maintaining traditional talent boundaries will struggle against organizations designed around computational-experimental collaboration.
Lesson 5: Algorithmic decision-making enables 10x velocity
How AI systems make go/no-go calls at Exscientia
Exscientia pioneered something radical: letting algorithms make critical drug development decisions that traditionally require committee approval.
Their artificial intelligence platform evaluates compound candidates against multiple parameters simultaneously, including binding affinity, synthesis feasibility, toxicity risk, IP constraints, and timeline. The system generates go/no-go recommendations in hours with defined confidence thresholds. Not suggestions for debate. Actionable decisions.
When compounds meet predefined criteria, programs advance. When they don't, teams pivot immediately. No month-long review cycles. No consensus-building.
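As a simplified illustration of how such rule-based go/no-go logic can work (the parameter names and thresholds below are assumptions, not Exscientia's actual criteria), a candidate either clears predefined thresholds or gets routed to human review as an exception.

```python
from dataclasses import dataclass

@dataclass
class CandidateScores:
    # Illustrative parameters; real platforms score many more dimensions.
    binding_affinity: float       # predicted pKd, higher is better
    synthesis_feasibility: float  # 0..1, higher is better
    toxicity_risk: float          # 0..1, lower is better
    confidence: float             # model confidence in its own predictions, 0..1

# Predefined criteria: the decision is a rule over model outputs, not a meeting.
THRESHOLDS = {
    "binding_affinity": 7.0,
    "synthesis_feasibility": 0.6,
    "toxicity_risk": 0.3,
    "confidence": 0.8,
}

def go_no_go(c: CandidateScores) -> str:
    if c.confidence < THRESHOLDS["confidence"]:
        return "ESCALATE"  # low confidence: route to human review, the exception path
    meets_criteria = (
        c.binding_affinity >= THRESHOLDS["binding_affinity"]
        and c.synthesis_feasibility >= THRESHOLDS["synthesis_feasibility"]
        and c.toxicity_risk <= THRESHOLDS["toxicity_risk"]
    )
    return "GO" if meets_criteria else "NO-GO"

print(go_no_go(CandidateScores(7.4, 0.8, 0.15, 0.9)))  # GO
print(go_no_go(CandidateScores(7.4, 0.8, 0.15, 0.5)))  # ESCALATE
```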
Traditional pharmaceutical companies can't operate this way because their frameworks assume human judgment trumps computational prediction.
When AI algorithms and senior scientists disagree, scientists prevail. That rationale holds when AI is unreliable. It becomes a bottleneck when predictive models achieve higher accuracy than human intuition.
Reducing committee cycles from months to days
The velocity difference compounds across drug development programs. Traditional pharma runs design cycles through weekly team meetings and monthly steering committees. Each decision point adds weeks, even when the analysis takes hours.
Generate Biomedicines compressed this by embedding decision logic directly into their generative AI workflows. The system designs protein variants, evaluates them against requirements, and prioritizes candidates for synthesis, all computationally. Scientists review recommendations, but the default is approval unless they identify specific concerns.
This inverts the traditional model where every advancement requires explicit approval. Scientists move from gatekeepers to exception handlers, focusing on edge cases where AI techniques haven't established reliable patterns.
The cultural shift from consensus to data-driven authority
Pharmaceutical development evolved around collective decision-making because individual scientists couldn't hold all the relevant expertise. Chemists, biologists, toxicologists, and clinicians each contributed knowledge. Consensus emerged through discussion and compromise.
Artificial intelligence changes this. The AI native system integrates expertise across domains simultaneously, evaluating all parameters in parallel using predictive models trained on thousands of programs.
But this threatens established power structures. Senior scientists built careers on judgment and experience. When algorithms make better predictions, what's their role? When data-driven authority replaces consensus, who holds influence?
AI-native companies resolved this by hiring scientists who embrace computational decision support from day one. They built the right culture from the founding. Scientists expect algorithms to guide decisions. Their role is improving system performance, not competing with it.
Traditional pharma faces genuine organizational barriers. You can't easily shift from consensus-driven to algorithm-driven decision-making without disrupting career paths, power dynamics, and cultural norms decades in the making.
Lesson 6: Capital efficiency through asset-light operations
Why AI-native companies generate 136 optimized drug candidates per year
Recursion produces 136 optimized drug candidates annually at a cost-per-compound 10-50x lower than traditional pharma's 500-1,000 annual compounds.
The efficiency comes from computational screening of millions of structures before synthesis.
Traditional drug design requires human chemists to propose structures, evaluate feasibility, and prioritize synthesis. Intellectually demanding work that doesn't scale with headcount.
AI-native companies use generative AI to explore chemical space computationally at scale. Algorithms generate thousands of candidate structures, evaluate them against multiple criteria using predictive models, and prioritize the most promising. Scientists focus on synthesis and testing, not ideation.
This changes capital intensity fundamentally. Small teams validate computationally-selected candidates from vast chemical space quickly. Cost per compound explored drops by orders of magnitude.
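A minimal sketch of that generate-score-prioritize pattern follows, using random placeholder predictions instead of real generative and property models: thousands of structures are evaluated computationally, and only a small shortlist ever reaches the bench.

```python
import random

def generate_candidates(n: int = 10_000) -> list:
    """Stand-in for a generative model proposing candidate structures."""
    return [
        {
            "id": i,
            "pred_potency": random.random(),
            "pred_toxicity": random.random(),
            "pred_synthesizability": random.random(),
        }
        for i in range(n)
    ]

def composite_score(c: dict) -> float:
    # Illustrative weighting; real platforms learn or tune these trade-offs.
    return (0.5 * c["pred_potency"]
            + 0.3 * c["pred_synthesizability"]
            - 0.2 * c["pred_toxicity"])

candidates = generate_candidates()
shortlist = sorted(candidates, key=composite_score, reverse=True)[:25]

# Only the shortlist goes to the wet lab; the remaining structures cost
# nothing but compute to evaluate.
print(len(candidates), "scored,", len(shortlist), "selected for synthesis")
```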
Outsourced manufacturing and the virtual pharma model
AI-native startups take asset-light strategy further: outsource manufacturing entirely.
Traditional pharmaceutical companies invest billions in manufacturing facilities years before knowing which drug candidates will succeed in clinical trials. Fixed costs must be recovered whether programs succeed or fail. Capital-intensive and risky.
Valo Health and similar AI-native companies own no manufacturing assets. They design new drugs computationally, validate experimentally in small labs, then contract manufacturing to specialized CDMOs. Capital goes into AI infrastructure and talent, not facilities.
This works at scale with reliable predictive models. Poor computational predictions waste money on contract manufacturing for failed candidates. But when AI techniques achieve high prediction accuracy, outsourced manufacturing becomes efficient.
Productivity metrics that traditional pharma can't match
The efficiency gap spans multiple dimensions.
Cost per compound explored: AI-native companies spend orders of magnitude less. Computational exploration is cheaper than physical synthesis and testing.
Timeline from target to clinical candidate: 18 months versus 4-6 years. Computational screening eliminates dead ends before expensive experimental work.
Clinical trial success rates: Early data suggest AI-designed molecules achieve higher Phase I success through better computational prediction of toxicity and ADMET properties.
Capital efficiency: AI-native companies advance more programs per dollar because their cost structure skews toward scalable computation rather than fixed lab infrastructure.
Traditional pharma can't match these metrics without architectural transformation. Adding AI tools to existing workflows produces incremental gains. AI-native operations produce exponential advantages through fundamentally different capital allocation and operational models.
Lesson 7: Strategic partnerships as growth accelerators
Recursion-Roche and Insilico-Sanofi deal structures
Recursion’s partnership with Roche, structured at $150M upfront, reveals how pharmaceutical companies value AI native systems. Roche didn’t license individual drug candidates. They bought access to Recursion’s entire platform: the data infrastructure, trained AI models, and discovery capabilities that generate multiple programs.
Insilico’s Sanofi deal went further: $1.2B in total value across target identification, molecule generation, and clinical development. The economics reflect platform leverage. Sanofi pays for access to AI techniques that compress drug development timelines across its entire pipeline, not just one therapeutic area. Such partnerships have enabled progress in areas like chronic kidney disease, where AI-driven drug discovery is accelerating the development of new treatments and optimizing clinical trials.
These deal structures differ fundamentally from traditional biotech partnerships. Instead of paying for a single asset with milestone payments tied to clinical trial success, big pharma is paying for computational infrastructure that produces multiple assets with improving economics over time.
When to license platforms vs co-develop assets
AI-native companies face a strategic choice: license their platform broadly or co-develop specific assets with partners.
Generate Biomedicines pursues selective partnerships where it retains equity in co-developed programs. They license platform access for partner-selected targets but maintain ownership of their generative AI technology. This preserves optionality. They can continue building internal pipelines while capturing value from partner programs.
Absci took a different approach: broad platform partnerships where partners use Absci’s AI native system for their own programs. Absci earns platform fees and milestones but doesn’t own the resulting drug candidates. This scales revenue faster but captures less upside from clinical success.
Exhibit 3: Sync of partner programs with ITONICS platform
The choice depends on the capital strategy. Platform licensing generates near-term revenue but limits value capture. Co-development maintains upside but requires more capital to advance programs through clinical trials. Co-development strategies have been particularly impactful in therapeutic areas such as solid tumors, where AI-driven approaches are accelerating clinical trial design and targeted therapy development.
How AI-native startups avoid becoming feature companies
The risk: becoming a tool vendor rather than a drug company.
Atomwise avoided this by maintaining internal pipeline development alongside platform partnerships. They prove their AI drug discovery capabilities through proprietary programs, then partner on additional indications. The internal pipeline demonstrates platform value and prevents perception as merely a software vendor.
Relay Therapeutics took the opposite approach: focus entirely on internal drug development, using its computational platform as a competitive advantage rather than a product. This strategy has enabled the development of innovative solutions for infectious diseases, leveraging AI to accelerate therapeutic discovery in this area. They’re a pharma company that happens to use advanced AI native architecture, not an AI company selling to pharma.
Applying AI-native principles to established organizations
Quick wins: where to start without full transformation
Traditional pharmaceutical companies don’t need to rebuild everything to capture AI value. Start with contained projects where AI native principles can demonstrate impact without threatening core operations.

Exhibit 4: The Time Effect of AI-Native Drug Development
Target identification offers the cleanest entry point. Companies like AstraZeneca partnered with BenevolentAI to apply AI techniques to existing biological data, identifying novel drug targets computationally before committing experimental resources. The AI native system operates in parallel to traditional research, reducing political friction while building credibility through results.
Lead optimization presents another opportunity. AI-driven molecular design can accelerate existing chemistry programs without reorganizing teams. Novartis integrated computational chemistry AI models into medicinal chemistry workflows, generating predicted compounds that chemists validate experimentally.
These approaches have accelerated the development of new options for cancer treatment, demonstrating how AI can drive innovation in healthcare. The hybrid approach respects existing expertise while introducing computational leverage.
The hybrid model: selective AI-native pockets
Full transformation isn't necessary. Strategic AI-native pockets within traditional structures can deliver a competitive advantage.
Pfizer created computational biology centers of excellence that operate with AI native maturity while the broader organization maintains traditional R&D workflows. These groups function as internal startups, hiring computational talent, building proprietary data infrastructure, and using algorithmic decision-making for their programs. They prove what's possible without forcing organization-wide change.

Exhibit 5: R&D Organization Structures
The hybrid model works when AI-native pockets have genuine autonomy. They need independent budgets, hiring authority, and decision-making frameworks. Otherwise, they get absorbed into legacy processes and lose the architectural advantages that make AI-native approaches effective.
Change management for computational-first transitions
The hardest barriers are cultural, not technical.
Successful transitions require executive sponsorship that protects AI-native initiatives from organizational antibodies. When computational predictions conflict with senior scientists' intuition, leadership must establish clear authority frameworks. Data-driven decisions need institutional backing to override consensus-based traditions.
Talent strategy matters equally. Hiring computational scientists into traditional structures without changing team composition ratios produces service functions, not integrated capabilities. Aim for 20-30% computational talent in pilot programs. This is enough to shift team dynamics and decision-making patterns.
Start with volunteers. Scientists who embrace computational approaches become advocates. Forcing AI adoption on skeptical teams guarantees resistance. Build momentum through early adopters, demonstrate results, then scale. A clear stakeholder map shows which actions to take with whom.

Exhibit 6: Stakeholder Map To Explore Need For Action (download here)
How development management software supports AI-native R&D
Tracking signals across computational and experimental workflows
AI-native drug discovery generates signals across disparate systems: computational predictions from AI models, experimental validation from lab automation, biological data from high-throughput screening, and clinical insights from patient data. Data from human cells, such as mitochondrial health markers, is also integrated and tracked within these workflows to ensure comprehensive signal management.
Managing these signals requires infrastructure that traditional R&D systems weren’t designed to handle.
Development management platforms bridge this gap by aggregating signals from computational and experimental sources into unified workflows.
When an AI native system identifies a promising drug target computationally, the signal flows into portfolio planning alongside experimentally-derived opportunities. Teams evaluate both using consistent criteria rather than treating computational insights as separate from traditional research.
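One way to picture this is a shared signal schema, sketched below with hypothetical fields and example records (not ITONICS’ actual data model): predictions, assay results, and screening hits land in the same queue and are triaged by the same criteria.

```python
from dataclasses import dataclass
from datetime import date
from typing import Literal

# Illustrative schema: one record type for signals from very different sources,
# so computational and experimental evidence can be ranked with the same criteria.
@dataclass
class Signal:
    source: Literal["ai_prediction", "lab_assay", "screen", "clinical"]
    program: str
    description: str
    confidence: float  # 0..1, however the source defines it
    observed: date

inbox = [
    Signal("ai_prediction", "fibrosis-01", "Novel target ranked top 1% by model", 0.82, date(2025, 3, 2)),
    Signal("lab_assay", "fibrosis-01", "Hit confirmed in phenotypic assay", 0.95, date(2025, 3, 9)),
    Signal("screen", "onc-07", "Weak activity in secondary screen", 0.40, date(2025, 3, 5)),
]

# Same triage rule for every source: computational insights are not a side channel.
for s in sorted(inbox, key=lambda s: s.confidence, reverse=True):
    print(f"{s.program:12} {s.source:14} conf={s.confidence:.2f}  {s.description}")
```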
Exhibit 7: Alert on interest increase inside ITONICS
ITONICS supports this integration by connecting AI techniques to strategic foresight. R&D teams track emerging biological targets, competitive AI drug discovery activities, and technology trends that might accelerate specific programs. The platform surfaces connections between computational capabilities and therapeutic opportunities that siloed systems miss.
Portfolio management for platform-based drug discovery
Platform-based models require different portfolio logic than single-asset development.
Exhibit 8: Managing development portfolios inside ITONICS
Traditional pharma manages discrete programs independently. AI-native companies optimize portfolios around platform leverage: Which programs generate the most valuable training data? Which therapeutic areas benefit most from existing AI models? Which partnerships accelerate platform maturity fastest?
Development management software enables this portfolio view by tracking relationships between programs and underlying platform capabilities. When one program generates breakthrough biological data, the system identifies other programs that could benefit from that data. When AI native system performance improves in one indication, portfolio managers see implications across the pipeline.
Connecting AI predictions to strategic decision-making
The gap between AI prediction and strategic action kills most AI adoption efforts.
Computational teams generate predictions. Strategy teams make decisions. Without a connection between these functions, predictions sit unused in technical reports. Development management platforms close this loop by embedding AI-generated insights into decision workflows that executives actually use.

Exhibit 9: ITONICS Prism highlighting projects with missing strategic fit
When predictive models identify novel drug targets, those insights appear in strategic planning cycles alongside market analysis and competitive intelligence. Decision-makers evaluate computational predictions using the same frameworks they apply to traditional opportunities. The AI techniques become inputs to strategy, not separate technical exercises.
ITONICS enables this by treating AI-generated signals as first-class strategic inputs. Computational predictions flow into the same innovation pipeline as scientific insights, partnership opportunities, and market trends. The platform connects technical capability to business opportunity, ensuring AI predictions drive decisions rather than gathering dust in technical silos.
FAQs: AI-native pharma start-ups
What exactly does "AI-native" mean in drug discovery?
AI-native means the entire drug development process is architected around artificial intelligence from inception, not retrofitted with AI tools.
These companies generate proprietary biological data specifically designed to train AI models, use algorithmic decision-making as the default, and integrate computational and experimental workflows into closed loops.
The AI native system isn't a tool that assists research; it's the foundation that determines how research gets done.
Which companies are considered truly AI-native?
Recursion Pharmaceuticals, Insilico Medicine, and Generate Biomedicines represent the most mature examples.
Recursion built phenomics-scale data generation feeding computer vision AI models. Insilico created end-to-end platforms spanning target discovery through clinical trial design.
Generate Biomedicines uses generative AI to design novel proteins computationally. Other notable examples include Relay Therapeutics (protein dynamics), Absci (antibody generation), Valo Health (integrated platforms), and newer entrants like Profluent and BigHat Biosciences.
How much faster are AI-native companies than traditional pharma?
AI-native companies compress drug discovery timelines from 10+ years to 3-6 years. Insilico identified a novel target and designed a clinical candidate in 18 months - a process that traditionally takes 4-6 years.
Recursion generates 136 optimized compounds annually versus 2,500-5,000 over five years for traditional pharma. In clinical trials, AI-designed molecules show 80-90% Phase I success rates compared to 40-65% for traditionally discovered drugs, reducing costly late-stage failures.
Can traditional pharma companies become AI-native?
Partially, but a full transformation is structurally difficult. Established pharmaceutical companies can create AI-native pockets, autonomous groups with computational-first workflows, algorithmic decision-making, and purpose-built data infrastructure. But their legacy systems, organizational hierarchies, and cultural norms resist wholesale transformation.
The data wasn't generated for machine learning. Career paths reward traditional expertise. Decision frameworks assume human consensus. Most traditional pharma will pursue hybrid models: AI-native capabilities for new programs while legacy operations continue for existing pipelines.
What metrics do AI-native pharma companies track?
Beyond traditional drug development metrics, AI-native companies measure:
- prediction accuracy of AI models across different biological tasks
- data generation throughput and quality
- computational-experimental cycle time (DMTA velocity)
- cost per compound evaluated (computationally versus experimentally)
- platform reusability (how many programs benefit from each platform improvement), and
- clinical trial success rates for AI-designed versus traditionally designed molecules.
They optimize for learning velocity, how quickly the AI native system improves predictions, not just individual program milestones.