To most people, the term “techbio” simply encompasses using cutting-edge algorithms and data toward biology. But it should be a call to action to adopt successful paradigms from the tech industry into biotech—strategies like “move fast and break things,” modern organizational structures, and most importantly here, the proliferation of APIs that let others programmatically access products and services.
Nearly every business, software product, and app you use every day runs on APIs – Application Programming Interfaces, the programmatic contracts that let different servers and systems communicate with each other. Need to check a flight’s status? There’s an API for that – Kayak.com uses them. Want to start a brick-and-mortar shop? You can easily get a POS and inventory management system that uses APIs to process payments. But in biotech, these API bridges are notably missing.
Techbio startups tout automation and AI-driven labs, yet the lack of accessible, commercial REST APIs is holding the field back. Without standard APIs to connect lab services, the grand vision of automated, lab-in-the-loop workflows and rapid product development remains out of reach.
Everything Else Has APIs
In most digital sectors, APIs are the norm. A REST (Representational State Transfer) API is a standard web interface that lets different software systems exchange requests and data – think of it as a universal adapter – and by some estimates API calls account for roughly 80% of traffic on the web. Virtually every online service today exposes API endpoints (URLs) that developers can call, usually sending JSON data and receiving JSON or XML in return. This simple concept transformed industries:
- Travel: Platforms like Expedia, Booking.com, and airlines built extensive APIs for searching and booking trips. Instead of manually checking each airline, travel aggregators use APIs to pull real-time flight and hotel info. For instance, Expedia’s “Rapid API” provides programmatic access to 250,000 locations and 700,000 lodging options, supplying all the travel-related data needed to complete a booking, from region mappings to rates.
- Finance: Fintech runs on APIs. Payments, banking data, trading – all accessible via code. Stripe famously launched in 2011 with a clean API that let developers accept credit cards with a few lines of code, sparking e-commerce booms. Aggregators like Plaid provide a single API that connects to over 12,000 financial institutions so apps like Mint.com can securely link your bank accounts. The result? A wave of new fintech products (budgeting apps, robo-advisors, neobanks) built by small teams plugging into big-bank capabilities via REST calls.
- E-commerce & Cloud: Need to add maps or messaging to your app? Use the Google Maps API or Twilio’s SMS API. Need server infrastructure? Don’t rack servers – call AWS’s API to spin up cloud instances. In fact, Jeff Bezos famously issued an internal mandate at Amazon in the early 2000s stating that “all teams will henceforth expose their data and functionality” through service interfaces – in effect, that everything must be built as an API. This API-first culture birthed Amazon Web Services (AWS) and the cloud computing revolution. Today APIs drive trillions in market value, essentially becoming the operating system of modern cloud services. Many of the world’s most valuable companies – Amazon, Google, Microsoft, OpenAI – treat APIs as a core channel for accessing their products.
Across these sectors, APIs unlocked automation and integration. They enable one service to automatically trigger another – a payment API confirms a purchase, which calls a shipping API to send your product, which calls a notification API to text you the tracking link. Crucially, they empower independent developers: a small startup can build on top of industry giants by simply consuming their public APIs. This composability has been an engine of innovation in these other sectors.
Yet, when we turn to biotech and life sciences, this API-rich landscape suddenly evaporates. In biotech, the idea of stitching together experiments and analyses from multiple providers with a few API calls is mostly fantasy. Why is an industry rooted in advanced science still stuck in a pre-API era?
Biotech’s Closed Loop
Biotechnology today feels like a closed loop or walled garden. The core tasks – designing a molecule, getting it synthesized, testing it in a lab assay, analyzing the data – are often done within isolated systems, usually by large companies or all-in-one platforms that do not expose simple APIs for outsiders.
The number of biotech companies offering public, self-serve REST APIs can be counted on two hands. If you’re a developer wanting to programmatically order a DNA sequence synthesis, you can use Twist Bioscience’s APIs for ordering or codon harmonization, but other than that your options are slim to none. If you want to programmatically request a custom assay, you have no options that I’m aware of. Contrast this with ordering a book via Amazon’s API or querying flight data, and it’s obvious that biotech services just aren’t readily accessible in the same way.
Even large synthetic biology companies haven’t widely opened their capabilities as on-demand APIs. For instance, Ginkgo Bioworks – often described as an “organism engineering platform” – recently began offering APIs for its AI models, e.g. a protein foundation model API that can generate sequences or compute embeddings using SOTA models, but you can’t directly tap into Ginkgo’s foundry with an API call. The norm in biotech is bespoke, one-off integrations or manual handoffs, not open web services.
What’s Missing
Because of this, whole categories of functionality that could be API-driven remain unavailable. Imagine if we had:
- Synthesis-to-Assay APIs: a unified service where you send a molecule or DNA sequence and the provider handles making it and running a specified assay, returning data under the same experiment ID later. Today, you must separately order synthesis (often via a web form or email), wait for the product, then send it into an assay (often in-house or via a CRO, again through emails or portals). No commercial “experiment execution API” ties these steps together.
- Model Fine-Tuning APIs: In the ML world, services let you fine-tune models via API (e.g. OpenAI fine-tuning, which is used for many AI-driven apps). In biotech, if you have experimental data and want to improve a prediction model, there’s no easy API service to hand it off to. You likely have to hire data scientists or use research-grade tools – a barrier for rapid iteration.
- Lab Orchestration APIs: A few companies are building “cloud labs” (remote-controlled automated labs) which do have APIs or scripting interfaces, but they are the exception and often require complex setup. For the broader biotech lab universe – the instruments, LIMS, and robots in labs around the world – there’s a lack of standardized APIs to schedule runs, control devices, or aggregate results. Integrating a new lab instrument with a software tool often means custom coding against a vendor’s SDK or even manually exporting CSV files. This is equivalent to the pre-API days in finance when one had to physically go to a bank or clearinghouse to get data.
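To make the first gap above concrete, here is a hedged sketch of what a minimal Python client for a unified synthesis-to-assay service could look like. Everything here – the endpoints, field names, and the SynthesisToAssayClient class – is invented for illustration; no such commercial service exists today.

```python
# Hypothetical sketch: a client for an imagined synthesis-to-assay API.
# Endpoints and payload fields are assumptions, not a real vendor's spec.
import json
import uuid


class SynthesisToAssayClient:
    """Builds requests for an imagined synthesize -> assay -> results flow."""

    def __init__(self, base_url):
        self.base_url = base_url

    def synthesis_order(self, dna_sequence):
        # POST /v1/synthesis would return an order ID to poll,
        # or to receive webhook callbacks about
        return {
            "url": f"{self.base_url}/v1/synthesis",
            "body": json.dumps({"sequence": dna_sequence}),
        }

    def assay_request(self, order_id, assay_type):
        # POST /v1/assays ties the synthesized product to a measurement;
        # a single experiment ID later indexes the returned data
        return {
            "url": f"{self.base_url}/v1/assays",
            "body": json.dumps({
                "order_id": order_id,
                "assay": assay_type,
                "experiment_id": str(uuid.uuid4()),
            }),
        }


client = SynthesisToAssayClient("https://lab.example.com")
order = client.synthesis_order("ATGAACGATGAGCTG")
assay = client.assay_request("ord_123", "binding_affinity")
print(order["url"])  # https://lab.example.com/v1/synthesis
```

The point is not the specific shape of the payloads but that a developer could drive the whole synthesize-then-measure loop from a script, with one experiment ID tying the steps together.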
The current state is that techbio remains unintegrated, and has yet to adopt some key principles from tech. Independent product developers are hampered because there’s no plug-and-play way to incorporate third-party biotech functions. If a bioinformatics startup or an academic lab wants to automate wet-lab validation of their in silico designs, they can’t write a script to do it – they must either build their own lab or engage in lengthy partnerships. This severely limits the pace of innovation and the emergence of a true ecosystem of interoperable bio tools.
The Exceptions
In addition to BioLM, some companies are waking up to this gap and hinting at what solutions might look like:
- Benchling – Data Platform API: Benchling, a popular life-science data management platform, is one of the few that embraced an open developer platform. They provide REST APIs, webhooks, and an SDK so that labs can programmatically access and edit their data, integrate Benchling with instruments, and keep systems in sync. This means if you use Benchling LIMS, you at least have an API to pull or push experimental data, automating your record-keeping or analysis workflows. It’s a step toward integration – but notably, Benchling’s API is about data management after the experiment, so this is only a partial solution.
- Twist Bioscience – DNA Ordering API: Twist, a DNA synthesis provider, recognized the need for programmatic ordering. The Twist API (TAPI) lets customers integrate DNA ordering into their own LIMS or procurement systems. The Twist API enables you to do “anything possible on our eCommerce platform” programmatically – essentially, automated DNA synthesis orders. This is a great example of a biotech company offering an API akin to an e-commerce checkout: order DNA with a POST request, not an email or manual purchase order. It streamlines R&D workflows and allows ordering hundreds of constructs without human error from copy-pasting sequences.
- Aclid Bio – Compliance as API: Aclid is tackling the challenge of biosecurity screening for DNA orders in an API-like way. Gene synthesis companies traditionally had to manually check orders against regulations – a slow, error-prone process. Aclid built a platform that automates license checks and screens DNA sequences in real time, plugging into the ordering process programmatically. While Aclid’s offering is more a backend service than a public API, it embodies the ideal: automated, instantaneous checks via software instead of human workflows. It hints that every layer, even compliance, can be accelerated with a well-defined interface.
- Adaptyv Bio – Experiments via API: Perhaps the clearest sign of what’s possible is Adaptyv Bio’s newly announced – but not yet generally available – API. Adaptyv operates an automated wetlab for various protein assays, and in late 2024 they opened up a beta API for external developers. “Our API allows you to programmatically create experimental campaigns, get experiment updates and query lab results,” the company wrote. In other words, a researcher or an AI algorithm can order an experiment via code – you POST your protein design to Adaptyv’s API, and their robots do the rest, serving the data weeks later. Adaptyv even frames it as “Experiments-as-code” with features like webhooks for status updates and result integration. This is lab-in-the-loop automation incarnate: their AI customers are already using it to run closed-loop optimization, where an AI designs proteins, tests them via Adaptyv’s lab, learns from results, and repeats – all autonomously. Adaptyv’s approach is a major leap, but as a startup it currently covers only specific experiment types. The broader point is that this validates the model: expose lab operations via clean APIs, and you enable end-to-end automation that was previously theoretical.
Still, these examples are the outliers. Biotech as a whole remains a patchwork of closed systems. And the closed-loop mentality – where companies try to own the entire process from idea to result – may be financially rewarding, but it leaves few opportunities for outsiders to innovate on top. It’s as if every airline in the travel industry insisted customers book only through its own website – we’d never have Kayak or Google Flights. Similarly, without open APIs, we’ll never have the “Kayak of biotech” or the “AWS of wetlabs.” Techbio will continue to fall short of its promise unless we break these silos.
The Stack We Actually Need
To truly unlock software-driven biology, biotech needs an API-first ecosystem – a stack of interoperable services that developers, startups, or even automated AI agents can mix-and-match to build new workflows. What might this look like? Here’s a vision for a future techbio stack built on APIs:
- Design APIs (In Silico): High-level services for computational biology tasks. This includes AI model APIs for sequence design, fine-tuning on experimental data, structure prediction, annotation, and more. In fact, this layer is already emerging. For example, BioLM’s own API provides real-time REST endpoints for DNA and protein language models – you can send a sequence and get back predictions, embeddings, structures, and annotations without setting up any GPU servers. BioLM offers multiple model endpoints that handle tasks from folding to toxin prediction. A typical request might send JSON containing a protein sequence to an endpoint that predicts its function:

```
POST /api/v3/proteinfer-go/predict/ HTTP/1.1
Host: biolm.ai
Content-Type: application/json

{"items": [{"seq": "MNDEL"}]}
```

The API would return a structured JSON response with the model’s predictions, e.g. GO terms and their likelihoods. The key is that predictive information is accessible and queryable through a uniform API interface, much like with Google or OpenAI APIs, letting other developers easily build their own pipelines and services on top.
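From a developer’s perspective, the same request is a few lines of Python. The URL path mirrors the example above, but the auth header and response handling here are assumptions, so treat this as an illustrative sketch rather than official client code.

```python
# Illustrative sketch of calling a BioLM-style prediction endpoint.
# The auth scheme and response shape are assumptions for illustration.
import json
import urllib.request

API_URL = "https://biolm.ai/api/v3/proteinfer-go/predict/"


def build_payload(seq):
    """Serialize a single-sequence request body like the example above."""
    return json.dumps({"items": [{"seq": seq}]})


def predict_go_terms(seq, token):
    # Performs the actual POST; requires network access and a valid token.
    req = urllib.request.Request(
        API_URL,
        data=build_payload(seq).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Token {token}",  # assumed auth scheme
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Building the payload alone needs no network:
print(build_payload("MNDEL"))  # {"items": [{"seq": "MNDEL"}]}
```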
- Make & Test APIs (Wet Lab): This is the critical missing layer. It comprises services for physical operations: “Cloud labs” accessible via REST. Imagine APIs for ordering DNA or peptides (some exist, like Twist’s), ordering custom reagents or kits, scheduling a cell culture or a high-throughput screen, even running a CRISPR edit in a cell line. Each would have standardized endpoints. For example, a “Synthesize DNA” API might accept a sequence and return an order ID, and later a shipment tracking ID. A “Run Assay” API might take an experiment config (which sample, what measurement), return an Experiment ID, which you could later use to query another API endpoint for the data. Adaptyv Bio’s API is a prototype of this layer for protein experiments. If more labs and service providers offered similar APIs (perhaps specialized by technique – one for DNA assembly, one for sequencing, one for protein purification, etc.), a developer could write a script that creates a complete experimental workflow across multiple vendors. Crucially, these would be commercial APIs with known turnaround times and data formats, so they can be relied on in production apps. This layer is how we’d achieve true lab automation beyond the confines of a single company’s platform.
- Data & Integration APIs: Surrounding the design/make/test core, we need supporting services – and some already exist. Laboratory Information Management Systems (LIMS) like Benchling provide data APIs to record and retrieve results. There could be standardized lab data schemas so that results from a “Test API” come back in a format easily pushed into a LIMS or notebook. Other integration points include compliance and logistics. For instance, a Biosecurity API (like Aclid’s service) could screen any DNA sequence on-the-fly, and a Shipping API could track biomaterial shipments (FedEx has APIs for packages, which could tie in sample transport). We’d also want webhooks and event-driven integration – e.g. the assay service pings your server via webhook when results are ready, triggering an analysis pipeline automatically. This whole stack would mirror the coherence seen in other industries: like how an e-commerce order triggers inventory and shipping APIs behind the scenes.
- Orchestration & Standards: Finally, an API-first ecosystem benefits from standards. Just as web APIs coalesce around HTTP/JSON and often OAuth for auth, biotech APIs should converge on standard data formats for describing molecules, instruments, and experiments. An orchestration layer – perhaps open-source libraries or middleware – could help users chain services. For example, a high-level SDK might let a user call a single function, optimizeProtein(), which behind the scenes calls design APIs (to generate variants), calls a make API (to order top candidates for testing), awaits results, then calls data APIs and perhaps iterates with a fine-tuned model. This isn’t far-fetched – it’s basically treating lab experiments as functions in code. But it will require cooperation, and infrastructure companies that prioritize open APIs rather than offering only closed end-to-end platforms.
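As a toy illustration of that orchestration idea, the sketch below implements an optimize_protein() loop in Python (snake_case rather than the optimizeProtein() name used above). The service functions are deliberately trivial stubs; in reality each would be a network call to the kinds of design, make, and test APIs described earlier.

```python
# Toy sketch of "lab experiments as functions in code".
# design_variants and order_and_test are stand-ins for real API calls.


def design_variants(seed_seq, n):
    # Stand-in for a design API call (e.g. a language-model endpoint
    # proposing n sequence variants of the seed).
    return [seed_seq + chr(ord("A") + i) for i in range(n)]


def order_and_test(variants):
    # Stand-in for make/test API calls; a real version would order
    # synthesis, await assay results, and return measured scores.
    return {v: float(len(v)) for v in variants}


def optimize_protein(seed_seq, rounds=3, n_variants=4):
    """Chain design -> make/test -> select, for several rounds."""
    best_seq, best_score = seed_seq, float("-inf")
    for _ in range(rounds):
        variants = design_variants(best_seq, n_variants)
        scores = order_and_test(variants)  # awaits wet-lab results
        top_seq = max(scores, key=scores.get)
        if scores[top_seq] > best_score:
            best_seq, best_score = top_seq, scores[top_seq]
        # A real loop would also fine-tune the design model on the new data.
    return best_seq, best_score


seq, score = optimize_protein("MND")
print(seq, score)
```

The structure is the point: once each step is an API call with a known contract, the whole optimization becomes an ordinary function a developer, or an AI agent, can invoke.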
The paradigm of treating a concept “as code” is a powerful one, and has produced incredibly valuable companies built on orchestrating it. One of the earliest implementations of cloud infrastructure as code was Vagrant, created by Mitchell Hashimoto, which presaged the rise of committable infrastructure like Docker several years later. Hashimoto and his friend Armon Dadgar went on to found HashiCorp, a billion-dollar company that orchestrates cloud infrastructure across numerous providers. I’ll leave it to your imagination what the corollary in biotech might be.
Imagine the possibilities: a biotech founder without a lab could develop a new therapeutic by writing software that orchestrates experiments across multiple contract facilities via API calls. Or a machine learning model could autonomously improve itself by conducting its own experiments. This is the promise of techbio – if we build the enabling infrastructure, and recognize that techbio means more than simply “using models, data, and automation for biology”.
Conclusion: A Call to Action
Biology in the 2020s has the data, the compute, and the engineering talent to be revolutionized much like software was in the 2000s. What’s missing is the glue – simple RESTful APIs and interoperability that allow many players to build on each other’s capabilities. The history of tech teaches us that open APIs create ecosystems, and ecosystems beat monoliths. Travel companies that exposed APIs enabled a flood of bookings they didn’t previously have; banks that opened APIs sparked countless fintech innovations. Biotech can learn from this.
Today, TechBio is failing to reach its potential partly because it’s stuck in a siloed mindset. Too many companies treat their processes as proprietary black boxes, available only through bespoke contracts or web portals, not as services others can compose. The next generation of biotech infrastructure should flip this model: every lab instrument, every cloud lab, every analysis tool – offer a public or partner-accessible API. Competition can still happen on quality, price, and unique capabilities, but with a base level of connectivity, the whole field will accelerate.
As developers and scientists, we should demand and encourage this shift. When evaluating lab vendors or software, ask: “Does it have an API?” Biotech investors and founders should recognize that the network effects and scale of platform business models in tech only came after APIs lowered the barrier to entry. If we want an “App Store” of biology or the kind of rapid pipeline iteration seen in software, we need interoperable building blocks. The likes of Benchling, Twist, Adaptyv, Aclid, and BioLM are pointing the way. It’s now up to the broader industry to follow. It’s time to open up the walled gardens and let biotech join the API economy. The next wave of innovation – and the vision of fully automated, software-driven biology – depends on it.