First in Human Episode #40 featuring Alfredo Andere

Does the convergence of tech and biology hold the key to reshaping biotech’s data infrastructure? Our recent chat with Alfredo Andere, co-founder and CEO at Latch Bio, certainly supports this notion. This episode offers a deep dive into Alfredo’s journey from being part of the Google Brain team to co-founding Latch Bio – a company that is making waves in the biotech industry with its innovative solutions and has raised a whopping $33 million.

Simon Burns: [00:00:00] Thank you for joining us on First In Human, Alfredo.

Alfredo Andere: Thank you for having me.

Simon Burns: I’ve always loved hearing your story. You guys have an interesting mix of tech and bio, a really young team, and it’s really impressive what you’ve done in such a short period of time. Take me through the journey. How did you get here? I’d love to hear more about that mix of tech and bio. You guys have interesting backgrounds.

Alfredo Andere: I’d be happy to walk you through that. Any story of my journey would be incomplete without my co-founders, Kyle Giffen and Kenny Workman. We actually went to school together. We met around freshman-sophomore year and remained friends all through school, each into our own interests.

During COVID, we actually started working together on different projects. Not a startup, just working together on many different projects. One project led to the next and to the next. At some point we were like, okay, this is getting pretty serious. We’d been working together for, I think at that point, maybe seven months to a year. We were like, what if we take this more seriously? What if we try to make a company here and build something that’s actually really useful to a lot of people?

In doing that, we realized most of the projects we’d been doing up to then were probably not super valuable. So what is valuable? Why don’t we go out there and figure out where we can add a lot of value? As we started talking to a lot of people about the problems in their work, learning about every kind of area you can imagine, one problem kept standing out to us.

And that was data infrastructure in biotech. Going a step back: that summer, I was working at Google on their Brain team, really building data infrastructure and seeing the best data infrastructure in the world. It was incredible. I had worked at Facebook before, also with incredible data infrastructure.

And what is it being used for? At the end of the day, really for optimizing advertisements, getting you to click on stuff you don’t really want. Meanwhile, Kenny comes from a bio background, had been in the wet lab since he was 14, and was interning at Asimov at the time. Asimov was actually pretty great, but he had seen the data infrastructure at other biotech companies and labs. These companies were trying to cure cancer; you recently launched Battery Bio, taking on many of these diseases: heart disease, genetic disease, global warming, and aging. The most inspiring missions you can imagine, and they were transferring data around on hard drives.

Their data infrastructure looked like it was from 20 years ago. When you talked to people, it was very clear that it was a huge problem. We knew we had to do something about it, but we didn’t know why things were the way they were. So we went and talked to over 200 people about it. We realized this was a much more massive problem than we had initially imagined, and no one was addressing it with the quality and rigor we thought it needed.

We set out to find where to start. We began interviewing companies again, but this time asked, “What can we build for you that you will pay for?” At some point we got six companies to pay us to build no-code interfaces for their pipelines. That’s how we started out as a no-code CRISPR pipeline company.

One thing led to another, and we got these six companies what they asked for. Some were happy, some were not. But as they started using it, we realized they were running our no-code pipelines, but they also wanted to build their own pipelines, put no-code interfaces on them, and launch them in the cloud. We gave them that. Then we saw they were bringing in their data from somewhere else, usually S3, Google Drive, or Dropbox, in a very non-traceable, non-versioned way. Could we build better data storage where they could keep all their data and feed it into the pipelines?

And so we built Latch Data. Once we built it, we realized people bring their data into Latch Data, then run it through our pipelines, either their own custom ones or our no-code ones, but for the end analysis they take it back to their local computer and a Jupyter notebook that is hard to configure and breaks the traceability of all that data, just because they need more custom analysis. We thought, okay, let’s build that component now. Lastly, we had these components, but you still need a database. They were using Notion or Benchling Registry.

Most of them were actually using CSVs hosted on Box. We discovered another problem: could we build one database, also hosted here, with that traceability and versioning layer and the ability to collaborate, all in the same interface in the platform? We went from a no-code interface to being able to fully replace these companies’ cloud computing platform, whether it’s AWS, Azure, or GCP, and save them 95% of the setup time to get going really fast.

Today, we have about 60-plus paying biotech customers and over 100 academic labs using Latch for free. There are 12 full-time people in the company. We’ve raised about $33 million from Lux Capital, Coatue, and others. Our usage is doubling every 6 to 12 weeks. There’s just so much to do in this space to help these inspiring companies, so we’re continuing to build.

Simon Burns: I love that. I’ve been super impressed by the software. It’s beautiful and well designed. I’ve been super impressed by the transparency: you guys have changelogs. The [00:05:00] speed: you move really fast. Even the copy is refreshing and light. It feels like you’re building a modern software company, and in a space that doesn’t have a lot of modern software companies. Take me through how you thought about the core cultural elements you needed to put in place to do that. It’s not easy; there’s a reason it hasn’t been done yet.

Alfredo Andere: Thank you so much. I can say the same about Vial and the new projects you guys are launching; I’m also curious to hear your thoughts. But for us, it’s been about hiring great people. Rather than putting huge constraints on people and on what each one has to do, most innovation just comes from having this great team that is incredibly capable, ambitious, and dedicated to a central mission; pointing them at a shared goal that everyone agrees on and stands behind; and then just letting them go, giving them the freedom and the resources to get to that goal.

Obviously, there’s pressure and inspiration to emphasize that we need to get to that goal, for many reasons: the mission, first and foremost. I am genuinely, to this day, constantly surprised by how much gets done by people who really care when they’re set toward a big hairy goal. Things I couldn’t even have imagined, heroic amounts of effort, just get done when you give people that freedom, those resources, and that direction toward a shared goal.

Simon Burns: I think we’re both believers in Elliot Hershberg’s “century of biology.” We’re entering this next era, and the next era critically needs infrastructure. You’re clearly working on that as a core thesis. Why is infrastructure so important, and what do you think it will start enabling once we have it built out?

Alfredo Andere: Shout out to Elliot. He’s awesome. If anyone hasn’t read the piece you wrote recently about Battery Bio, it’s super inspiring. But on infrastructure: what we’re seeing with biotechs, and the reason we knew the problem we were solving was really large, was that we weren’t going to companies telling them we had some new capability that would give them a side benefit. We were going to companies and asking them, “How have you solved this problem?”

Ten years ago, you had pipettes and instruments in your wet lab that could tell you the result of your experiment. The cell counter is actually really funny, because people outside of biology think of a cell counter as this complex instrument. People within biology know a cell counter is just a thing you click, like a stadium people-counter, while you count your cells through a microscope. Fast forward ten years to today, and you have an NGS experiment giving you ten million data points. You can’t count ten million of anything, right? You need a lot of compute to process it.

This is one example where companies doing NGS, which is many companies these days, were each rebuilding a solution to turn that data into interpretable results. This is the part where you wonder why the infrastructure isn’t there. We have a hundred companies all setting up the same thing, spending millions in engineering resources and spending much of their time away from their core differentiating work, the thing they’re the only company in the world able to do, so that they can spend 50% of their time doing DevOps, which literally every company has to do.

Can we build the infrastructure so that everyone can just plug into it and start from day one? Companies that do that have been missing in biology: for clinical trials, cloud computing, data infrastructure, and many other gaps we’re probably not even aware of today. Companies are reinventing the wheel, and we’ll see some really exciting companies come in to build that infrastructure once and give it to every company. I’m super excited about that.

Simon Burns: I’ve seen you talk about the shift from a lack of structured biological language into an era with biological language. It seems like a critical step to get to some version of converting biology into zeros and ones. Walk me through how you thought about that as a key metaphor and what you’re doing to help build it.

Alfredo Andere: My co-founder, Kenny, wrote this great line in our manifesto: “machine code of the biological programmer.” It’s very abstract, but it represents a vision that’s inspired us all to work on Latch. I was visiting a relatively old lab the other day, built about ten years ago, and I saw a typical cell counter, the one I was telling you about; everyone’s familiar with it, it just counts cells. Recently, I was visiting Ginkgo, which we all know: huge automation, huge throughput. The part that stood out to me the most was one of their COVID testing facilities, which was doing tens of thousands of COVID tests per day at its peak. It was my first time seeing a 1,536-well plate.

It’s beautiful. It’s tiny. There’s no way a single biologist is filling it out by hand. It’s the first time you see that need for automation at the wet-lab level. What stood out to me during the Ginkgo visit is that I did not see a single biologist holding a pipette themselves. They were mostly theorizing about experiments. They would put the plates into machines that ran all the high-throughput experiments, then transport those plates from one machine to the next. Sometimes not even that: sometimes the transportation also happened in an automated fashion.

We’re already [00:10:00] seeing this, not just within companies, but within cloud labs: Strateos and other companies that are trying to do this for everyone. They actually used to have an open Python API. They no longer have that, for many reasons, but they’re trying to bring it back. Just imagine writing a protocol in Python, sending it to Strateos, and it does the whole experiment. And so I believe in a future where not only bioinformaticians, but everyone in biology, will be defining and executing experiments through programming: Python-defined protocols. This, combined with other innovations, will bring down the cost of biology by many orders of magnitude.
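To make the idea of a Python-defined protocol concrete, here is a minimal sketch of what defining an experiment in code might look like. The `Protocol` class and every call below are invented for illustration; this is not Strateos’s or Latch’s actual API.

```python
# Hypothetical sketch of a Python-defined wet-lab protocol.
# Every name here is invented for illustration; real cloud-lab
# APIs (e.g., Strateos's) differ.

from dataclasses import dataclass, field


@dataclass
class Protocol:
    """Accumulates instructions for an automated lab to execute."""
    name: str
    steps: list = field(default_factory=list)

    def transfer(self, source: str, dest: str, volume_ul: float) -> None:
        # Move liquid between labeled wells.
        self.steps.append(("transfer", source, dest, volume_ul))

    def incubate(self, plate: str, temp_c: float, hours: float) -> None:
        # Hold a plate at a fixed temperature.
        self.steps.append(("incubate", plate, temp_c, hours))

    def read_absorbance(self, plate: str, wavelength_nm: int) -> None:
        # Measure optical density, e.g., OD600 for cell growth.
        self.steps.append(("read_absorbance", plate, wavelength_nm))


# Define a toy growth assay entirely in code.
growth_assay = Protocol(name="growth_assay_v1")
growth_assay.transfer("stock_plate/A1", "assay_plate/A1", volume_ul=50)
growth_assay.incubate("assay_plate", temp_c=37.0, hours=12)
growth_assay.read_absorbance("assay_plate", wavelength_nm=600)

# In the envisioned workflow, these steps would be serialized and
# submitted to a cloud lab for robotic execution overnight.
print(growth_assay.steps)
```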

 It reminds me a lot of the impact that open source and cloud had in startups. In 1994, you had to ask investors for permission to start a company. You needed a few million dollars for a Series A to just get your servers, your engineers, and your software business off the ground.

Ten years later, around 2004, Mark Zuckerberg started Facebook out of his Harvard dorm room with a couple thousand dollars that his friend lent him. I dream of a similar future for biology, where a college kid has an idea for a new therapeutic modality. They spend a thousand dollars to send the experiment out to a cloud lab, where it gets executed. By the time that student wakes up the next day, an ML model has iterated on the results a few hundred times and shown them there’s nothing there.

But maybe, like Mark with Facebook, that kid just happens to strike luck: they’ve found a new drug modality and are off to the races, putting it into computational mouse models before raising some money so they can send it to a Vial to run the clinical trials and put it into the clinic. I think that’s a really exciting future.

Simon Burns: Sign me up. Josh Kopelman had the line: his first company took a few million dollars, his second took $10,000, and by the third company he could get to an MVP for next to nothing. A 1,000x-plus cost reduction in just over ten years is pretty remarkable. Hopefully that happens for our field. Let’s talk about some of the tech bio companies using you. What are some case studies? How has your infrastructure been deployed in real life? I hear about it all the time, but I’m curious to hear great stories and case studies.

Alfredo Andere: There are a lot of examples, but one of my favorites is Elsie Biotechnologies, especially because they recently had a big, groundbreaking success, and their scientific journey with Latch has been pretty iconic while turning into a more repeatable model. Elsie focuses on leveraging DNA and RNA to combat diseases like ALS, Huntington’s disease, and Alzheimer’s disease. What they identified is that many RNA therapeutics were failing due to slight variations in therapeutic sequences.

They began looking to antisense oligonucleotides, or ASOs, to target these mRNA molecules and prevent the production of problematic proteins, offering ways to slow the progression of various diseases. They recognized the challenge in the RNA therapeutics field, where slight atomic changes in therapeutic sequences impacted efficacy, and employed ultra-high-throughput screening of oligonucleotides with the vision that if they screened enough of them, they could enhance potency, reduce toxicity, and optimize delivery.

They were employing, obviously, high throughput to do this. They had bioinformatics to design oligo libraries and relied on NGS, next-generation sequencing, to test gene knockdown in whatever disease models they were using. This process generated a lot of data and led to a lot of delays; their bioinformaticians were bottlenecked, and sometimes it took weeks to get results back. The scientists didn’t have instant access to the data they were generating. They came to us wanting to overcome this bottleneck in data processing by integrating Latch Bio and enabling their scientists to easily access that data and run the bioinformatics pipelines themselves.

They did, very successfully. It facilitated their library design, accelerated their barcode analysis many times over, and made their machine learning models accessible to all their scientists. This sounds pretty biased coming from me, but you can ask their CSO, Dylan, and he will rave about it himself. He has told us that the faster design and execution of experiments has taken the turnaround time for one of their computational experiments from two to four weeks down to one to two days, with an 80% reduction in NGS analysis costs: from the $2,000 it used to cost between contracting and compute down to $200.

Now, their bioinformaticians can focus on the more pressing and challenging analyses they trained a whole PhD for, instead of just running data for the scientists. All of this led to a huge acceleration of their core R&D milestones. They were able to screen more oligos [00:15:00] faster, and a few weeks ago Elsie Bio announced a partnership with GSK, which will be harnessing their discovery platform, part of which lives on Latch, to uncover new therapies with GSK’s data.

By using Latch, they were able to streamline that discovery, and they continue to pave the way for RNA therapeutics. We’re super excited about our partnership with them and continue to do a lot of work together.

Simon Burns: I’d love to talk about some of the challenges you’ve faced and key lessons learned along the way building a tech bio company. Sometimes it comes down to execution risk, sometimes to strategy. Break down some of the key lessons you’ve learned across those two.

Alfredo Andere: One that stands out, because of recent successes we’ve been having in that area, is learning to align our final goal, our most important metric, with our users’ success. A failure point we had around this in the past was measuring revenue as the credits customers bought. For context, the way Latch works is that you buy credits; think of an arcade, or of Snowflake and AWS. You buy credits, then go into the platform and use those credits to run workflows, analyze data, store data, and use some other features.

We were focused on selling credits. It was going relatively well: sporadic and not too predictable, but growing. It got to the point where we were selling lots of credits, and we were on track to hit a large number we were looking for. But it was not translating to credits used, or to time spent on the platform. Most importantly, it was not translating to scientific insights for our customers on the platform.

This was a huge problem. It felt nice to sell a lot of credits, but if people weren’t using them, it meant our product wasn’t solving the problem; we were just charismatic and good at selling. And so, in what turned out to be a very painful decision at the time, we took this large goal we had and decided: from now on, revenue is still our final north star, but we only count revenue after a customer has actually spent the credit, not when they initially bought it.
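To illustrate the metric change, here is a minimal sketch; the ledger shape, customer names, and numbers are all invented for illustration and are not Latch’s actual billing system.

```python
# Minimal sketch of the two revenue metrics described above:
# "booked" counts credits when sold; "recognized" counts them
# only when spent. All data here is invented for illustration.

purchases = [  # (customer, credits_bought, dollars_per_credit)
    ("acme_bio", 10_000, 1.00),
    ("helix_rx", 5_000, 1.00),
]

consumption = [  # (customer, credits_spent_so_far)
    ("acme_bio", 600),
    ("helix_rx", 150),
]

# Old metric: revenue at the moment credits are sold.
booked = sum(credits * price for _, credits, price in purchases)

# New metric: revenue only as customers actually spend credits.
price_per_credit = {cust: price for cust, _, price in purchases}
recognized = sum(price_per_credit[cust] * spent for cust, spent in consumption)

print(f"booked:     ${booked:,.0f}")      # $15,000
print(f"recognized: ${recognized:,.0f}")  # $750, about a twentieth of booked
```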

This was painful because our usage was about a twentieth of what we were selling at the time, which meant our measured revenue went down by 20x. Everyone was a bit disappointed; our numbers were not as good as we thought. And we were like, it doesn’t matter that it’s small.

It is very small, but we’re just going to focus on doubling it every six weeks. Our sprints are currently six weeks. We don’t care about 18 weeks, we don’t care about 12 weeks, we don’t care about a year; just for the next six weeks, this small number has to double. It has hugely aligned our incentives with our customers’, and it has made people in the company go deeper than ever into our customers’ science.

We now understand our top ten customers’ science so well that we’re in the room helping them with their bioinformatics. We’re brainstorming with them on what to try next. We’re building nitty-gritty bioinformatics tooling for them, and we’re building a product that genuinely solves their problem, because if they don’t use it, we get no reward.

Over the last 30 weeks, our credits spent have actually been doubling every six weeks. With exponentials, that’s five doublings, roughly 32x, which means that three weeks ago the new number surpassed our old revenue goal, and it’s continuing to 1.5x to 2x every six weeks in a repeatable and healthy way. That’s been a huge recent learning that we’ve been iterating and retroing on, because it was a big one for us.

Simon Burns: Let’s talk about five years out. Every day on Twitter there’s a new AI diffusion model for biology here, some new breakthrough there. The pace is only quickening, and lab automation, which you talked about, is going from design into implementation mode. Where are we going five years out in tech bio, and what gets you most excited?

Alfredo Andere: There are two branches here in terms of Latch five years out. It’s very clear to me: we’re going to replace AWS, Azure, and GCP for biology. When a biologist or a bioinformatician thinks of the cloud five years from now, they will think of Latch, not AWS. That, in our minds, will save biotechs 98% of the setup time, which they can instead focus on their differentiating science.

That’s the vision for Latch itself and where I see us going. But I want to talk a bit more about the larger vision for the tech bio space, because I think that’s really interesting. I do want to preface by saying that I think AI is currently overhyped and might crash soon. It might not, but it might, because there’s a lot of money going toward it. I’m also thinking of five or ten year timelines here, many cycles down the road.

I believe the future of biology will be high-throughput, irrational drug design. For context: from the 1950s to the 1980s, we had [00:20:00] Pfizer, Merck, and others identifying targets and then screening them against thousands to tens of thousands of natural candidates. Merck would famously pay for part of their employees’ trips if they brought back a dirt sample from wherever they went. They would give them special vials and ask them to bring back samples, because then they could screen the new stuff found in the dirt against those targets.

Then around the 1990s came Regeneron and Vertex with their ideas, crazy at the time, to make drug design rational, using techniques such as crystallography, genetics, and NMR to design the molecule around the target instead of doing high-throughput screening. That has worked massively. Vertex is now a top-20 pharma, Regeneron is too, and so are many other companies. That’s been going on for the past 30 years with massive success.

I believe there’s a new shift happening, with all these trends pointing toward millions of data points generated through highly multiplexed biological experiments and interpreted through trillion-parameter, general function approximators. That is what the future of discovering new therapeutics and biological modalities will be. I used to think the holy grail of biology would be the ability to simulate an organism, a mouse or a human, down to the atomic accuracy of each cell, then test compounds against it virtually through a perfect simulation.

I don’t think we will ever get there. The equivalent will be to teach an AI to create a compression of our high-throughput data through a model that is not human-interpretable but that we can ask questions of, such as testing thousands or millions of compounds against a specific target and answering which one we should take to the clinic. It will work almost automatically, every time. That’s where I see the future of biology going. I think this is far off; we have a lot of data problems to solve first. But it’s a pretty exciting future.

Simon Burns: I can’t agree with you more. With that, Alfredo, I’m a huge fan of what you guys are up to. Thanks for taking the time. 

Alfredo Andere: Likewise. Thank you so much for the invite.
