First in Human Episode #31 featuring Ganesh Padmanabhan

Ganesh Padmanabhan's perspective on first in human trials.

For episode 31, we sit down with Ganesh Padmanabhan, CEO at Autonomize AI. Find out how Autonomize AI is working to better contextualize healthcare data by applying human-centric artificial intelligence to the problem. First In Human is a biotech-focused podcast that interviews industry leaders and investors to learn about their journey to in-human clinical trials. Presented by Vial, a tech-enabled CRO, and hosted by Simon Burns, CEO & Co-Founder, with guest host and Co-Founder Andrew Brackin. Episodes launch weekly on Tuesdays.

Andrew Brackin: [00:00:00] This is First In Human. I’m Andrew Brackin, the co-founder of Vial. Vial is a tech-enabled CRO built for small- to medium-sized biotechs, offering faster and more efficient trials. Today we’re here with Ganesh Padmanabhan, the CEO of Autonomize AI. How are you doing, Ganesh?

Ganesh Padmanabhan: I’m doing fantastic, Andrew. Thanks for inviting me and thanks for having me on the show.

Andrew Brackin: I’m thrilled to be digging into your business today and talking about AI in healthcare and clinical trials. It’s such a massive topic right now. I’ve actually heard a lot less about AI in clinical trials, so I think it’ll be interesting to learn about what you’re building. Why don’t you tell us what the business does, and what inspired you to start it?

Ganesh Padmanabhan: Thank you again for inviting me and for asking about my story and Autonomize’s origin story as well. I’ve been in technology all my life. I spent 15 years in big tech, at Intel and Dell, where I ran several data and AI infrastructure related businesses. After my corporate career, I started a few companies, mostly in AI. An earlier one was an AI explainability platform that was sold to an enterprise AI company. I then left to co-found a company called Molecular, which was about aggregating data to enable better AI experiences with data.

After a change of control and an exit late in 2020, I was looking at what to do next with my life. That was the year of the pandemic; the whole world was turning upside down. On one hand, there was a realization that, look, I’ve spent all my life helping people shop more and search more with AI. I can do something a little more meaningful than that. I see people like you trying to make a difference in people’s lives and in humanity. I also fundamentally believe in the promise of it.

The technology world is moving incredibly fast. Applying that technology to improve the human condition is probably the ultimate calling for any technologist. That was the origin story. After digging in over the last few years, we finally launched the company early-to-mid last year. As we dug into the space, we came to understand a few things about healthcare that make it very different from other industries.

Number one, healthcare is a very services-heavy industry: a heavily human-centric, knowledge-worker-driven industry. As a result, most of the data generated is produced by human beings for consumption by other human beings. That caused the silos and the fact that it all sits in different pockets and so forth. The data is incredibly unstructured. Reasoning over it, asking questions and getting answers from it, is a human effort. You can have all the algorithms you want, but the data is not prepared for those algorithms.

Lastly, healthcare is a highly contextual industry. Take the same piece of data, say a doctor’s note for a patient: how a clinician views and reviews it to make clinical decision support choices for the patient is an entirely different context from how a clinical trial coordinator reviews that same note to select the patient for a trial. How do you contextualize that data?

That was the origin story for Autonomize. We launched this company with a very simple objective: help healthcare innovators work on healthcare data faster and better, delivering contextual experiences that put patients at the center. We are building a technology stack that works on multi-structured, multi-modal healthcare data to deliver winning patient experiences.

Andrew Brackin: What does that look like today? It’s such an exciting opportunity, but I feel like there are so many problems. You make a great point about all the unstructured data in healthcare; there are so many areas you could tap into. What does the V1 look like? What are customers using you for?

Ganesh Padmanabhan: We didn’t go about building the full platform when we started. We wanted to go into a place where we had reasonable confidence that a slight tweak could deliver an outsized outcome. We stumbled upon clinical trials, to be honest with you. We’ve all been on the periphery of clinical trials, and you realize the experience around them is terrible for patients and for everyone else involved. Where we started was: let’s look at the clinical development process, or the clinical trial process, and see what parts of that equation we can affect to empower innovators to do better. We started with the longest part of the journey, which is patient recruitment and retention for a trial.

What we noticed there were a few different challenges. One was definitely the availability of patients and finding them where they are. But once you identify, at a high level, that you have a few patients in these geographies and these sites, there was an inordinate amount of time being spent reviewing their data across multiple silos: lab reports, EHR data, patient-reported outcomes, call center logs, everything, to identify the fit of a patient for a trial.

We decided to take that on. We got our first customer, a leading, fast-growing CRO. They were running a large oncology trial, looking at 40,000 patients, from which they were going to select fewer than 500. They had 40,000 patients across 20 different sites that they had access to. They would get all that data in PDF form through a health information exchange, put it into a data lake, and put a secure PDF viewer on top of it.

And they’re one of the better ones, to give you an idea. They would have these PIs and CRAs review all of that in detail to select the patients that met all 20 inclusion and 50 exclusion criteria. What we did was apply natural language processing and our large language models to it. We turned the [00:05:00] unstructured EMR and lab data into contextualized information that could actually be matched against the clinical trial criteria.
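The matching step described above can be sketched minimally. This is a hypothetical illustration, not Autonomize’s actual system: it assumes an NLP/LLM pipeline has already extracted normalized facts from the unstructured EMR and lab text, and all criteria and field names here are invented for the example.

```python
# Hypothetical sketch: match extracted patient facts against trial criteria.
# The extraction of these facts from unstructured EMR text would be done by
# an NLP/LLM step; here we start from already-normalized terms.

def check_eligibility(facts, inclusion, exclusion):
    """Return (eligible, unmet_inclusion, hit_exclusion) for one patient.

    facts: set of normalized findings extracted from the chart.
    inclusion/exclusion: sets of normalized criteria terms.
    """
    unmet = inclusion - facts   # inclusion criteria with no supporting evidence
    hits = exclusion & facts    # exclusion criteria present in the chart
    return (not unmet and not hits), unmet, hits

# Invented example patient and simplified criteria for one oncology trial.
patient = {"age>=18", "stage_iii_nsclc", "ecog<=1", "prior_chemo"}
inclusion = {"age>=18", "stage_iii_nsclc", "ecog<=1"}
exclusion = {"active_brain_mets", "pregnancy"}

eligible, unmet, hits = check_eligibility(patient, inclusion, exclusion)
print(eligible)  # True: every inclusion criterion met, no exclusion hit
```

In a real screening workflow each fact would carry a provenance link back to the source document, so a CRA can verify the evidence rather than trust the match blindly.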

We ended the first year with several early customers who helped us shape that further. Since then, we have extended into two sides of that same equation. Prior to launching the trial, we found the design of the protocol was a choke point. What we do is apply the same language models and NLP to understand the impact of the inclusion/exclusion criteria on patient and site burden, so you can drive better site feasibility studies but also help simplify the protocol.

On the other side of the spectrum, we also noticed everybody was struggling with patient engagement. Whether you’re recruiting for a trial or looking for a trial as a patient, the average experience today is: you go on a website, look at 300 radio buttons to select which conditions you have, click submit, and hope to get a call in the next two days. Then you go back and forth, and three weeks later you’re referred to a site. We want to change that to something more conversational, contextual, and patient-oriented. We’re working with some early customers on that front, too.

Andrew Brackin: Obviously the last six months in AI have been pretty game changing and shifted the whole conversation, right? The producer of the podcast and I were talking before this, and we use LLMs (Large Language Models) now to help us think about podcast questions. We use ChatGPT on a daily basis for parts of our job. It’s just been a massive shift. It’s now one of the top 10 websites in the world. 

How have those models changed your plans? From talking to AI founders, it sounds like a lot of companies have thrown out some of their older models and technology and replaced them with this new generation of models we’re seeing from OpenAI and other companies. How has that shifted your plans and the way you’re building your technology?

Ganesh Padmanabhan: We had the advantage of only starting last year. This is one of those places where everybody says, oh, we should have started an AI company five years ago. And the answer is no, absolutely not, because you would have developed enough technical debt that you couldn’t unwind from it. When we were launching early last year, from a technology perspective, we had folks who wrote the most highly performant clinical LLM, built along with Nvidia, called Megatron; the lead author for that is on my team. There are folks who have been in the biomedical space at large companies like Novartis and Merck, who worked on early versions of fine-tuning transformer models for clinical and biomedical information. They’re part of the team, so we were already thinking about this space.

When GPT-2 was launched early March of last year, we really knew that, okay, there is a generative portion here we have to pay really close attention to. We spent all of last year identifying problems, experimenting rapidly, and trying to see what solves a problem and what doesn’t. There are a lot of things that we threw away in that process. We were like, “oh, we should try this.” People said, “oh, you’re going to go after pharma, so you should actually do safety and pharmacovigilance.” But then when we tried to contextualize it, the models we built didn’t make sense, because that’s a very narrow model for doing one specific thing.

We were lucky, in that regard, that we had already set our architecture. Initially, we were using language models. What has changed with this wide variety of updates over the last four months is a few things.

Number one, it really helped us understand the paradigm of human-machine interaction. ChatGPT was not, and still is not, the best-performing machine learning model out there. But they cracked the nut on capturing people’s imagination, getting them to engage with intelligent systems and get answers.

Obviously, most people are still using ChatGPT more as a search engine than for what it can do in terms of reasoning, prompt chaining, and things like that. So that’s one thing we need to pay attention to. I’ll give you an example of how we played that out in product. We have an early customer that is an insurance company, a healthcare payer. They were using us for clinical data review for prior authorization. This was always the tussle in AI: it was never a technology problem; AI was always an adoption problem. If you have a medical practitioner looking at this, and machine learning engineers who have figured out what insights to throw up on the screen, the practitioner will say, “I know what to ask of the data. Why are you telling me things that you’ve packaged up for me?”

We added a question-answering module where they could talk to the document, talk to all the clinical data about that patient, and ask all the questions they had. Our adoption in their enterprise went through the roof. So the human-machine interaction paradigm is one.

The second thing we realized is that this is fundamentally going to open up the pace of innovation. If you look at how software systems have historically been built, you always go from somebody describing the problem, to writing code for it, to developing the models further down. The API for an LLM is human language. What this unlocked for us was much faster progress. If you want to chain a bunch of different LLMs to perform a task, instead of taking the output, normalizing it, converting it to code, and feeding it to the next model, you can now have language as the interface between systems and humans, humans and humans, and systems and systems. That was a huge unlock. It helps you innovate and experiment faster, and that’s the name of the game.
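The language-as-the-interface idea above can be sketched as a chain where each stage consumes and produces plain text. The `summarize` and `extract_question` functions below are stand-ins for real LLM calls; their behavior and names are invented purely for illustration.

```python
# Hypothetical sketch: chaining "LLM" stages whose only interface is
# natural-language text. Each stage's string output becomes the next
# stage's input, with no schemas or code-level glue in between.

def summarize(note: str) -> str:
    # Stand-in for an LLM summarization call over a chart note.
    return "Summary: " + note.split(".")[0] + "."

def extract_question(summary: str) -> str:
    # Stand-in for a second LLM that turns a summary into a follow-up query.
    return f"Does this record mention prior chemotherapy? Context: {summary}"

def pipeline(note: str) -> str:
    # Text in, text out: the chain is just function composition over strings.
    return extract_question(summarize(note))

out = pipeline("Patient received two cycles of chemotherapy. Tolerated well.")
print(out)
```

The point of the sketch is the shape, not the stubs: swapping a stage means swapping a prompt, not rewriting the serialization code between two models.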

Lastly, we are not trying to be the best LLM [00:10:00] manufacturer out there. We have our own LLMs that we fine-tuned. I fundamentally think the future is not large models that are inefficient for doing small tasks, but LLMs that are multitask-capable yet narrowly optimized for a particular proprietary data set. One thing we learned in this process is that for applications in healthcare, you need to build a fabric of trust, privacy, and security around the models. So we doubled down on that, and that’s where we are heavily focused.

Andrew Brackin: That’s a great point I wanted to touch on, so thanks for bringing us there. It’s obviously a huge concern for healthcare companies, patients, and physicians. If you are using any of these hosted models, like OpenAI’s GPT-4 or GPT-3, there’s a huge concern that your data is being used to inform and improve the model. There’s this huge potential vulnerability around data and privacy. How are you addressing that with your business? How are you getting customers comfortable with the idea of feeding this data into the model?

Ganesh Padmanabhan: In a couple of different ways. It’s not just the fact that if you use GPT-4 to summarize a patient chart, you are training the model with that data. The process also creates these things called embeddings, which retain the memory of that patient and any PHI. It may do that, and it doesn’t always, but that is the bigger danger: somebody querying the model for something else will just see it pop up: hey, take a look at Andrew’s medical report, from his pathology report, as a sample. Wait, what happened? Is it real? That’s the bigger security hole in this whole process. But we’re addressing it in a number of different ways.

Number one, we don’t use ChatGPT. Hosted models that are black boxes are not what we use in our platform. We have self-hosted models that we run in secure, HIPAA-compliant Amazon and Azure cloud environments, and soon on GCP as well. We have the flexibility of bringing the models to the data, so technically we can deploy behind our customer’s firewall so that the data never leaves their cloud environment. That’s one way we are doing it.

The second thing is we have innovated around privacy filters. If you’re a healthcare organization that is looking at using ChatGPT and you’re really convinced it’s going to help you, talk to me, because we can put a privacy filter between ChatGPT and your data. There are a bunch of different tools in this process to do that.
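A toy version of such a privacy filter might scrub obvious PHI patterns before any text leaves the secure environment. Production filters would use clinical named-entity recognition rather than regexes, and everything in this sketch, the patterns, labels, and sample note, is hypothetical:

```python
import re

# Hypothetical privacy filter: redact easily recognizable PHI patterns
# from a chart note before it is sent to an externally hosted model.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.]+@[\w.]+\.\w+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace each matched PHI span with a bracketed category tag."""
    for label, pat in PATTERNS.items():
        text = pat.sub(f"[{label}]", text)
    return text

note = "Seen 03/14/2023. SSN 123-45-6789, call 512-555-0199 or pt@example.com."
clean = scrub(note)
print(clean)  # identifiers replaced with [DATE], [SSN], [PHONE], [EMAIL] tags
```

Note the SSN pattern runs before the phone pattern, so the narrower `ddd-dd-dddd` shape is tagged correctly instead of being half-consumed by the phone rule; ordering matters in any layered redaction scheme.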

The one thing people fail to understand is that the model is like a living organism. You have to figure out and design the human-machine collaborative loop into it. OpenAI does it for ChatGPT with reinforcement learning from human feedback; Cohere does it from a different perspective. How do you give the healthcare organization a toolkit to define their own policy on how their interaction with the model is going to work?

Those are all the different areas we are focused on. But in most cases, for those concerned about privacy, especially with clinical data, we just deploy our system behind the firewall. We push the whole thing in there.

Andrew Brackin: I wonder how many inbound emails you’re going to get after this podcast. Healthcare leaders are going to be listening to this like, I want that, I want an LLM for my dataset.

Ganesh Padmanabhan: Thank you, Andrew. I appreciate it, and please bring it on. Send me a note at ganesh@autonomize.ai.

Healthcare has always been about patient centricity, making sure you’re doing things for the patient. Somewhere along the line, it became about health tech, technology, and all those different things. Powerful tools like LLMs should not compromise what you were originally focused on doing. Shaving 30 minutes off a nurse’s shift is amazing for a hospital system, but the trade-off for that should not be patient privacy, with patient data being sent to a public cloud location. We are building a series of tools that let you leverage the latest, greatest, most innovative solutions while keeping your patient-centric design principles in mind.

Andrew Brackin: I haven’t seen that many competitors focused on this problem yet, mostly because, like you, I know how challenging it is to build in this space. You are dealing with an incredible amount of paperwork, and I think people from outside of healthcare don’t get that excited about solving these problems, while people inside healthcare dream of having better technology to solve them. I imagine you’re going to have competitors going after similar problems. How do you think about building a moat in this business?

Ganesh Padmanabhan: Two things really drive us in this process; we’re incredibly mission-focused. Number one is patient centricity: put the patient at the center of the system, whatever we need to do. Whatever use cases we deliver, whatever dataset we touch, it’s all in the service of the patient. That is a very good guiding framework because, honestly, it allows us to choose the things we want to do that adhere to that vision. As you know, the biggest risk for an early company is going all over the place.

Number two is, we believe generative AI is transformative for every industry, especially for healthcare. Layering trust around it is a big focus of what we are trying to do. It’s going to be trusted generative AI. That’s what we want to be known for in the foreseeable future: patient-centric, trusted, generative AI.

Lastly, plan A (and plan B is: just look at plan A) is to go build a customer moat. We’ve been incredibly lucky in our very short (a little over a year) existence. We got early customers, several large Fortune 500 companies in healthcare, who are leveraging our capabilities, exploring them, and [00:15:00] piloting them in different forms. We’ve been incredibly lucky, and we intend to continue to double down on that momentum. Don’t focus on the competition; go to the true north, which is the patient and, in this case, the customer. We’re just going to drive through that.

Andrew Brackin: That’s great. Final question here. It sounds like you’ve been involved in a number of very successful businesses, and you’re at it again in a challenging time in these markets. What’s your advice to entrepreneurs building right now in healthcare? A year in, I imagine you’ve got a ways to go to build a massive business. What is your advice for other people who want to do that and follow your path?

Ganesh Padmanabhan: If you’re already building, just keep building. Keep focusing. Turbulent times like this on the funding scene, which are also exciting times on the technology scene, are just super weird, right? In this environment we live in, there is this rational exuberance and excitement about technology and what it can do, while at the same time we are worried about the economy, the funding rounds, the macro situation we’re in, and so forth–

Andrew Brackin: — AI is moving so quickly that investors are confused about what to do. We’re seeing so much change so quickly.

Ganesh Padmanabhan: In that scenario, the only true north I would look for is to focus on the customer. You’ve started companies before, with Vial and before that too, and so have I. Being obsessed with the problem and the customer is going to see you through this process.

Last year when we started, we had this wide blank slate and were saying, we’re going to do so many different things here; let’s see what really sticks. But I also didn’t personally want to go raise a boatload of money like we’ve done with some earlier ventures. I was going into an industry I wasn’t very familiar with; my co-founder comes from a health tech background. We wanted to build conviction for ourselves that there are problems worth solving that somebody is willing to pay for, problems that can then generate outsized returns for everybody.

Once we validated that, and I’m assuming this comes out after our fundraising announcement happens, that’s when we went and raised our first venture-backed financing round. We’re excited to have early investors like Asset Management Ventures, Loop VC, ATX Venture Partners, several angels, and high-net-worth pharma executives, if you will.

We live in incredible times. There’s never been a more positive outlook, because if you believe in technology as religion, as Balaji (Srinivasan) wrote a long time ago, if you believe in technology driving the greater good, that everybody gets a better quality of life over time, you will just look back and say, shit, that was hard, but just focus on the true north, focus on the customer, and you’ll see it through to the other side.

Andrew Brackin: Ganesh, thanks so much for the time today. This was an incredibly interesting discussion and so cool to dig in on how AI can really empower a next generation of healthcare tools. Good luck with the business.

Ganesh Padmanabhan: Thank you so much, Andrew. I look forward to the announcement when you launch it, so that’s awesome.
