Foundry founder and CEO Jared Quincy Davis joins Asking For A Trend to give insight into the future of AI infrastructure and Foundry’s role in it.

Davis argues that the GPU is one of the most important commodities in AI and that the AI sector is currently experiencing a GPU underutilization issue. Demand for Foundry’s services and the extent of AI growth are “really tremendous” and underestimated, Davis adds.

The Foundry CEO points to compound AI systems as evidence of this underestimation: “[It] hasn’t been priced in yet, but we’re seeing demand from all sectors, from enterprises who are finding in many cases productive use cases for AI that they’re putting in production now more and more, but also from startups, both application companies using AI for end applications.”

For more expert insight and the latest market action, click here to watch this full episode of Asking for a Trend.

This post was written by Nicholas Jacobino

Well, the boom in artificial intelligence is setting off a race for compute. NVIDIA alone saw data center revenue in the first quarter grow nearly 430% from last year.

With this comes a focus on the expanding demand for graphics processing units.

And while supply has been a concern, our next guest points to other key factors to consider as well.

Foundry founder and CEO Jared Quincy Davis joins us now with more. Jared,

so describing your company, I thought it was interesting.

I saw, um, I think it was actually one of your backers or investors, kind of talk about how in this world of AI, they said, um, GPU is king, right?

And how they described it is that what you’re all doing is sort of making technology to make that very scarce, competed-for resource more widely available.

Is that a good way to think about it?

Yeah, I think it’s a phenomenal way to put it, and part of the way we think about it is that the GPU is arguably one of the most important commodities in all of capitalism.

Now, everyone wants to get their hands on one.

It’s one of the biggest areas of spend for basically every major company. You know, Google spends more on compute, specifically, than they spend on people costs.

OpenAI spends $4 to $5, very conservatively, on compute for every dollar on people, and typical startups are now spending $2 to $3 on compute for every dollar on people.

So it’s a major area of spend. That being said, I think people still don’t use the chips very, very well.

Um, there’s actually a lot that you can do to map your workloads onto the chips a lot more efficiently, and get a lot more out of them.

Yeah, there’s been a lot of talk of GPU supply shortages, but I’d argue that rather than an undersupply issue, we actually have more of an underutilization issue.

So part of what Foundry does is address that: we manage our resources really well. We’re kind of a scheduling company, so to speak.

But I’m guessing the technology is actually pretty complex, I would think.

It’s kind of like Tetris, you know. You have a bunch of workloads coming in; they’re like blocks.

They have a shape, and you’re basically trying to fit them together really efficiently and minimize gaps or waste.

Um, and so it’s kind of a variant of a bin packing problem, or a scheduling problem, but with a few hacks that we’ve introduced, um, that allow us to do this really, really well.
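Foundry’s actual scheduler isn’t public, but the bin-packing framing Davis describes can be sketched with the classic first-fit-decreasing heuristic: sort workloads largest-first, then place each into the first machine with room. The job sizes and capacity below are hypothetical illustrations, not Foundry data.

```python
def first_fit_decreasing(jobs, capacity):
    """Pack workloads (sizes in hypothetical GPU-hours) onto machines
    of fixed capacity, minimizing wasted space. Returns one list of
    job sizes per machine used."""
    machines = []
    for job in sorted(jobs, reverse=True):  # biggest blocks first
        for m in machines:
            if sum(m) + job <= capacity:    # first machine with room
                m.append(job)
                break
        else:
            machines.append([job])          # no room anywhere: new machine
    return machines

# Seven workloads, machines with capacity 10: packs into 3 full machines.
machines = first_fit_decreasing([4, 8, 1, 2, 5, 7, 3], capacity=10)
```

In this example the heuristic packs 30 GPU-hours of work into three machines with zero waste; a naive first-come-first-served placement of the same jobs can need four.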

And what’s demand like right now for the service you’re offering, and where’s the demand coming from?

Yeah, demand is, I think, really tremendous.

I think people are still underestimating both the magnitude of the demand for these AI chips and the extent of its growth, and we can talk about that.

There’s this interesting trend called compound AI systems.

I think it’s fueling just tremendous growth that hasn’t been priced in yet.

But we’re seeing demand from all sectors: from enterprises, who are in many cases finding productive use cases for AI that they’re putting into production more and more, but also from startups, both application companies using AI for end applications, and research and development companies building better base models, the foundation models that other people will use in their applications.
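The "compound AI systems" trend Davis mentions refers to applications built from multiple coordinated model calls rather than a single one, which multiplies compute demand per user request. A minimal sketch, using stub functions as stand-ins for real model endpoints (all names here are hypothetical, not any specific product’s API):

```python
# Stubs standing in for calls to real models; each string records the
# transformation applied, so the pipeline's structure is visible.
def draft_model(prompt):
    return f"draft({prompt})"

def critique_model(draft):
    return f"critique({draft})"

def refine_model(draft, critique):
    return f"refined({draft}, {critique})"

def compound_pipeline(prompt):
    """One user request fans out into three model calls:
    draft an answer, critique it, then refine using the critique."""
    d = draft_model(prompt)
    c = critique_model(d)
    return refine_model(d, c)

result = compound_pipeline("q")
```

Even this small example triples the model invocations per request, which is why compound systems are a demand multiplier for GPU capacity beyond what single-model usage alone suggests.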

Yeah. So, you know, many, many members of our team have a bit of an interesting background.

We worked at the intersection of a few fields that are usually distinct, you know, one being deep learning research.

And so I was at DeepMind previously, on a team called the core deep learning team.

And DeepMind is now the division of Google, um, that’s leading a lot of Google’s frontier AI efforts.

I, and many other members of our team, also did our PhDs in systems, which is typically distinct from AI.

These are almost different tribes; they don’t mix. We did our PhDs in systems under a great PI named Matei Zaharia, who is also a cofounder and the CTO of Databricks.

And so we’ve been thinking about these esoteric questions around how to map workloads onto compute really well for a long time.

Um And then also a lot of us have a finance background.

So in between undergrad and my PhD, I worked, for example, at a private equity firm called KKR, um, and learned a lot about data centers and a lot about software.

And so I think we have a bit of a funny perspective at the intersection of finance, systems, and ML, which allowed us to think in a bit of a new way about how the economics of compute would evolve.
