10 PRINT "ANTHROPIC + MICROSOFT + NVIDIA = MORE COMPUTE, COGNITION, AND CHOICE."
Transcript
Satya Nadella: Super excited to be here with Dario and Jensen. What both of you are doing, NVIDIA at the silicon layer and Anthropic at the cognition layer, is shaping what every developer and every organization will be able to build on going forward. On our side, we have been scaling NVIDIA across this fungible Azure fleet at the speed of light, something that Jensen's been talking to me about for multiple years now. And we're continuing to set the pace, including with the AI superfactory we announced just last week, and we've been deepening our partnership with Anthropic, incorporating their models across the entire Copilot family. And today, I'm excited to share that we are taking another step. We're increasingly going to be customers of each other: we will use Anthropic models, they will use our infrastructure, and we'll go to market together to help our customers realize the value of AI. This announcement encompasses four key things. First, customers of Microsoft Foundry will be able to access Anthropic's Claude models. Second, we are engineering access for Claude across our Copilot family. Third, Anthropic is committing to Azure capacity. And finally, NVIDIA and Anthropic are also establishing a partnership to support Anthropic's future growth. For us, this is all about deepening our commitment to bringing the best infrastructure, model choice, and applications to our customers. And of course, this all builds on the partnership we have with OpenAI, which remains a critical partner for Microsoft and our customers and provides more innovation and choice. With that, Dario, why don't you tell us a little more about what this partnership means to Anthropic.
Dario Amodei: Yes. In terms of Microsoft, as you mentioned, Satya, both of us believe in choice, and we're excited to bring our models as a choice to Microsoft Azure. Anthropic will be the first model that is available on all three of the biggest clouds. Second, Microsoft has a reputation as a strong enterprise company, and Anthropic does as well, so we have the opportunity to work together, to go to market together, and to provide intelligence to the world together. We're excited to accelerate the diffusion of this technology as the technology continues to improve. And finally, we're very excited to get additional capacity on Azure, which we can use both to train our models, to support Microsoft first-party products, and to sell together. And that brings me to the NVIDIA part of it. We are very excited to add substantial use of NVIDIA's accelerators by Anthropic. NVIDIA has led the way in this field in many ways and has helped make this entire AI boom possible. We think this is just the beginning of a very long partnership. We are excited to work together to co-optimize models, starting with Blackwell and then moving on to Vera Rubin. We're excited to announce up to a gigawatt of capacity, and that's just for now, that's just where we're starting. So we're very excited to continue the co-optimization and to further build out NVIDIA's already incredible ecosystem.

Satya Nadella: That's fantastic. Well said, Dario. One of our core beliefs is that you can't make progress in just one layer of the stack. You have to advance every layer, silicon, systems, models, applications, while optimizing effectively for all the things that customers care about: COGS, latency, performance.
And one of the things that we're also establishing today is this new partnership, as you described, between NVIDIA and Anthropic. So Jensen, maybe you should talk a little bit about what you all are excited about.

Jensen Huang: Thanks, Satya. NVIDIA's DNA is to build the most advanced computing systems in the world and to accelerate the most challenging workloads in the world on the most important platforms in the world. This conference call right here embodies that very thing. This is a dream come true for us. We've admired the work of Anthropic and Dario for a long time, and this is the first time we are going to deeply partner with Anthropic to accelerate Claude. I can't wait to go accelerate Claude. The work that Anthropic has done, the seminal work in AI safety, the advances of Claude Code. The engineers of NVIDIA love Claude Code. The fact that it can go in and literally refactor your code for you is a pretty amazing thing. And the work on MCP, the Model Context Protocol, has completely revolutionized the agentic AI landscape. The contributions of Anthropic, the advanced research done there, the incredible researchers, the incredible infrastructure team that makes it possible to scale up to what you have already done, it's really quite phenomenal. And now your business is on a rocket ship; it's just scaling so incredibly. I can't wait to go accelerate Claude on Grace Blackwell with NVLink. I'm really hoping for an order-of-magnitude speedup, and that's going to help you scale even faster, drive down token economics, and really make it possible for us to spread AI everywhere. So I'm really super excited about that. Now, the work that we've done with Microsoft over the years, Satya, is broad and deep. I mean, it's incredible.
All the things that we do: the work we've done to shift left all of our engineering so that the moment we have new technology, it appears on Azure. Notice the scale we've already achieved with Grace Blackwell, GB200 and GB300, the number of systems that are already out there helping researchers pioneer the next frontier in AI. Really fantastic work there. We do everything from data processing to search to image recognition to fraud detection, all kinds of stuff, from classical computing to classical ML to generative AI to agentic AI; the work we're doing spans the entire range of technology. But what's really incredible, of course, is that Microsoft has the world's best enterprise go-to-market. This is the next giant frontier: enterprise and industrial AI. That's where, as you know, the vast majority of the world's economy is. And in order for us to get to every single enterprise, that enterprise go-to-market takes decades to build up. It's not one of those things where just because you put it on the cloud, you're going to be able to serve the world's enterprises. The enterprise go-to-market is very complicated, and this is where the two of us have such great harmony, because NVIDIA's computing is in every enterprise, in every single country. Now this partnership of the three of us will be able to bring AI, bring Claude, to every enterprise, to every industry around the world. So this is a really exciting time. I'll close with this: as an industry, we really need to move beyond any type of zero-sum narrative or winner-take-all hype. What's required now is the hard work of building broad, durable capabilities together so that this technology can deliver real, tangible local success for every country, every sector, and every customer. The opportunity is simply too big to approach any other way.

Satya Nadella: Jensen and Dario, any last thoughts on this?
Dario Amodei: I'm just excited to take a large number of chips and use them to serve our mutual enterprise customers together, to make the smartest models, the models that run the fastest for the lowest possible cost, Jensen.

Jensen Huang: You know, I think the world is just barely realizing where we are in the AI journey. We're seeing three scaling laws happening at the same time. Pretraining is still scaling incredibly well. Post-training: the more compute you give it, the smarter the AI. And then of course inference-time scaling, test-time scaling: the more the AI thinks, the higher the quality of the answer. So we're now at a point where it is very clear that the more compute we give it, the more cost-effective compute we give it, the smarter the tokens and the smarter the AI is going to be. And the smarter the AI, the more adoption, both in the new applications that integrate these AI APIs and in how frequently you use them. The quality of these AI models has really reached an inflection point. So we've got these two simultaneous exponentials increasing compute demand, and I guess the thing that's really great is that they're going to need a lot more Azure compute resources and a lot more GPUs. We're just delighted to partner with you, Dario, to bring AI to the world.

Satya Nadella: Absolutely. Thank you so much, Dario and Jensen. I'm really looking forward to everything that we're going to build together and, more importantly, how customers can benefit from all this innovation and really thrive with their business and their outcomes. So thank you all for joining today. Thank you. Thank you.
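The test-time scaling idea mentioned above (the more the AI "thinks", the better the answer) can be illustrated with a toy best-of-n sketch. Everything here is hypothetical for illustration: a noisy stand-in scores each "sample", and spending more inference compute (more samples) raises the expected quality of the best one.

```python
import random

def noisy_answer(rng):
    # Hypothetical stand-in for the quality score of one model sample.
    return rng.gauss(mu=0.5, sigma=0.2)

def best_of_n(n, seed=0):
    # Test-time scaling sketch: draw n candidate answers and keep the
    # highest-scoring one. More samples (more inference compute) can
    # only raise the best score for a fixed sample stream.
    rng = random.Random(seed)
    return max(noisy_answer(rng) for _ in range(n))

if __name__ == "__main__":
    for n in (1, 4, 16, 64):
        print(n, round(best_of_n(n), 3))
```

This is only one of several test-time strategies (others include longer chains of thought or search), but it captures the compute-versus-quality trade-off the speakers describe.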
microsoft getting back to BASICs ahahaha
This level of consolidation around compute and model power is becoming a geopolitical story, not just a tech one. One gigawatt of AI capacity is national infrastructure scale.
Satya Nadella, thoughtful vision shared. What if we treated compute not just as power, but as a global nervous system that needs emotional intelligence too? How do we ensure we grow wisdom and compassion at the same pace as cognition and capacity? 🤔
Big moment for Microsoft. A 1-GW Anthropic commitment reinforces Azure as the hyperscale AI platform, and bringing Claude into Azure AI Foundry + Copilot makes the ecosystem even stronger.
Anthropic’s decision to scale Claude on Azure—backed by NVIDIA’s Grace Blackwell and Vera Rubin systems—marks a convergence of three forces: hyperscale cloud, frontier models, and co-designed silicon. A $30B compute commitment and up to a gigawatt of capacity isn’t just about raw scale; it’s about treating AI infrastructure as a long-term utility, where efficiency, performance, and TCO become first-class design constraints. The deep engineering partnership between Anthropic and NVIDIA suggests a feedback loop where model optimization informs hardware roadmaps, and hardware evolution accelerates model capability. For Azure, this expands enterprise choice while reinforcing its position as the backbone for frontier AI. What’s striking is the triangulation: Microsoft, NVIDIA, and Anthropic aligning capital, compute, and co-design. It’s not just another partnership announcement—it’s a signal that the next era of AI will be defined by integrated ecosystems rather than siloed innovation.
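The "national infrastructure scale" framing can be made concrete with a rough back-of-envelope calculation. The per-accelerator power draw below is an assumption for illustration only, not a disclosed figure from any of the three companies.

```python
# Hypothetical back-of-envelope: how many accelerators could a gigawatt host?
# WATTS_PER_ACCELERATOR is an assumed all-in draw per accelerator
# (chip plus cooling/networking overhead), chosen only for illustration.
SITE_POWER_W = 1_000_000_000     # 1 GW of capacity
WATTS_PER_ACCELERATOR = 1_500    # assumption, not a disclosed spec

accelerators = SITE_POWER_W // WATTS_PER_ACCELERATOR
print(f"~{accelerators:,} accelerators")  # → ~666,666 accelerators
```

Even with generous uncertainty in the per-unit figure, the order of magnitude (hundreds of thousands of accelerators) supports the comment's point that this is utility-scale infrastructure.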
Exciting to see this level of collaboration fueling innovation in compute and AI. Curious to see how these combined strengths will empower product teams and end users alike.
Love this initiative, more innovation, better for business and society!
The real unlock isn’t just more compute; it’s Claude optimized for Grace Blackwell & Rubin. That means lower inference cost, longer context windows, and better tokens/sec for enterprise workloads. Pair that with Azure-native routing + private networking, and you’ve got a serious edge for RAG-heavy apps and DB-driven copilots.
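The link between throughput and serving cost in the comment above can be sketched with a small calculation. All numbers here (GPU hourly price, tokens/sec) are illustrative assumptions, not quoted prices or benchmarks.

```python
def cost_per_million_tokens(gpu_hour_usd, tokens_per_sec):
    # Cost to serve one million tokens on a single accelerator, given an
    # hourly rental price and sustained decode throughput. All inputs
    # are illustrative assumptions.
    tokens_per_hour = tokens_per_sec * 3600
    return gpu_hour_usd * 1_000_000 / tokens_per_hour

# Doubling throughput halves the serving cost at a fixed GPU price.
base = cost_per_million_tokens(gpu_hour_usd=4.0, tokens_per_sec=1000)
fast = cost_per_million_tokens(gpu_hour_usd=4.0, tokens_per_sec=2000)
print(round(base, 3), round(fast, 3))
```

This is why hardware/model co-optimization matters commercially: every gain in tokens/sec flows directly into lower cost per token, which is the "token economics" the announcement keeps returning to.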