Satya Nadella’s Post

Satya Nadella

Chairman and CEO at Microsoft

Just in time for test-time scaling, we have our first NVLink 72 clusters live in Azure. Here’s to the next generation of AI built on these systems!

Exciting advancements for AI and cloud technology!

Great milestone! Can't wait to see how these systems push the boundaries of AI capabilities.

Great to see next-gen AI being built on NVLink 72 clusters live in Azure. Congratulations, Satya Nadella!

It’s so exciting! With the launch of the first NVLink 72 cluster on Azure, we are at the forefront of the AI revolution. We look forward to the infinite possibilities this technological advancement brings, and hope that future AI applications will be even smarter and more powerful! 🎉🚀

Why this is game-changing: each rack can deliver up to 1.4 exaFLOPS of AI compute, i.e. 1.4×10^18 floating-point operations every second, which makes it extremely capable for complex computations. Each rack also carries 13.5 TB of high-bandwidth memory and offers up to 1,800 GB/s of GPU-to-GPU bandwidth, so the GPUs can exchange data extremely quickly and parallel workloads run faster. The NVLink 72 architecture is also rated at 25 times the energy efficiency of the previous generation.
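
To make those rack-level figures concrete, here is a small back-of-the-envelope sketch in Python. The spec numbers are taken from the comment above; the 140 GB model-weight figure (roughly a 70B-parameter model at 16-bit precision) is only an illustrative assumption.

```python
# Back-of-the-envelope numbers for one NVLink 72 rack,
# using the figures quoted in the comment above.

PEAK_FLOPS = 1.4e18          # up to 1.4 exaFLOPS of AI compute per rack
HBM_PER_RACK_TB = 13.5       # high-bandwidth memory per rack
GPU_TO_GPU_GBPS = 1800       # up to 1800 GB/s of GPU-to-GPU bandwidth

# 1.4 exaFLOPS written out as calculations per second.
print(f"Peak compute: {PEAK_FLOPS:.1e} calculations/second")

# Illustrative assumption: the weights of a ~70B-parameter model at 16-bit
# precision are roughly 140 GB. Time to move them GPU-to-GPU at peak link speed:
model_weights_gb = 140
transfer_seconds = model_weights_gb / GPU_TO_GPU_GBPS
print(f"~{model_weights_gb} GB of weights move GPU-to-GPU in ~{transfer_seconds * 1000:.0f} ms")

# How many such weight sets fit in the rack's pooled high-bandwidth memory?
copies_in_hbm = (HBM_PER_RACK_TB * 1000) / model_weights_gb
print(f"Rack HBM holds roughly {copies_in_hbm:.0f} copies of those weights")
```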

This is a lot of NVIDIA CUDA computing; you may need a nuclear power plant to provide the electricity.

That's a good looking datacenter!

Crazy fast calculations! Think of it like this: a super-brain that can do 1.4 quintillion calculations every second. That's like every one of the roughly 8 billion people on Earth solving about 175 million math problems every second. It’s that powerful.
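
A quick sanity check on that analogy in Python; the 8-billion world-population figure is an approximation.

```python
# Sanity check for the "every person on Earth" analogy above.
PEAK_FLOPS = 1.4e18        # 1.4 quintillion calculations per second
WORLD_POPULATION = 8e9     # approximate world population

per_person = PEAK_FLOPS / WORLD_POPULATION
print(f"Each person would need ~{per_person:,.0f} calculations per second")
# -> roughly 175,000,000, i.e. about 175 million calculations per person per second
```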

That’s a fantastic milestone! 🎉 Satya Nadella. The launch of #NVLink 72 clusters in #Azure marks a big leap forward for scaling #AI workloads. Here’s why this is exciting:

Unprecedented bandwidth: fifth-generation NVLink delivers up to 1,800 GB/s of GPU-to-GPU bandwidth per #GPU (double the 900 GB/s available on NVIDIA H100 systems), speeding up data exchange between GPUs and accelerating large-scale model training and inference.

Test-time scaling: you can scale up computational resources at inference time to get more out of massive #AI models, enhancing performance on real-world tasks.

#Azure integration: deploying these systems in #Azure democratizes access to cutting-edge infrastructure, letting enterprises and researchers build the next generation of #AI solutions without needing on-premise #supercomputers.

#AI model breakthroughs: #NVLink 72’s efficiency and bandwidth are well suited to LLMs, generative AI, reinforcement learning, and multi-modal models.

Here’s to pushing boundaries and unlocking even greater potential in #AI development. 🚀

Object Automation Calista Redmond Jensen Huang Michael Gschwind Sameer Shende Arghya Kusum Das University of Oregon UC San Diego University of Alaska Fairbanks
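
On the test-time scaling point: one common form of it is spending more inference compute per query, for example by sampling several candidate answers and keeping the best one. The sketch below is a minimal, generic best-of-N loop; generate_candidate and score are hypothetical placeholders, not part of any specific Azure or NVIDIA API.

```python
import random

# Minimal best-of-N sketch of test-time scaling: more samples per query
# means more inference compute and, usually, a better final answer.
# generate_candidate and score are hypothetical stand-ins for a real
# model call and a real verifier/reward model.

def generate_candidate(prompt: str) -> str:
    # Placeholder: a real system would call an LLM here.
    return f"answer-{random.randint(0, 9999)} to: {prompt}"

def score(candidate: str) -> float:
    # Placeholder: a real system would use a verifier or reward model.
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    # Test-time scaling knob: raising n spends more compute per query.
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=score)

if __name__ == "__main__":
    print(best_of_n("Summarize why NVLink 72 racks matter for inference.", n=8))
```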
