This Data Center Stock Could Go Parabolic Following Nvidia's Blackwell Launch - https://lnkd.in/d3_Zjdkf Early signs indicate that Nvidia's Blackwell GPUs are going to be a big winner for the company. Believe it or not, Nvidia was once a company primarily focused on the gaming market. But over the last two years, it has emerged as the world's preeminent AI operation. How did that happen? One of Nvidia's core products is an advanced chipset known as a graphics processing unit (GPU). GPUs are a critical piece of infrastructure for developing generative AI, and they've become a business worth tens of billions of dollars for Nvidia. Later this year, Nvidia is expected to release its most powerful GPU architecture yet, known as Blackwell. While this will surely be a tailwind for the semiconductor darling, I see another tempting opportunity hiding in plain sight. Below, I break down how the Blackwell launch could make data center company Vertiv (VRT -2.19%) a lucrative choice for AI investors. How big is Blackwell going to be? It's hard to say for certain how big a business Blackwell will become for Nvidia. That said, some early trends uncovered by industry research analysts hint that the release is going to be a big hit. Last month, Morgan Stanley noted that its forecasts indicate Blackwell could generate $10 billion in revenue in Nvidia's fourth quarter alone. Shortly thereafter, Morgan Stanley analyst Joseph Moore issued a report stating that Blackwell GPUs are already sold out for the next 12 months. I guess Nvidia CEO Jensen Huang wasn't kidding when he said that demand for Blackwell is "insane." These tailwinds are undoubtedly a good sign for Nvidia. Below, I detail why Nvidia's success with Blackwell should translate into a tremendous opportunity for Vertiv.
Why Vertiv should benefit At their core, GPUs have the ability to process the sophisticated programs and algorithms that train machine learning applications and large language models (LLMs). While this might sound simple, the work GPUs do is far more complex than running a software program on your laptop. IT architecture specialists such as Super Micro Computer and Dell Technologies help build the infrastructure that houses GPUs. Essentially, GPUs are integrated into clusters on server racks that sit inside data centers. Since the chipsets are constantly running programs and processing data, it's not surprising that data centers consume high levels of energy and run the risk of overheating. Right now, data centers typically rely on air conditioning units, power generators, and fans to offset heat. However, as AI infrastructure spending continues to rise, data centers are going to need more efficient ways to manage heat.
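The heat problem described above comes down to simple arithmetic: dense GPU racks draw far more power than data centers were historically built for. Here is a minimal, hypothetical back-of-the-envelope sketch; the ~700 W per-GPU figure (typical for a modern SXM-class accelerator), the rack composition, and the PUE of 1.5 are illustrative assumptions, not numbers from the post.

```python
# Hypothetical rack-power estimate illustrating why cooling is a growing
# concern. All inputs here are assumptions for illustration only.

def rack_power_kw(gpus_per_rack: int, gpu_watts: float, overhead_watts: float) -> float:
    """IT power drawn by one server rack, in kilowatts."""
    return (gpus_per_rack * gpu_watts + overhead_watts) / 1000.0

def facility_power_kw(it_kw: float, pue: float) -> float:
    """Total facility power including cooling, given a PUE ratio."""
    return it_kw * pue

# 8 GPUs at ~700 W each, plus ~2 kW of CPUs/networking per rack (assumed):
it_load = rack_power_kw(gpus_per_rack=8, gpu_watts=700, overhead_watts=2000)  # 7.6 kW
total = facility_power_kw(it_load, pue=1.5)  # 11.4 kW once cooling is included

print(f"IT load: {it_load:.1f} kW, facility load: {total:.1f} kW")
```

Even this conservative sketch shows cooling overhead adding roughly half again on top of the IT load, which is the opportunity companies like Vertiv are chasing.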
IGKStore’s Post
More Relevant Posts
-
Nvidia Reveals Blackwell B200 GPU, the 'World's Most Powerful Chip' For AI: Sean Hollister reports via The Verge: Nvidia's must-have H100 AI chip made it a multitrillion-dollar company, one that may be worth more than Alphabet and Amazon, and competitors have been fighting to catch up. But perhaps Nvidia is about to extend its lead -- with the new Blackwell B200 GPU and GB200 "superchip." Nvidia says the new B200 GPU offers up to 20 petaflops of FP4 horsepower from its 208 billion transistors and that a GB200 that combines two of those GPUs with a single Grace CPU can offer 30 times the performance for LLM inference workloads while also potentially being substantially more efficient. It "reduces cost and energy consumption by up to 25x" over an H100, says Nvidia. Training a 1.8 trillion parameter model would have previously taken 8,000 Hopper GPUs and 15 megawatts of power, Nvidia claims. Today, Nvidia's CEO says 2,000 Blackwell GPUs can do it while consuming just four megawatts. On a GPT-3 LLM benchmark with 175 billion parameters, Nvidia says the GB200 has a somewhat more modest seven times the performance of an H100, and Nvidia says it offers 4x the training speed. Nvidia told journalists one of the key improvements is a second-gen transformer engine that doubles the compute, bandwidth, and model size by using four bits for each neuron instead of eight (thus, the 20 petaflops of FP4 I mentioned earlier). A second key difference only comes when you link up huge numbers of these GPUs: a next-gen NVLink switch that lets 576 GPUs talk to each other, with 1.8 terabytes per second of bidirectional bandwidth. That required Nvidia to build an entire new network switch chip, one with 50 billion transistors and some of its own onboard compute: 3.6 teraflops of FP8, says Nvidia. Further reading: Nvidia in Talks To Acquire AI Infrastructure Platform Run:ai Read more of this story at Slashdot.
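The training-power claim quoted above can be restated as two ratios. The figures (8,000 Hopper GPUs at 15 MW vs. 2,000 Blackwell GPUs at 4 MW for the same 1.8-trillion-parameter job) are Nvidia's own; the code below just makes the implied reductions explicit.

```python
# Restating Nvidia's quoted training-power figures as explicit ratios.
hopper_gpus, hopper_mw = 8_000, 15.0
blackwell_gpus, blackwell_mw = 2_000, 4.0

gpu_reduction = hopper_gpus / blackwell_gpus    # 4.0x fewer GPUs
power_reduction = hopper_mw / blackwell_mw      # 3.75x less total power

# System-level power per GPU (includes networking, cooling share, etc.):
watts_per_hopper = hopper_mw * 1e6 / hopper_gpus        # 1875 W per Hopper GPU
watts_per_blackwell = blackwell_mw * 1e6 / blackwell_gpus  # 2000 W per Blackwell GPU

print(f"{gpu_reduction}x fewer GPUs, {power_reduction}x less power")
```

Note the per-GPU power is roughly flat; the claimed efficiency gain comes from each Blackwell GPU doing about four times the work, not from each chip drawing less.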
-
🌟 Major announcements from Nvidia this week... and a significant rise in their stock price 📈 Nvidia's CEO Jensen Huang spoke at COMPUTEX 2024: ➜ Sustainable Computing: Highlighting the power of accelerated computing, Huang shared that the combination of GPUs and CPUs can deliver up to a 100x speedup with only a 3x increase in power consumption, achieving 25x more performance per watt than CPUs alone. ➜ Innovative Chips on the Horizon: The Blackwell Ultra chip, set for a 2025 release, continues to push the boundaries of AI technology. Following that, the Rubin platform, expected in 2026, will feature new GPUs, a new Arm-based CPU (Vera), and advanced networking capabilities. ➜ NIMs – NVIDIA Inference Microservices: Pre-trained AI models provided as optimized containers, ready for deployment in the cloud or private data centers. Meanwhile, Nvidia's valuation surpassed $3 trillion yesterday, making it the third company ever to reach that milestone. 🚀 Nvidia also has a 10-for-1 stock split effective tomorrow, June 7, likely contributing to the recent stock surge as well. More info: https://lnkd.in/dAY9zww9 #NVIDIA #COMPUTEX2024 #AI #ArtificialIntelligence
‘Accelerate Everything,’ NVIDIA CEO Says Ahead of COMPUTEX
blogs.nvidia.com
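The efficiency claim in the post above can be sanity-checked with one division: a 100x speedup at 3x the power implies roughly 33x performance per watt in the ideal case, so the 25x figure Nvidia quotes is the more conservative of the two. A minimal sketch:

```python
# Performance-per-watt implied by a speedup at a given power multiple.
# The 100x / 3x inputs are the Computex figures quoted in the post above;
# the comparison to the quoted 25x is my reading, not Nvidia's math.

def perf_per_watt_gain(speedup: float, power_multiple: float) -> float:
    """Ideal performance-per-watt multiple: work rate divided by power draw."""
    return speedup / power_multiple

ideal = perf_per_watt_gain(speedup=100, power_multiple=3)
print(f"Ideal perf/watt gain: {ideal:.1f}x (quoted figure: 25x)")
```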
-
Prediction: Nvidia Stock Is Going to Soar in the Remainder of 2024 by [email protected] (Anthony Di Pizio) via The Motley Fool ([Global] oracle cloud) URL: https://ift.tt/GHSg9FJ Nvidia (NASDAQ: NVDA) stock jumped 150% in the first half of 2024, but it has been in a slump since mid-July as investors digested some potential headwinds. There was a rumored delay with the company's latest artificial intelligence (AI) chips for the data center, which are responsible for the majority of the company's revenue. Analysts have also questioned how long Nvidia's top customers will continue spending billions of dollars on their AI aspirations. However, those concerns might have been put to rest over the last few weeks. Here's why Nvidia stock could soar to new highs before the end of 2024. All eyes are on the new Blackwell GPUs Nvidia's flagship H100 graphics processing unit (GPU) for the data center set the benchmark for the AI industry last year. GPUs are designed for parallel processing, which means they can handle multiple tasks at once while maintaining a high throughput. They also have built-in memory, so they are ideal for processing large volumes of data, which is critical when training AI models and performing AI inference. According to Nvidia CEO Jensen Huang, data center operators could spend $1 trillion building GPU infrastructure over the next few years. That presents a substantial opportunity, which is why the company continues to design new chips with more processing power and better energy efficiency to stay ahead of the competition. Nvidia is now shipping its new H200 GPU, which can perform AI inference at almost twice the speed of the H100, while consuming half the amount of electricity. But the company recently unveiled an entirely new GPU architecture called Blackwell, which promises an even greater leap in performance.
The new Blackwell-based GB200 NVL72 system, for example, will perform AI inference at a staggering 30 times the pace of the equivalent H100 system. Each individual GB200 GPU will be priced between $30,000 and $40,000, which is similar to what many customers originally paid for the H100, so it's going to drive an incredible improvement in cost efficiency. As a result, demand is likely to significantly outstrip supply. In fact, Huang says Blackwell GPUs will bring in billions of dollars in revenue in the fourth quarter of fiscal 2025 (which begins in November) as the company ramps up shipments to customers, squashing previous rumors that the new chips could be delayed by months. Many of Nvidia's top customers are begging for more GPUs Huang says data center operators can earn $5 in revenue over four years for every $1 they spend on GPUs, by renting the computing power to AI developers. That's why the world's largest cloud providers, like Microsoft, Amazon, and Oracle, are clamoring to get their hands on as many chips as possible. Other technology companies also want more chips to develop AI for...
Prediction: Nvidia Stock Is Going to Soar in the Remainder of 2024
fool.com
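The "$5 of revenue per $1 of GPU spend over four years" claim quoted above can be applied directly to the quoted $30,000-$40,000 GB200 price range. The multiple and prices come from the post; everything else below is simple arithmetic, not a forecast.

```python
# Illustrative operator-revenue math from the post's quoted figures:
# $5 of rental revenue per $1 of GPU spend, over four years.

def lifetime_revenue(gpu_price: float, revenue_per_dollar: float = 5.0) -> float:
    """Projected four-year rental revenue per GPU under the quoted multiple."""
    return gpu_price * revenue_per_dollar

low = lifetime_revenue(30_000)    # $150,000 over four years
high = lifetime_revenue(40_000)   # $200,000 over four years
annualized_low = low / 4          # $37,500 per year at the low end

print(f"Four-year revenue per GPU: ${low:,.0f}-${high:,.0f}")
```

If the multiple holds, it explains why cloud providers treat GPU scarcity, not price, as the binding constraint.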
-
https://lnkd.in/eXR_F_zh GTC—NVIDIA today announced its next-generation AI supercomputer — the NVIDIA DGX SuperPOD™ powered by NVIDIA GB200 Grace Blackwell Superchips — for processing trillion-parameter models with constant uptime for superscale generative AI training and inference workloads. Featuring a new, highly efficient, liquid-cooled rack-scale architecture, the new DGX SuperPOD is built with NVIDIA DGX™ GB200 systems and provides 11.5 exaflops of AI supercomputing at FP4 precision and 240 terabytes of fast memory — scaling to more with additional racks. Each DGX GB200 system features 36 NVIDIA GB200 Superchips — which include 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs — connected as one supercomputer via fifth-generation NVIDIA NVLink®. GB200 Superchips deliver up to a 30x performance increase compared to the NVIDIA H100 Tensor Core GPU for large language model inference workloads. “NVIDIA DGX AI supercomputers are the factories of the AI industrial revolution,” said Jensen Huang, founder and CEO of NVIDIA. “The new DGX SuperPOD combines the latest advancements in NVIDIA accelerated computing, networking and software to enable every company, industry and country to refine and generate their own AI.” The Grace Blackwell-powered DGX SuperPOD features eight or more DGX GB200 systems and can scale to tens of thousands of GB200 Superchips connected via NVIDIA Quantum InfiniBand. For a massive shared memory space to power next-generation AI models, customers can deploy a configuration that connects the 576 Blackwell GPUs in eight DGX GB200 systems via NVLink.
NVIDIA Launches Blackwell-Powered DGX SuperPOD for Generative AI Supercomputing at Trillion-Parameter Scale
nvidianews.nvidia.com
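The component counts in the press release above are internally consistent and can be cross-checked in a few lines: each GB200 Superchip pairs one Grace CPU with two Blackwell GPUs, so 36 Superchips per DGX GB200 system yield 72 GPUs, and eight systems linked by NVLink reach the 576-GPU configuration mentioned.

```python
# Cross-checking the DGX SuperPOD counts quoted in the press release above.
SYSTEMS = 8
SUPERCHIPS_PER_SYSTEM = 36   # each GB200 Superchip: 1 Grace CPU + 2 Blackwell GPUs
GPUS_PER_SUPERCHIP = 2
CPUS_PER_SUPERCHIP = 1

gpus_per_system = SUPERCHIPS_PER_SYSTEM * GPUS_PER_SUPERCHIP   # 72 GPUs per system
cpus_per_system = SUPERCHIPS_PER_SYSTEM * CPUS_PER_SUPERCHIP   # 36 CPUs per system
total_gpus = SYSTEMS * gpus_per_system                         # 576 GPUs via NVLink

print(f"{gpus_per_system} GPUs/system, {total_gpus} GPUs across {SYSTEMS} systems")
```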
-
In the rapidly evolving landscape of GPU computing, it's not just about AMD, Intel, and NVIDIA anymore. The global GPU arms race is heating up, with nation-states joining the mix, driven by the quest for computing power and AI dominance. At Hydra Host, we are betting on data centers and, by extension, NVIDIA. As the competition for accelerators and GPUs intensifies, NVIDIA's stronghold on data centers will be pivotal. Explore how this battle for data center capacity and GPU computing is shaping the future of AI. Read more in our latest blog post. https://lnkd.in/eY9UaV86 #GPUWars #DataCenters #NVIDIA #AI #TechTrends #Geopolitics #HydraHost
Navigating the GPU Computing Wars | Hydra Host
hydrahost.com
-
Nvidia’s latest chip promises to boost AI’s speed and energy efficiency. What’s new: The market leader in AI chips announced the B100 and B200 graphics processing units (GPUs) designed to eclipse its in-demand H100 and H200 chips. The company will also offer systems that integrate two, eight, and 72 chips. How it works: The new chips are based on Blackwell, an updated chip architecture specialized for training and inferencing transformer models. Compared to Nvidia’s earlier Hopper architecture, used by H-series chips, Blackwell features hardware and firmware upgrades intended to cut the energy required for model training and inference. Training a 1.8-trillion-parameter model (the estimated size of OpenAI’s GPT-4 and Beijing Academy of Artificial Intelligence’s WuDao) would require 2,000 Blackwell GPUs using 4 megawatts of electricity, compared to 8,000 Hopper GPUs using 15 megawatts, the company said. Blackwell includes a second-generation Transformer Engine. While the first generation used 8 bits to process each neuron in a neural network, the new version can use as few as 4 bits, potentially doubling compute bandwidth. A dedicated engine devoted to reliability, availability, and serviceability monitors the chip to identify potential faults. Nvidia hopes the engine can reduce compute times by minimizing chip downtime. Nvidia doesn’t make it easy to compare the B200 with rival AMD’s top offering, the MI300X. Price and availability: The B200 will cost between $30,000 and $40,000, similar to the going rate for H100s today, Nvidia CEO Jensen Huang told CNBC. Nvidia did not specify when the chip would be available. Google, Amazon, and Microsoft stated intentions to offer Blackwell GPUs to their cloud customers. Behind the news: Demand for the H100 chip has been so intense that the chip has been difficult to find, driving some users to adopt alternatives such as AMD’s MI300X. Moreover, in 2022, the U.S. restricted the export of H100s and other advanced chips to China. 
The B200 also falls under the ban. Why it matters: Nvidia holds about 80 percent of the market for specialized AI chips. The new chips are primed to enable developers to continue pushing AI’s boundaries, training multi-trillion-parameter models and running more instances at once. We’re thinking: Cathie Wood, author of ARK Invest’s “Big Ideas 2024” report, estimated that training costs are falling at a very rapid 75 percent annually, around half due to algorithmic improvements and half due to compute hardware improvements. Nvidia’s progress paints an optimistic picture of further gains. It also signals the difficulty of trying to use model training to build a moat around a business. It’s not easy to maintain a lead if you spend $100 million on training and next year a competitor can replicate the effort for $25 million. Andrew Ng, [email protected]
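The 75%-per-year training-cost decline cited above compounds quickly, which is the point of the moat argument: a $100 million run (the figure used in the post; the starting cost is illustrative) costs about $25 million to replicate a year later and roughly $6.25 million the year after.

```python
# Compounding the 75%-per-year training-cost decline cited in the post above.
# The $100M starting figure is the post's illustrative example.

def cost_after_years(initial_cost: float, years: int, annual_decline: float = 0.75) -> float:
    """Cost of replicating a training run after `years` of compounding decline."""
    return initial_cost * (1 - annual_decline) ** years

print(cost_after_years(100e6, 1))  # 25000000.0
print(cost_after_years(100e6, 2))  # 6250000.0
```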
-
Nvidia Unveils Next-Gen AI Engine: The B200 GPU Get ready for a revolution in AI performance! Nvidia just announced the B200 GPU, powered by the new Blackwell architecture. This powerhouse promises to crush previous limitations with significant boosts in speed and energy efficiency. Here's the lowdown: Blackwell Architecture: Designed specifically for training and running massive transformer models, the future of AI. Dramatic Efficiency Gains: Compared with the prior Hopper generation, Blackwell cuts the power needed for a large training run from 15 megawatts to 4 megawatts, per Nvidia's figures. ⚡ Second-Gen Transformer Engine: Processes information with as few as 4 bits per neuron, doubling compute throughput. Unmatched Scalability: Connect up to 576 GPUs together for unprecedented performance. While pricing stays around the $30,000 mark (similar to current H100s), major cloud providers like Google, Amazon, and Microsoft are already lining up to offer access. This is a game-changer for developers pushing the boundaries of AI! link: https://lnkd.in/d3J6Cbv8 #AI #GPUs #Blackwell #Nvidia #MachineLearning #CloudComputing
All About Nvidia's New Blackwell Architecture and B200 GPU
deeplearning.ai
Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer
The convergence of AI and data centers is truly revolutionizing our world, with applications spanning from healthcare to finance. Just look at OpenAI's GPT-4, powered by immense computing resources, demonstrating the transformative potential of this synergy. How can we ensure that this technological advancement benefits all of humanity, addressing ethical considerations and promoting equitable access?