Nvidia’s latest chip promises to boost AI’s speed and energy efficiency.

What’s new: The market leader in AI chips announced the B100 and B200 graphics processing units (GPUs), designed to eclipse its in-demand H100 and H200 chips. The company will also offer systems that integrate two, eight, and 72 chips.

How it works: The new chips are based on Blackwell, an updated chip architecture specialized for training and inferencing transformer models. Compared to Nvidia’s earlier Hopper architecture, used by H-series chips, Blackwell features hardware and firmware upgrades intended to cut the energy required for model training and inference.
- Training a 1.8-trillion-parameter model (the estimated size of OpenAI’s GPT-4 and Beijing Academy of Artificial Intelligence’s WuDao) would require 2,000 Blackwell GPUs using 4 megawatts of electricity, compared to 8,000 Hopper GPUs using 15 megawatts, the company said.
- Blackwell includes a second-generation Transformer Engine. While the first generation used 8 bits to process each neuron in a neural network, the new version can use as few as 4 bits, potentially doubling compute bandwidth.
- A dedicated engine devoted to reliability, availability, and serviceability monitors the chip to identify potential faults. Nvidia hopes the engine can reduce compute times by minimizing chip downtime.
- Nvidia doesn’t make it easy to compare the B200 with rival AMD’s top offering, the MI300X.

Price and availability: The B200 will cost between $30,000 and $40,000, similar to the going rate for H100s today, Nvidia CEO Jensen Huang told CNBC. Nvidia did not specify when the chip would be available. Google, Amazon, and Microsoft stated intentions to offer Blackwell GPUs to their cloud customers.

Behind the news: Demand for the H100 has been so intense that the chip has been difficult to find, driving some users to adopt alternatives such as AMD’s MI300X. Moreover, in 2022, the U.S. restricted the export of H100s and other advanced chips to China. The B200 also falls under the ban.

Why it matters: Nvidia holds about 80 percent of the market for specialized AI chips. The new chips are primed to enable developers to continue pushing AI’s boundaries, training multi-trillion-parameter models and running more instances at once.

We’re thinking: Cathie Wood, author of ARK Invest’s “Big Ideas 2024” report, estimated that training costs are falling at a very rapid 75 percent annually, around half due to algorithmic improvements and half due to compute hardware improvements. Nvidia’s progress paints an optimistic picture of further gains. It also signals the difficulty of trying to use model training to build a moat around a business. It’s not easy to maintain a lead if you spend $100 million on training and next year a competitor can replicate the effort for $25 million.

Andrew Ng, [email protected]
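A quick back-of-envelope in Python puts those training numbers side by side. This is a sketch only: the GPU counts, megawatt figures, and the 75 percent annual cost decline come from the post above, while the 90-day run length is an assumption added purely for illustration.

```python
# Back-of-envelope check on the figures above. The megawatt figures and the
# ~75% annual cost decline come from the post; the 90-day training window is
# an assumed illustration, not an Nvidia number.

def run_energy_mwh(megawatts: float, days: float) -> float:
    """Energy for one training run at constant power draw, in megawatt-hours."""
    return megawatts * 24 * days

ASSUMED_TRAINING_DAYS = 90  # hypothetical run length, for illustration only

hopper_mwh = run_energy_mwh(15.0, ASSUMED_TRAINING_DAYS)     # 8,000 Hopper GPUs
blackwell_mwh = run_energy_mwh(4.0, ASSUMED_TRAINING_DAYS)   # 2,000 Blackwell GPUs
print(f"Hopper run:    {hopper_mwh:,.0f} MWh")
print(f"Blackwell run: {blackwell_mwh:,.0f} MWh "
      f"(~{hopper_mwh / blackwell_mwh:.1f}x less energy)")

# Compounding the ~75%-per-year training-cost decline cited from ARK Invest:
cost = 100e6  # a $100M training run today
for year in (1, 2, 3):
    cost *= 0.25
    print(f"Year {year}: a comparable run costs ~${cost / 1e6:,.1f}M")
```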
-
The Nvidia B200 Blackwell GPU: Powering the Next Generation of AI

The Nvidia B200 Blackwell GPU is a powerful tool designed to tackle demanding AI workloads, including chatbots, real-time conversation, and AI-generated media. This chip is 30 times faster than its predecessor, the Nvidia H100, and supports up to 1,800 tokens per second, allowing it to manage 60 simultaneous conversations in real time. However, bringing AI to large-scale consumer applications requires immense compute power and infrastructure, making it clear that AI won't take over the world overnight.

How Many B200s Can Fit in a Rack?
Each data center rack can accommodate up to 18 B200 GPUs, connected using Nvidia’s NVLink technology for high-speed communication. These connections allow the GPUs to perform complex AI tasks at high speed, ensuring that every GPU in the rack can talk to the others at 1.8 TB/s.

Power and Cooling Requirements

Power Usage
Each B200 GPU draws 1,000 watts of power, so a fully loaded rack consumes 18 kW. This is comparable to powering several homes continuously. Scaling AI models to global levels means data centers need to generate enough power, and ensure its stability, to keep these GPUs running 24/7.

Cooling Infrastructure
The B200 GPUs generate significant heat. To prevent overheating, these racks rely on liquid cooling systems that circulate coolant through the hardware. Data centers must implement advanced cooling techniques to manage the 120 kW of cooling capacity needed per rack, adding both complexity and cost to deployments.

Scaling AI with B200: A Challenge in Infrastructure
If we scale this hardware for 35% of the global population—or about 2.8 billion users—it will require extensive data centers. Each rack holding 18 GPUs would support 1,080 simultaneous conversations, but scaling AI to meet global demand would still need hundreds of thousands of these racks, each consuming substantial energy.

Example:
- Conversational AI: Supporting conversational AI for billions of users would require 1.4 million GPUs.
- Video and Content Generation: AI media applications could need up to 1 million GPUs.
- Overall Requirements: Deploying AI systems for everyday use could demand 6 million GPUs in total.

Conclusion
The Nvidia B200 Blackwell GPU pushes AI infrastructure forward, allowing for faster chatbots, smarter assistants, and real-time media creation. However, scaling AI for everyday consumer use faces real barriers, including power limitations, space, and cooling challenges. With a shortage of GPUs, expensive infrastructure, and high energy consumption, AI’s growth will be steady, not instant. While the B200 is a significant leap, the challenges around power, cooling, and supply chains mean that we’re far from AI taking over the world overnight. As companies innovate and optimize infrastructure, we can expect gradual—but powerful—progress in AI applications.
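The rack-level arithmetic above is easy to reproduce. A minimal sketch, using only the post's own figures plus one assumed concurrency rate (the 3% value is illustrative, not from the post):

```python
# Rough sketch of the rack-level arithmetic in the post above. The 18 GPUs per
# rack, 1 kW per GPU, 60 conversations per GPU, and ~2.8B users are the post's
# numbers; the concurrency assumption is purely illustrative.

GPUS_PER_RACK = 18
WATTS_PER_GPU = 1_000
CONVERSATIONS_PER_GPU = 60
TARGET_USERS = 2_800_000_000
ASSUMED_CONCURRENCY = 0.03  # hypothetical: ~3% of users chatting at any moment

rack_power_kw = GPUS_PER_RACK * WATTS_PER_GPU / 1_000
conversations_per_rack = GPUS_PER_RACK * CONVERSATIONS_PER_GPU

concurrent_users = TARGET_USERS * ASSUMED_CONCURRENCY
racks_needed = concurrent_users / conversations_per_rack
gpus_needed = racks_needed * GPUS_PER_RACK

print(f"Per rack: {rack_power_kw:.0f} kW, {conversations_per_rack} conversations")
print(f"Concurrent users served: {concurrent_users:,.0f}")
print(f"Racks needed:  {racks_needed:,.0f}")
print(f"GPUs needed:   {gpus_needed:,.0f}")
print(f"IT power only: {racks_needed * rack_power_kw / 1e6:,.1f} GW")
```

With that 3% assumption, the output lands right around the post's 1.4 million GPUs for conversational AI, drawing on the order of a gigawatt before cooling overhead.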
-
👋 Jan now supports NVIDIA’s TensorRT-LLM in addition to llama.cpp, making Jan multi-engine and ultra-fast for users with NVIDIA GPUs. We've done a performance benchmark of TensorRT-LLM on consumer-grade GPUs, which shows pretty incredible speedups (30-70%) on the same hardware.

First off, what is TensorRT-LLM? Running AI models like Llama 3 and Mistral requires you to compile the models to "hardware language". This job is done by inference engines, more commonly referred to as "backends". Note: "inference" is a fancy way of saying "we get your LLM to generate a reply".

Popular inference engines include:
- llama.cpp (most popular, dominates desktop AI)
- MLX (from Apple)
- TensorRT-LLM (from NVIDIA)

TensorRT-LLM is NVIDIA’s relatively new and (somewhat) open source inference engine, which uses NVIDIA’s proprietary optimizations beyond the open source cuBLAS library. It works by optimizing and compiling the model specifically for your GPU, and by tuning things at the CUDA level to take full advantage of every bit of hardware:
- CUDA cores
- Tensor cores
- VRAM
- Memory bandwidth
https://lnkd.in/gQ9e-QwX

TensorRT-LLM takes a different approach from llama.cpp, which dominates desktop inference with a “compile once, run anywhere” approach. A good analogy is C++ vs. Java:
- C++, or TensorRT-LLM: blazing fast, but runs only on the machine for which it was compiled
- Java, or llama.cpp: a single file that can run cross-platform

Both approaches are needed for open source AI to flourish. So 👋 Jan supports both!

We benchmarked TensorRT-LLM on consumer-grade devices and managed to get Mistral 7B up to:
- 170 tokens/s on desktop GPUs (e.g. 4090s, 3090s)
- 51 tokens/s on laptop GPUs (e.g. 4070)

TensorRT-LLM was 30-70% faster than llama.cpp on the same hardware, …and at least 500% faster than just using the CPU 😂

Interestingly, we found that TensorRT-LLM didn’t use many resources, quite the opposite of its reputation for needing beefy hardware:
- Used 10% more VRAM (marginal)
- Used… less RAM???
Note: our RAM measurements were highly iffy, and we’d love it if anyone has better ideas on how to measure it.
https://lnkd.in/gpUk3pht

Jan still ships with our much-beloved llama.cpp as the default inference engine (shout-out to Georgi Gerganov and the ggml team). TensorRT-LLM is available as an extension, which will download additional dependencies. We've also compiled a few models and will make more available soon.

Read the full benchmark: https://lnkd.in/gzd-fHT8

Special thanks to Aslı Sabancı Demiröz, Annamalai Chockalingam, and Jordan Dodge from NVIDIA, and Georgi Gerganov from llama.cpp for feedback, review, and suggestions.
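For readers who want to run this kind of comparison themselves, here is a minimal sketch of a tokens-per-second harness. The `load_engine` and `engine.generate` calls are hypothetical placeholders, not Jan's, llama.cpp's, or TensorRT-LLM's actual APIs; only the timing pattern carries over.

```python
# Minimal sketch of measuring decode throughput across inference engines.
# `load_engine` and `engine.generate` are hypothetical stand-ins for whatever
# API each backend exposes; the warmup + repeated-timing pattern is the point.

import time
from statistics import mean

def benchmark(engine, prompt: str, runs: int = 5, warmup: int = 1) -> float:
    """Return average decode throughput in tokens/sec over `runs` generations."""
    for _ in range(warmup):                         # warm caches / compiled kernels
        engine.generate(prompt, max_new_tokens=32)

    rates = []
    for _ in range(runs):
        start = time.perf_counter()
        output_tokens = engine.generate(prompt, max_new_tokens=256)
        elapsed = time.perf_counter() - start
        rates.append(len(output_tokens) / elapsed)  # assumes a token list is returned
    return mean(rates)

# Usage sketch (engine objects and loader are assumed, not real APIs):
# trt = load_engine("mistral-7b", backend="tensorrt-llm")
# gguf = load_engine("mistral-7b", backend="llama.cpp")
# speedup = benchmark(trt, PROMPT) / benchmark(gguf, PROMPT) - 1
# print(f"TensorRT-LLM is {speedup:.0%} faster on this machine")
```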
-
NVIDIA H200 GPUs Crush MLPerf’s LLM Inferencing Benchmark

Perhaps to the surprise of few, the next generation of NVIDIA GPUs dominated the latest MLPerf performance benchmarking tests for artificial intelligence (AI) workloads. The takeaway is obvious: use the latest NVIDIA GPUs in your large-scale AI systems if you can afford to do so. But given the flexibility of the tests, other vendors, notably VMware and Red Hat, demonstrated results that should interest AI system builders as well.

Managed by the MLCommons AI engineering consortium, this latest round of MLPerf performance benchmarking (version 4.0) includes for the first time two new tests that approximate the inferencing workloads of generative AI applications. This round includes a test of large language model (LLM) performance using Meta's 70-billion-parameter Llama 2 70B. Big Boi! The Llama is an order of magnitude larger than the last model added to the suite, EleutherAI's GPT-J, in version 3.1. About 24,000 samples from the Open Orca dataset were used for the sample data.

[Figure: How the MLPerf benchmark for inferencing has grown over the years.]

The benchmark also includes for the first time Stability AI’s Stable Diffusion XL, with 2.6 billion parameters, to test the performance of a system that creates images from text, using metrics based on latency and throughput.

How NVIDIA Swept the MLPerf Performance Tests

For these speed tests, NVIDIA entered a number of configurations based on its soon-to-be-released H200 Tensor Core GPUs (built on the NVIDIA Hopper architecture). The GPUs were augmented with NVIDIA TensorRT-LLM, software that streamlines LLM processing. In the benchmark tests, the H200 GPUs, along with TensorRT-LLM, produced up to 31,000 tokens/second, setting a record to beat in this first round of LLM benchmarking. This next generation of GPUs showed roughly a 43% performance improvement over the currently available (though still in scarce supply) H100s, also tested for comparison. The company also used a “custom thermal solution” to keep the chips cooler, which added a 14% gain in performance.

[Figure: NVIDIA’s new GPUs showed nearly a 3x improvement since the last round of tests in September (NVIDIA).]

NVIDIA’s H200 GPUs, which will be available later this year, are equipped with 141 GB of Micron high-bandwidth memory (HBM3e) running at 4.8 TB/s, a considerable jump over the H100’s 80 GB and 3.35 TB/s, respectively. “With HBM3e memory, a single H200 GPU can run an entire Llama 2 70B model with the highest throughput,” an NVIDIA blog post boasts.

How Does MLCommons Speed-Test Inferencing?

“How quickly hardware systems can run AI and ML models” in various configurations is what the MLPerf Inference benchmark suite (current: v4.0) is designed to measure. With the LLM inference test, several “tokens” (such as a sentence or paragraph) are used as input, and speed is measured by how qui...
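Putting the quoted H100 and H200 figures side by side makes the result easier to read. A small sketch using only the numbers from the article above:

```python
# Quick comparison of the H100 vs H200 figures quoted in the post (memory,
# bandwidth, and the reported throughput delta). Pure arithmetic on the
# article's numbers, not additional benchmark data.

h100 = {"hbm_gb": 80, "bw_tbps": 3.35}
h200 = {"hbm_gb": 141, "bw_tbps": 4.8}

mem_gain = h200["hbm_gb"] / h100["hbm_gb"] - 1
bw_gain = h200["bw_tbps"] / h100["bw_tbps"] - 1
print(f"HBM capacity:  +{mem_gain:.0%}")   # ~ +76%
print(f"HBM bandwidth: +{bw_gain:.0%}")    # ~ +43%

# The reported ~43% Llama 2 70B serving gain over the H100 tracks the bandwidth
# increase closely, which is consistent with LLM decoding being memory-bound.
peak_tokens_per_s = 31_000
implied_h100_baseline = peak_tokens_per_s / (1 + 0.43)
print(f"Implied H100 baseline at the reported +43%: ~{implied_h100_baseline:,.0f} tokens/s")
```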
-
**Exciting News in AI and Computing: NVIDIA's Blackwell Architecture Unveiled!**

We're witnessing a monumental leap in AI capabilities with NVIDIA's introduction of the Blackwell architecture. This cutting-edge technology is set to revolutionize generative AI and accelerated computing. Here's what you need to know:

🚀 **Blackwell GPUs**: At the heart of the architecture are the Blackwell GPUs, each packed with **208 billion transistors** and built using a custom **TSMC 4NP process**. These GPUs feature **two reticle-limited dies** connected by a **10 TB/s chip-to-chip interconnect**, functioning as a unified GPU.

💡 **GB200 NVL72**: The flagship model, the GB200 NVL72, is a marvel of engineering. It connects **36 dual-GPU Grace Blackwell "superchips"** (totaling **72 GPUs**) and **36 Grace CPUs** in a liquid-cooled, rack-scale design. This configuration acts as a single massive GPU, boasting **30X faster real-time inference** for trillion-parameter LLMs and supporting **13.5 TB of HBM3e memory**.

🧠 **Second-Generation Transformer Engine**: The Transformer Engine is enhanced with custom Blackwell Tensor Cores, enabling **4-bit floating point (FP4) AI** and fine-grain **micro-tensor scaling**. Coupled with NVIDIA TensorRT-LLM and the NeMo Framework, it accelerates both inference and training for LLMs and MoE models.

🔒 **Security**: NVIDIA Confidential Computing ensures robust hardware-based security for sensitive data and AI models.

🔗 **NVLink and NVLink Switch**: The fifth-generation NVLink interconnect can scale up to **576 GPUs**, with the NVL72 configuration featuring a **72-GPU NVLink domain**. The NVLink Switch Chip provides an astounding **130 TB/s of GPU bandwidth**, supporting NVIDIA SHARP™ FP8.

💲 **Pricing**: The GB200 NVL72 cabinet, with its 72 chips, is priced at a cool **$3 million**. This reflects the unparalleled performance and advanced technology that NVIDIA brings to the table.

NVIDIA's Blackwell architecture is a game-changer for the AI and computing industries, offering unprecedented performance and efficiency. It's a testament to NVIDIA's commitment to innovation and leadership in the field.

#NVIDIA #BlackwellArchitecture #AI #Computing #Innovation #Technology
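To make the "FP4 with fine-grain micro-tensor scaling" point above concrete, here is an illustrative sketch of per-block scaling with 4-bit values. It uses symmetric int4 in NumPy for simplicity; NVIDIA's actual FP4 number format and Tensor Core implementation differ, so treat this only as a demonstration of why per-block scales let 4 bits cover widely varying magnitudes.

```python
# Illustration of per-block ("micro-tensor") scaling: each small block of values
# gets its own scale, so a 4-bit code can represent both tiny and large values.
# Symmetric int4 is used for simplicity; this is not NVIDIA's FP4 format.

import numpy as np

def quantize_blocks(x: np.ndarray, block: int = 32):
    """Quantize a 1-D float tensor to 4-bit integers with one scale per block."""
    pad = (-len(x)) % block
    xb = np.pad(x, (0, pad)).reshape(-1, block)
    scales = np.abs(xb).max(axis=1, keepdims=True) / 7.0   # int4 range: -7..7
    scales[scales == 0] = 1.0
    q = np.clip(np.round(xb / scales), -7, 7).astype(np.int8)
    return q, scales, len(x)

def dequantize_blocks(q, scales, n):
    return (q * scales).reshape(-1)[:n]

# Values spanning three orders of magnitude, like mixed-scale activations.
x = (np.random.randn(1000) * np.linspace(0.01, 10, 1000)).astype(np.float32)
q, s, n = quantize_blocks(x)
err = np.abs(dequantize_blocks(q, s, n) - x).mean() / np.abs(x).mean()
print(f"Mean relative error with per-block 4-bit: {err:.3%}")
print(f"Storage: {q.size * 0.5 + s.size * 2:.0f} bytes vs {x.nbytes} bytes in fp32")
```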
-
Highlighting: "A single 8xSohu server is said to equal the performance of 160 H100 GPUs, meaning data processing centers can save both on initial and operational costs if the Sohu meets expectations." ----- Etched comes at NVidia creatively by focusing on transformer models. Could the Sohu chip reduce need for Nvidia A100 and H100 chips? ----- TomsHardware: "Sohu AI chip claimed to run models 20x faster and cheaper than Nvidia H100 GPUs. Startup Etched has created this LLM-tuned transformer ASIC." (Jowi Morales) (June 26, 2024) "Etched, a startup that builds transformer-focused chips, just announced Sohu, an application-specific integrated circuit (ASIC) that claims to beat Nvidia’s H100 in terms of AI LLM inference. A single 8xSohu server is said to equal the performance of 160 H100 GPUs, meaning data processing centers can save both on initial and operational costs if the Sohu meets expectations. According to the company, current AI accelerators, whether CPUs or GPUs, are designed to work with different AI architectures. These differing frameworks and designs mean hardware must be able to support various models, like convolution neural networks, long short-term memory networks, state space models, and so on. Because these models are tuned to different architectures, most current AI chips allocate a large portion of their computing power to programmability. Most large language models (LLMs) use matrix multiplication for the majority of their compute tasks and Etched estimated that Nvidia’s H100 GPUs only use 3.3% percent of their transistors for this key task. This means that the remaining 96.7% silicon is used for other tasks, which are still essential for general-purpose AI chips. Etched made a huge bet on transformers a couple of years ago when it started the Sohu project. This chip bakes in the transformer architecture into the hardware, thus allowing it to allocate more transistors to AI compute. We can liken this with processors and graphics cards let’s say current AI chips are CPUs, which can do many different things, and then the transformer model is like the graphics demands of a game title. Sure, the CPU can still process these graphics demands, but it won’t do it as fast or as efficiently as a GPU. A GPU that’s specialized in processing visuals will make graphics rendering faster and more efficient. This is what Etched did with Sohu. Instead of making a chip that can accommodate every single AI architecture, it built one that only works with transformer models. The company’s gamble now looks like it is about to pay off, big time. Sohu’s launch could threaten Nvidia’s leadership in the AI space, especially if companies that exclusively use transformer models move to Sohu. After all, efficiency is the key to winning the AI race, and anyone who can run these models on the fastest, most affordable hardware will take the lead." TomsHardware: https://lnkd.in/g2ZGiU-z #ai #cloud #aicloud #cloudai #cloudgpu #genai #transformermodel
-
𝐆𝐏𝐔 = 𝐆𝐚𝐦𝐢𝐧𝐠

Isn’t that what most of us have thought for years? But did you know that these powerful chips are also driving innovation across industries you might never expect? Industries like medical imaging, financial modeling, automotive technology, and even space exploration.

Computing has come a long way, and at the heart of this evolution are two key players: CPUs and GPUs. While CPUs are designed to handle sequential tasks efficiently, GPUs are the multitaskers, taking computing to a whole new level by processing thousands of tasks simultaneously.

Chances are, you’ve seen the video of the Mythbusters, Adam Savage and Jamie Hyneman, demonstrating this difference using paintball cannons. A single paintball cannon represents a CPU, shooting one paintball at a time — this is sequential processing in action. Now imagine 1,100 paintball cannons firing all at once; that’s your GPU, blasting through tasks with parallel processing. The difference is clear: while CPUs excel at detailed, step-by-step tasks, GPUs are built for scenarios that require handling massive amounts of data at lightning speed.

Did you know the global GPU market size is estimated at $75.7 billion this year? And the world’s top chipmakers — AMD, Intel, and Nvidia — have already shipped 70 million GPUs in the first quarter.

But how did we get here? Nvidia, the company that pioneered the modern GPU, was founded in 1993 with a vision to transform visual computing. At the time, computers were limited to sequential processing: great for basic tasks but slow for anything requiring large-scale calculations. Nvidia's breakthrough was developing a chip capable of handling thousands of tasks at once, drastically improving computer performance, especially in gaming, scientific research, and later, AI. Fast forward to today, and the company is valued at nearly $3 trillion, largely thanks to its innovations in GPUs that have drastically changed what computers can do.

Over the years, we’ve seen several inflection points in computing: the explosion of the internet, the rapid growth of artificial intelligence, and the rise of GPUs. Each step brought new challenges that demanded more power, speed, and efficiency — and GPUs delivered every time.

With the global GPU market projected to reach around $1,414.39 billion by 2034, it's clear that these powerful chips are becoming an essential component of our digital future. GPUs will keep us moving forward as AI and data demands keep rising. The applications are limitless: from accelerating and improving real-time healthcare diagnostics, to powering driverless cars that navigate with human-like precision, to facilitating advancements in climate change research.

So, the question is no longer "what can GPUs do?" but rather "what can't they do?"

Video source: NVIDIA

#computing #AI #innovation #technology #GPU
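For the curious, the paintball-cannon contrast above maps directly onto code. A minimal sketch: the same arithmetic done one element at a time versus handed to vectorized routines in bulk (NumPy on a CPU here, standing in for the parallel style of execution that GPUs push much further).

```python
# Sequential vs. bulk processing of the same arithmetic. The element-by-element
# Python loop plays the role of "one paintball at a time"; the vectorized call
# hands the whole array to optimized, hardware-backed routines in one shot.

import time
import numpy as np

n = 2_000_000
a = np.random.rand(n)
b = np.random.rand(n)

start = time.perf_counter()
out_loop = [a[i] * b[i] for i in range(n)]   # one multiply at a time
loop_s = time.perf_counter() - start

start = time.perf_counter()
out_vec = a * b                              # whole array in one call
vec_s = time.perf_counter() - start

print(f"Python loop: {loop_s:.3f} s, vectorized: {vec_s:.4f} s "
      f"(~{loop_s / vec_s:.0f}x faster)")
```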
-
This Data Center Stock Could Go Parabolic Following Nvidia’s Blackwell Launch - https://lnkd.in/d3_Zjdkf

#cryptocurrency #bitcoin #news

Early signs indicate that Nvidia's Blackwell GPUs are going to be a big winner for the company.

Believe it or not, Nvidia was once a company primarily focused on the gaming market. But over the last two years, the company has emerged as the world's preeminent AI operation. How did that happen? Well, one of Nvidia's core products is an advanced chipset known as a graphics processing unit (GPU). GPUs are a critical piece of infrastructure in developing generative AI, and they've become a business worth tens of billions of dollars for Nvidia.

Later this year, Nvidia is expected to release its most powerful GPU architecture yet -- known as Blackwell. While this will surely be a tailwind for the semiconductor darling, I see another tempting opportunity that's hiding in plain sight. Below, I'm going to break down how the Blackwell launch could make data center company Vertiv (VRT -2.19%) a lucrative choice for AI investors.

How big is Blackwell going to be?
It's hard to say for certain how big of a business Blackwell will become for Nvidia. But with that said, some early trends uncovered by industry research analysts are hinting that the release is going to be a big hit. Last month, Morgan Stanley noted that their forecasts indicate Blackwell could generate $10 billion in revenue just in Nvidia's fourth quarter. Shortly thereafter, Morgan Stanley analyst Joseph Moore issued a report stating that Blackwell GPUs are already sold out for the next 12 months. I guess Nvidia CEO Jensen Huang wasn't kidding when he said that demand for Blackwell is "insane."

These tailwinds are undoubtedly a good sign for Nvidia. Below, I'm going to detail why Nvidia's success with Blackwell should parlay into a tremendous opportunity for Vertiv.

Why Vertiv should benefit
At their core, GPUs have the ability to process sophisticated programs and algorithms that help train machine learning applications or large language models (LLMs). While this might sound simple, GPUs are far more complex than running a software program on your laptop. IT architecture specialists such as Super Micro Computer or Dell Technologies help build the infrastructure that houses GPUs. Essentially, GPUs are integrated into clusters on server racks that sit inside data centers. Since the chipsets are constantly running programs and processing data, it's not surprising to learn that data centers consume high levels of energy and run the risk of overheating.

Right now, data centers typically rely on air conditioning units, power generators, and fans to offset heat. However, as AI infrastructure spend continues to rise, data centers are going to need to identify more efficient ways to tame heat management.
-
THE ONE THING YOU NEED TO KNOW ABOUT DATA THIS WEEK is what happened at Nvidia’s developers’ conference last week. With 16,000 attendees packed into purple-lit stadium seats, 300,000 watching virtually, a fifty-foot projection screen, and an opening act, this was more like a rock concert or a megachurch than a user conference for a frickin chip manufacturer. People are calling it “AI Woodstock.”

Nvidia makes GPUs. A year ago you’d never heard of them. Now their market cap is $2.35 trillion--bigger than Amazon. And it did feel like a turning point.

1. You think of GPUs as hardware. Nvidia is making them into systems.
I think of silicon chips like toys. Like if I placed a big order of GPUs from Nvidia, a truck would show up at my house and tip a zillion tiny plastic chips into my driveway like Legos. Obviously this is wrong. Chips are not something you could drop off. They need software. Code instructs the chips. Nvidia has been offering its CUDA software for years. CUDA links the program running on your CPU… to the GPU that Nvidia has just sold you. Sure, you could create your own software. But why bother? Nvidia’s is better than anything you could make, and it’s tailor-made for the hardware. And once you’re hooked on the software… now Nvidia can push any number of services to you. Hardware + software + services = systems.

2. This will accelerate AI adoption… the way cloud computing spread the use of big data.
In the keynote, CEO Jensen Huang announced something called “microservices.” Micro-wha? They’ll ship you containers with “pre-trained models”... to connect you to the LLM of your choice… and use your own data… to generate… well, whatever you want. That means AI computing can be dialed up (heavy/custom) and down (light/off-the-shelf) according to the savvy of the user. Johnson & Johnson makes Band-Aids and Tylenol… but now they’re partnering with Nvidia for surgical analytics. Mercedes-Benz AG makes luxury sedans… now they do self-driving cars with Nvidia. The U.S. Department of Energy (DOE)’s Office of Science uses Nvidia to help their mission to create climate models. It’s like the birth of cloud computing ten years ago. Suddenly you didn’t need a Level 3 data center to use your big data. You needed an AWS account and cash to subscribe. That pushed big data to organizations of every size.

3. What does this mean to us?
- Cash, profit, and market power will pump into Nvidia. It’s all about those network effects. Razors and razor blades.
- Look for innovation, anywhere--nonprofits! B-Corps!--to accelerate on the back of AI infrastructure-for-rent.
- Look for tech-soft sectors (like consumer) to get savvier from using the new tools.
- There will be a pull-forward effect on big data. The return on data just got higher… the investment lower (it’s just a rental now)… so more players will gather and use their data to advance their mission with AI.

GET THE DATASTORY IN YOUR INBOX > https://lnkd.in/ecqMxbfY
-
The news from Nvidia - Let me Perplexity that for you...

The top announcements related to Nvidia from the GTC 2024 event include:

1. **Unveiling of Blackwell AI Platform**: Nvidia introduced the Blackwell platform, which includes the GB200 chip, touted as the "world's most powerful chip" for AI. The Blackwell GPU is expected to significantly enhance the capabilities and efficiency of AI applications, offering up to 30 times more inference throughput and four times better training performance than its predecessor, Hopper[1][2][5].

2. **GB200 Grace Blackwell Superchip**: Nvidia announced the GB200 Grace Blackwell Superchip, which connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU. This superchip is designed to deliver massive AI processing power and will be available on NVIDIA DGX™ Cloud[2].

3. **Nvidia's Omniverse and Apple's Vision Pro**: Nvidia is bringing its Omniverse platform to Apple's Vision Pro headset, enabling developers to create advanced 3D experiences and stream them to the headset[2].

4. **Project GR00T for Humanoid Robots**: Nvidia announced "Project GR00T," a foundation model for humanoid robots, aiming to create robots that can understand natural language and emulate human movements[2].

5. **Blueprint for 'Next Gen' Data Centers**: Nvidia showcased a 3D blueprint for next-generation AI data centers, emphasizing performance, energy efficiency, and scalability[2].

6. **Expansion of Nvidia and Oracle's Sovereign AI Partnership**: The collaboration between Nvidia and Oracle is expanding to deliver sovereign AI solutions globally; sovereign AI refers to a nation's ability to produce AI using its own resources[2].

7. **Nvidia Inference Microservice (NIM)**: Nvidia introduced NIM, a revenue-generating software tool designed to streamline the use of older Nvidia GPUs for AI inference tasks[1].

8. **New Software Tools for Developers**: Nvidia released a suite of new software tools that make it simpler for developers to bring AI to market, including tools that improve the utilization of Nvidia GPUs[1].

9. **DGX SuperPOD and GB200 Systems**: Nvidia revealed new versions of its DGX AI systems and SuperPOD rack-scale architecture, which promise significant increases in inference performance and training speed[5].

10. **6G Research and Omniverse Cloud APIs**: Nvidia announced its involvement in 6G research and the availability of Omniverse Cloud APIs, which will help integrate Omniverse technologies into existing design and automation software[5].

11. **Thor SoC in the Automotive Industry**: Nvidia's Drive Thor system-on-chip (SoC) will be used by several automakers in their future electric vehicle fleets, offering high performance and system cost reduction[5].

12. **Generative AI for RTX PCs and Workstations**: Nvidia announced GeForce RTX™ SUPER desktop GPUs for enhanced generative AI performance and new AI laptops from top manufacturers, along with new RTX™-accelerated AI software and tools[6].
Exciting advancements in AI chip technology! Looking forward to seeing the impact.