AMD reveals core specs for Instinct MI355X CDNA4 AI accelerator — slated to ship in the second half of 2025 🤩 AMD Instinct MI325X has officially launched.

#AMD provided more details on its upcoming Instinct MI350 CDNA4 #AI accelerator and data center GPU today, formally announcing the Instinct MI355X. It also provided additional details on the now-shipping MI325X, which has apparently received a slight trim in #memory capacity since the last time AMD discussed it.

The MI355X is slated to begin shipping in the second half of 2025, so it's still a ways off. However, AMD has seen massive adoption of its AI accelerators in recent years, with the MI300 series being the fastest product ramp in AMD's history, so like NVIDIA it is now on a yearly cadence for product launches.

AMD is presenting the MI355X as a "preview" of what's to come, which means some of the final specifications could change. It will support up to 288GB of HBM3E memory, presumably across eight stacks. AMD said it will feature 10 "#compute elements" per #GPU, which on its own doesn't tell us much about its potential, but AMD did provide some other initial specifications.

A big thank you to Jarred Walton and Tom's Hardware for the full article with more background and insights via the link below 💡🙏👇
https://lnkd.in/e6z3c_RF

#semiconductorindustry #semiconductors #semiconductor #chip #it #datacenter #tsmc #chips #innovation #technology #chiplet #tech #computer #server #taiwan #usa #china #ic
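A quick sanity check on that capacity figure, assuming the eight-stack layout the post only describes as "presumably" (a back-of-the-envelope sketch, not an AMD-confirmed configuration):

```python
# 288GB of HBM3E spread across eight stacks implies 36GB per stack,
# which is exactly what a 12-Hi stack of 3GB (24Gb) DRAM dies provides.
total_capacity_gb = 288
stacks = 8                  # assumed, per the post ("presumably across eight stacks")
die_capacity_gb = 3         # 24Gb HBM3E DRAM die

per_stack_gb = total_capacity_gb / stacks
dies_per_stack = per_stack_gb / die_capacity_gb

print(f"{per_stack_gb:.0f} GB per stack -> {dies_per_stack:.0f}-Hi stacks of {die_capacity_gb}GB dies")
# -> 36 GB per stack -> 12-Hi stacks of 3GB dies
```

That 36GB-per-stack result lines up with the 12-Hi HBM3E stacks discussed further down this feed.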
-
👉 Intel's Power Play: New CPUs and AI Accelerator to Rival NVIDIA
✍ Intel is stepping up its game in the AI and high-performance computing (HPC) arena with the launch of its new 𝐗𝐞𝐨𝐧 𝟔 𝐂𝐏𝐔 and 𝐆𝐚𝐮𝐝𝐢 𝟑 𝐀𝐈 𝐚𝐜𝐜𝐞𝐥𝐞𝐫𝐚𝐭𝐨𝐫.
➧ Xeon 6 CPU: This powerhouse offers double the performance of its predecessor, making it a formidable choice for AI and HPC workloads.
➧ Gaudi 3 AI Accelerator: This cutting-edge chip is specifically designed to handle large-scale generative AI applications, such as creating text or images.
With these new offerings, Intel is clearly aiming to compete with NVIDIA, especially amid rumors of a potential takeover of Intel. These moves demonstrate Intel's commitment to staying at the forefront of AI technology.
#Intel #AI #HPC #Technology #Innovation #Xeon6 #Gaudi3
-
Is AMD Instinct™ MI300X affected by CVE-2023-4968 (GPU memory leak)? AMD has the answer. Official announcement on May 7, 2024. This article was published on May 21, 2024.

Preface: When a vulnerability's disclosure date is far in the past, I sometimes lose interest. But maybe I'd be missing a major technical detail. AMD officially released CVE-2023-4869 on March 7, 2024, and it happened to wake me up! Although today is May 21, 2024, it seems my study is not too late!

Background: Is the MI300X better than the H100? While both GPUs are capable, the MI300X has the edge in memory-intensive tasks like rendering large scenes and simulations, while the H100 excels in AI-enhanced workflows and ray-traced rendering performance. AMD Instinct™ MI300X accelerators are designed to deliver leadership performance for generative AI workloads and HPC applications.

Vulnerability details: Insufficient clearing of GPU memory could allow a compromised GPU kernel to read local memory values from another kernel across user or application boundaries, leading to loss of confidentiality.

Official announcement: Please refer to the link for details – https://lnkd.in/geEH_jSp
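To make the "leftover local memory" idea concrete, here is a minimal, hypothetical sketch (not AMD's advisory code or the published proof of concept): one kernel leaves a pattern in GPU local memory, and a second kernel that never wrote that memory dumps it. It assumes pyopencl, numpy and an OpenCL-capable GPU; whether anything actually survives between dispatches depends entirely on the driver and firmware, and a same-process toy like this does not by itself cross the user or application boundary the CVE describes.

```python
import numpy as np
import pyopencl as cl

SRC = r"""
__kernel void writer(__local uint *scratch) {
    // "Victim": leaves a recognizable pattern in local (LDS) memory.
    scratch[get_local_id(0)] = 0xC0FFEEu;
    barrier(CLK_LOCAL_MEM_FENCE);
}
__kernel void listener(__local uint *scratch, __global uint *out) {
    // "Listener": reads local memory it never wrote and copies it out.
    out[get_global_id(0)] = scratch[get_local_id(0)];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, SRC).build()

n = 64                                   # one small work-group
out = np.zeros(n, dtype=np.uint32)
out_buf = cl.Buffer(ctx, cl.mem_flags.WRITE_ONLY, out.nbytes)
local_arg = cl.LocalMemory(4 * n)        # 4 bytes per uint

prg.writer(queue, (n,), (n,), local_arg)
prg.listener(queue, (n,), (n,), local_arg, out_buf)
cl.enqueue_copy(queue, out, out_buf)

leaked = int(np.count_nonzero(out == 0xC0FFEE))
print(f"{leaked}/{n} uninitialized local-memory words still held the writer's pattern")
```

On a fixed driver stack the listener should see only zeros; the point of the vulnerability is that without explicit clearing, local memory contents can outlive the kernel that wrote them.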
-
Intel Unveils New Chips to Compete with AMD, NVIDIA, and Qualcomm

Intel has recently made notable strides in the semiconductor industry with the unveiling of its latest chip generation. This announcement marks a significant milestone for the company as it seeks to bolster its position in an increasingly competitive landscape dominated by industry giants such as AMD, NVIDIA, and Qualcomm.

Find out more at: https://lnkd.in/dxhx_Ssy

#StudyTime #SmartLearning #QuckLearn #DailyLearning #LearnEveryday #LearningMadeEasy #StudyShorts #EduShorts #LearnWithMe #LearningIsFun #BrainTeasers #OnlineClasses #OnlineLearning #Tech #Technology #TechReview #Gadgets #TechNews #TechTips #TechTalk #TechTrends #TechVideos #Innovation #TechCommunity #TechLife #FutureTech #TechReviews #GadgetReview #ai #deeplearning
-
SK hynix preps for NVIDIA #Blackwell Ultra and AMD Instinct MI325X with 12-Hi #HBM3E 🤩 Mass production has begun ahead of its competitors 💪

#SKhynix has started mass production of its 12-Hi HBM3E #memory stacks, ahead of its rivals. The new modules feature a 36GB capacity and set the stage for next-generation #AI and #HPC #processors, such as #AMD's Instinct MI325X, which is due in the fourth quarter, and #Nvidia's Blackwell Ultra, which is expected to arrive in the second half of next year.

SK hynix's 12-Hi 36GB HBM3E stacks pack twelve 3GB #DRAM layers and feature a data transfer rate of 9.6 GT/s, providing a peak bandwidth of 1.22 TB/s per module. A memory subsystem featuring eight of the company's 12-Hi 36GB HBM3E stacks will thus offer a peak bandwidth of 9.83 TB/s. Real-world products are unlikely to run these HBM3E devices at their full speed, as developers tend to prioritize reliability, but we don't doubt that HBM3E memory subsystems will offer higher performance than their predecessors.

Despite packing 50% more memory devices, the new 12-Hi HBM3E stacks feature the same z-height as their 8-Hi predecessors. To achieve this, SK hynix made the DRAM dies 40% thinner.

Thanks again to Anton Shilov and Tom's Hardware for the full article with more background and insights via the link below 🙏💡👇
https://lnkd.in/ezDksuGV

#semiconductorindustry #semiconductormanufacturing #southkorea #it #datacenter #computing #server #computer #technology #tech #ic
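Those bandwidth figures check out with quick arithmetic, assuming the standard 1024-bit interface per HBM stack (my assumption; the post doesn't state the bus width):

```python
# Peak bandwidth = data rate per pin * bus width, per stack and for eight stacks.
data_rate_gtps = 9.6        # GT/s per pin
bus_width_bits = 1024       # assumed per-stack HBM interface width
stacks = 8

per_stack_gbs = data_rate_gtps * bus_width_bits / 8     # GB/s per stack
subsystem_tbs = per_stack_gbs * stacks / 1000           # TB/s for an 8-stack subsystem

print(f"Per stack: {per_stack_gbs:.1f} GB/s (~{per_stack_gbs / 1000:.2f} TB/s)")
print(f"Eight stacks: {subsystem_tbs:.2f} TB/s")
# -> 1228.8 GB/s (~1.23 TB/s) per stack and 9.83 TB/s for eight stacks,
#    matching the 1.22 TB/s and 9.83 TB/s quoted above.
```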
-
At #Computex 2024, #AMD introduced the Instinct MI325X #GPU, featuring significant upgrades over its predecessor, the MI300X, and even over Nvidia's Hopper H200. The #MI325X boasts 288 GB of HBM3e memory with 6.0 TBps of bandwidth, compared to the MI300X's 192 GB and 5.3 TBps. It can handle a 1-trillion-parameter model, double what the H200 can manage, and delivers a peak theoretical throughput of 2.6 petaflops for FP8 and 1.3 petaflops for FP16. The MI325X utilizes AMD's #CDNA 3 architecture, designed for data center #AI and high-performance computing.

AMD also announced future plans for the Instinct line, including the MI350 series in 2025 with a claimed 35x increase in AI inference performance, and the MI400 series in 2026 based on the CDNA "Next" architecture. The MI350X will feature the same 288 GB of HBM3e memory but move to a #3nm manufacturing process and introduce FP4 and FP6 floating point formats.

AMD's Instinct GPUs are gaining traction with partners like #Microsoft Azure, #Dell Technologies, #Supermicro, #Lenovo, and #HPE.
https://lnkd.in/gdRa_U2J
AMD updates Instinct data center GPU line
networkworld.com
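A rough sketch of where the 1-trillion-parameter claim likely comes from, assuming an 8-GPU platform and counting model weights only, with no KV cache or activations (both simplifications are mine, not AMD's published math):

```python
# Can a 1-trillion-parameter model's weights fit in aggregate HBM capacity?
def capacity_check(params_billion, bytes_per_param, hbm_per_gpu_gb, gpus=8):
    needed_gb = params_billion * bytes_per_param      # 1e9 params * bytes/param ~= GB
    available_gb = hbm_per_gpu_gb * gpus
    return needed_gb, available_gb

for name, hbm_gb in (("MI325X", 288), ("H200", 141)):
    needed, avail = capacity_check(1000, 2, hbm_gb)   # 1T parameters at FP16 (2 bytes each)
    print(f"{name}: need ~{needed:.0f} GB, have {avail} GB aggregate -> fits: {needed <= avail}")
# MI325X: need ~2000 GB, have 2304 GB aggregate -> fits: True
# H200:   need ~2000 GB, have 1128 GB aggregate -> fits: False (roughly half the capacity,
#         which lines up with "double what the H200 can manage")
```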
-
Hey everyone,

We're excited to announce the release of our latest AI visual inference samples (https://lnkd.in/eJ6A96px). The highlight of this release is the introduction of heterogeneous pipelines that leverage Intel® Xeon® AI capabilities in combination with hardware media processing.

Here's how it works (see illustration):
- take Intel® Xeon® Processors equipped with Intel® discrete GPU card(s)
- execute decoding and preprocessing on the hardware-accelerated media engines of the Intel® Data Center GPU Flex Series (or Intel® Arc™)
- run AI inference on Intel® Xeon® CPUs with support for AMX (Intel® Advanced Matrix Extensions) instructions.

This approach allows you to fully utilize the GPU media engines, achieving significant speed-ups compared to GPU-only solutions. We particularly recommend the Intel® Data Center GPU Flex 140 Series, as it features two GPUs per card and a total of four hardware media engines. This setup provides better balancing with Intel® Xeon® Processors. The number of GPU cards per host will depend on your Intel® Xeon® CPU model; high-end versions may require several cards to fully balance AI capabilities for media analytics models like ResNet-50.

All new heterogeneous pipelines have the suffix "GPU_CPU." You can find them, along with the others, in the folder samples/openvino.

#IAmIntel #openvino #intel #ai #deeplearning
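This is not the released sample code, just a minimal sketch of the CPU half of such a decode → preprocess → inference split, assuming OpenVINO's Python API and using OpenCV's software decoder as a stand-in for the GPU media engines; the model path, video path and ResNet-50 choice are hypothetical placeholders:

```python
# Minimal sketch: decode frames, preprocess, run inference on the Xeon CPU via OpenVINO.
# On capable Xeon processors, the CPU plugin uses AMX automatically for int8/bf16 models.
import cv2
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model("model/resnet50.xml", "CPU")   # CPU-side inference
output = compiled.output(0)

cap = cv2.VideoCapture("video.mp4")                          # stand-in for media-engine decode
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess: resize to the model's input size and lay out as NCHW float32.
    blob = cv2.resize(frame, (224, 224)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    result = compiled(blob)[output]
    print("top-1 class:", int(np.argmax(result)))
cap.release()
```

In the released samples the decode and preprocessing stages run on the Flex-series media engines instead of OpenCV, which is where the extra throughput over GPU-only pipelines comes from.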
-
AMD has launched its groundbreaking Zen 5 Ryzen processors, marking a significant leap in CPU technology for next-generation AI PCs. The new AMD Ryzen AI 300 Series processors feature the world's most powerful Neural Processing Unit (NPU), paving the way for AI-enhanced computing on laptops. Additionally, AMD is introducing the next-gen AMD Ryzen 9000 Series processors for desktops, reaffirming its leadership in performance and efficiency for gamers and content creators.

Zen 5 processors promise improved performance and efficiency, essential for handling AI workloads that require intensive computational power. These processors include enhanced AI capabilities, with dedicated AI acceleration hardware such as NPUs, speeding up AI inference and training tasks for faster and more efficient execution of AI algorithms. Optimized for AI workloads, Zen 5 processors offer improved vector processing capabilities, enhanced support for AI frameworks and libraries, and seamless integration with AI software tools.

This launch cements AMD's position in the CPU market, enabling it to compete robustly against Intel and Nvidia in the AI PC and data center markets.

#AMD #Zen5 #Ryzen #NextGenAI #NeuralProcessing #CPUTechnology #AIComputing #TechInnovation #RyzenAI300 #PerformanceEfficiency
-
I was watching a great event yesterday, referring of course to the AMD #AdvancingAI event in San Francisco! Here are some key highlights I captured👇
- New AMD #Instinct MI325X accelerators
- Preview of the next-gen Instinct MI350 series
- ROCm 6.2 software stack with enhanced AI features
- Out-of-the-box support for popular models, covering 1M+ models
- World-class training and inference performance
- Now Silo AI solves the last mile: building AI on AMD compute platforms

AMD also announced its fifth-generation #EPYC #CPUs, formerly known as 'Turin', positioning the 5th Gen EPYC processor as the world's best server CPU for enterprise, AI and cloud:
✅ Up to 4X faster time to results on business applications such as video transcoding.
✅ Up to 3.9X faster time to insights for science and HPC applications that solve the world's most challenging problems.
✅ Up to 1.6X the performance per core in virtualized infrastructure.
✅ Finally, the purpose-built AI host node CPU, the EPYC 9575F, can use its 5GHz max frequency boost to help a 1,000-node AI cluster drive up to 1.6 million more tokens per second.

Dr. Lisa Su, AMD's CEO, emphasized the company's commitment to AI: "We're focused on delivering the high-performance hardware and software solutions needed to power the next generation of AI."

#AMD #AdvancingAI #ArtificialIntelligence #TechInnovation #AIHardware #TogetherWeAdvance #TogetherWeAdvance_AI #AIforpeople
Lisa Su Vamsi Boppana Peter Sarlin Jens Stapelfeldt

You can watch the event on YouTube: https://lnkd.in/eSgEFtNG
-
🔴💻 AMD: The Company with All Three Pieces for AI PCs! 💻🔴
🏆 Unmatched Capabilities: "We're probably the only company that has all the pieces," says Jack Huynh, SVP at AMD, emphasizing our unique combination of CPUs, GPUs, and NPUs.
💡 Innovation at Its Best: The latest AMD Ryzen AI 300 Series processors and our unified software architecture for NPUs are proof of our commitment to advancing AI technology.
🔗 Strategic Partnerships: Collaborating with industry leaders like Adobe, Zoom, and Microsoft to shape the future of AI PCs and meet the needs of developers.
🎯 Focused Vision: Over-investing in software to accelerate development and ensure our AI PCs deliver top performance and security.
🛠️ Educational Journey: Emphasizing the importance of educating OEMs and customers about the transformative potential of AI PCs.
👀 Read more here: https://lnkd.in/g4dJWHCU
#AMDAI #AIPC #Innovation #TechLeadership #RyzenAI
"We're probably the only company that has all the pieces" - AMD on why it is ahead of rivals Nvidia and Intel when it comes to AI PCs
techradar.com
-
SAMSUNG bags an order from NVIDIA for AI CHIPS!! 👏👏👏👏

Samsung has won an order from NVIDIA for its 2.5D I-Cube packaging technology. The packaging technology allows multiple logic dies (CPU, GPU, NPU) and several high-bandwidth memory dies to be placed side by side on a silicon interposer, enabling the multiple dies to act as a single chip. Samsung said earlier that the technology would be used in high-performance chips for 5G, AI and large data centers. This win should help Samsung land more orders from AI chip companies for its 2.5D I-Cube packaging technology.

Thoughts??

#semiconductors #samsung #nvidia #chips #vlsi #embeddedsystems