The AI industry’s biggest players are putting all their chips on the table ♣ Armada was recently featured in The Wall Street Journal, showcasing AI’s impact on the growing demand for data centers. This year saw the unprecedented rise of AI leave companies scrambling to set up data centers and fighting over the dwindling supply of Nvidia chips. Many of them have looked to solutions like Armada, the world's first full-stack edge computing platform, which brings connectivity, compute, and AI to wherever your edge may be. It’s been a blast working at Armada and hearing how other technology leaders are tackling this issue. Can’t wait to see where the AI and computing journey takes us next 🚀 https://lnkd.in/gcj9gpVv
Crosby Schultz’s Post
-
This WSJ article perfectly summarizes the many roadblocks to deploying complex AI systems that go beyond AI chip shortages. Building enough data centers to meet the growing demand for AI applications requires reliable power supply, affordable real estate, custom cooling systems, and low-latency connectivity. This is exactly the problem that Armada is trying to solve through our modular data center, remote connectivity, and AI marketplace. Full-stack edge compute + Starlink + AI enables cutting-edge AI applications while circumventing these bottlenecks. From the article: "Armada, a San Francisco startup, builds data centers inside of shipping containers. The company can drop these portable facilities, full of Nvidia chip-powered servers, in locations such as remote areas of Texas or Africa that are near inexpensive sources of power like gas wells."
Why the AI Industry’s Thirst for New Data Centers Can’t Be Satisfied
wsj.com
-
Thursday School: Keep your Eyes on the Horizon(tal). Building upon last week’s post: at MWC the AI-RAN Alliance was launched. Its 3 focus areas are AI-on-RAN, AI-for-RAN and AI-and-RAN. Founding members SoftBank and NVIDIA have shared a neat 1-minute video on how combined vRAN/AI stacks at Regional Data-Centres can deliver low-latency edge services over pooled infrastructure, minimising hardware and maximising pooling gain! Of course this is predicated on the ability to run vRAN (including vCU and vDU centralisation, and indeed vUPF and other applications) and AI on common infrastructure. Peaks in RAN demand can be met without increasing the amount of under-utilised compute during network quiet hours, as spare CPU and GPU resource is simply diverted to AI computation. Such an approach also generates a relatively constant heat load. This in turn maximises the viability of finding a Heat Export partner (e.g. heating a Care Home), which means that carbon is not consumed for heating the recipient, nor in the cooling of the AI-and-RAN stack. As part of a multi-Cloud approach, this will significantly reduce the compute requirements of Mobile Networks, improving costs and Scope 1, 2 and 3. The only argument is whether we call it ‘avoided Scope 2 & 3’ or Scope 4! #EveryDaysaSchoolDay #Telecommunications #AI Previous Post: https://lnkd.in/eC_UbM9F
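The pooling idea above can be sketched in a few lines: RAN traffic is served first each hour, and whatever GPU capacity is left over is diverted to AI batch work, so total utilisation (and hence heat load) stays flat. This is a toy illustration only; the demand profile and capacity numbers are hypothetical, not the AI-RAN Alliance's actual scheduler.

```python
# Toy model of pooled RAN/AI compute. RAN demand gets priority each hour;
# spare GPU capacity is diverted to AI jobs, so utilisation stays constant.
# All numbers are illustrative.

TOTAL_GPU_UNITS = 100

# Hypothetical hourly RAN demand (busy daytime, quiet night), in GPU units.
ran_demand = [20, 15, 10, 10, 15, 30, 50, 70, 85, 90, 90, 85,
              80, 80, 85, 90, 95, 95, 90, 80, 60, 45, 35, 25]

def allocate(hour_demand: int, total: int = TOTAL_GPU_UNITS):
    """Serve RAN first; hand the remainder to AI workloads."""
    ran = min(hour_demand, total)
    ai = total - ran
    return ran, ai

schedule = [allocate(d) for d in ran_demand]

# Every hour the box runs at full capacity: constant load, constant heat.
for ran, ai in schedule:
    assert ran + ai == TOTAL_GPU_UNITS
```

At night the same hardware that carried the busy-hour RAN peak runs almost entirely AI work, which is the pooling gain the video describes.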
SoftBank Redefines Regional Data Center for AI and 5G
https://www.youtube.com/
-
wsj.com: The frenzy to build data centers to serve the exploding demand for artificial intelligence is causing a shortage of the parts, property and power that the sprawling warehouses of supercomputers require. The lead time to get custom cooling systems is five times longer than a few years ago, data center executives say. Delivery times for backup generators have gone from as little as a month to as long as two years. A dearth of inexpensive real estate with easy access to sufficient power and data connectivity has builders scouring the globe and getting creative. New data centers are planned next to a volcano in El Salvador and inside shipping containers parked in West Texas and Africa.

Earlier this year, data-center operator Hydra Host found itself in a bind, searching for 15 megawatts of power needed to operate a planned facility with 10,000 AI chips. The company went from Phoenix to Houston to Kansas City, Mo., to New York to North Carolina to find the right space. It is still on the hunt. The locations that had the power didn’t have the right cooling systems required to keep the servers operational. New cooling systems would take six to eight months to arrive, thanks to a supply crunch. Meanwhile, buildings that had the cooling didn’t have the transformers required to receive the additional power—those would take up to a year to arrive.

“With what we’re seeing, the fervor to build is probably the greatest since the first dot-com wave,” said Hydra Host Chief Executive Aaron Ginn. He said the search for the right parts and space has taken months longer than expected.

The demand for computational power to create AI systems has surged since late 2022, when OpenAI’s ChatGPT started showing the technology’s potential. Demand for computer servers equipped with new generations of AI chips—the most popular of which are graphics processing units, or GPUs, from Nvidia—is overwhelming existing data centers.
Why the AI Industry’s Thirst for New Data Centers Can’t Be Satisfied
wsj.com
-
Modern #AI models can strangle the bandwidth of many traditional data centers. Get an AI-optimized data center. Watch this video highlighting the NVIDIA Spectrum-X Networking Platform to see how.
NVIDIA Spectrum-X Platform: World's First Ethernet Fabric Built for AI
eticloud.lll-ll.com
-
Register Now for Our Expert Webinar, The AI Revolution: Transforming Data Center Capacity. Join Wesco and Panduit for our expert webinar, The AI Revolution: Transforming Data Center Capacity. This BICSI-accredited expert webinar will be held on July 25 at 11 AM EST and presented by Bob Wagner, Senior Business Development Manager at Panduit. Just a few years ago, few people paid attention to artificial intelligence (AI), but it’s now a priority for data center professionals. Not only has the demand for AI skyrocketed, but the requirements needed to house an AI system have increased to the point that even newer data centers may need major overhauls. This webinar will focus on what has changed and how data center operators can prepare for the AI revolution. Topics covered will include:
- AI basics
- Latest GPU offering from Nvidia
- Power requirements and future expectations
- Cooling requirements – when do you go to liquid?
- Changes to network architecture
- When to use DAC, ACC/AEC, AOC or passive fiber
- InfiniBand or Ethernet
Attend and earn 1 BICSI credit! *REGISTER* https://lnkd.in/ghfJSfHq
-
Trying to stand up your NVIDIA architecture quickly, based on Grace and Hopper, with storage optimized for these technologies? Have you heard of DDN Storage? #HPC #AI #NVIDIA #gpucomputing #DDNstorage #DGX #Hopper #H100 #fsi
Watch as Marc Hamilton, VP of Solutions Architecture at NVIDIA, introduces new systems based on DGX H100, Grace, and Hopper supercomputers, as well as exciting initiatives in energy efficiency and quantum frameworks. Don't miss out on this informative session that showcases the future of data center innovation: https://bit.ly/3OZg7MW #AI #DataSolutions #HPC #NVIDIA #DGX #GPU
On Demand: DDN Data, HPC & AI Summit EMEA 2023 | DE
https://www.ddn.com
-
According to Microsoft research, a small loss rate (e.g., 0.1%) along the transmission path can lead to dramatic RDMA throughput degradation, with throughput dropping below roughly 60% of line rate. This will happen in a non-scheduled fabric, with unpredictable impact caused by a network failure. An alternative solution is a fully scheduled AI Networking Fabric based on next-gen routers: the Distributed Disaggregated Chassis (DDC) design. Read more about this in the DriveNets white paper (Microsoft source: https://lnkd.in/dXGt_mdR)
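A back-of-envelope model shows why such a tiny loss rate is so costly for RDMA. Classic RoCE NICs recover losses with go-back-N: one dropped packet forces retransmission of everything after it in flight. The sketch below is a simplified approximation with an illustrative window size, not the model from the Microsoft paper.

```python
# Simplified go-back-N goodput model: each loss wastes roughly one
# window's worth of transmissions, so goodput ~ 1 / (1 + p * W).
# The window size (packets in flight) is an illustrative assumption.

def go_back_n_efficiency(loss_rate: float, window_pkts: int) -> float:
    """Approximate fraction of sent packets that are useful goodput."""
    return 1.0 / (1.0 + loss_rate * window_pkts)

# A 0.1% loss rate with ~1000 packets in flight roughly halves goodput,
# the same order of degradation the Microsoft study reports.
eff = go_back_n_efficiency(0.001, 1000)
print(f"goodput fraction at 0.1% loss: {eff:.2f}")
```

The point of a fully scheduled fabric is to avoid the in-network drops that feed the `loss_rate` term in the first place.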
Why Ethernet for AI Fabric? - DriveNets
drivenets.com
-
AI data centers differ from conventional data centers by housing servers that leverage AI chips, such as NVIDIA's GPUs, capable of running multiple computations simultaneously, necessitating added infrastructure and alternative cooling methods, like liquid cooling systems, to prevent overheating. These purpose-built data centers require significant capital and time investments for construction or retrofitting. The global AI infrastructure market, including data centers and related hardware, is projected to reach $422.55 billion by 2029, with a compound annual growth rate of 44% over the next six years, according to research firm Data Bridge Market Research.
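The growth figures above imply a striking starting point: compounding 44% annually for six years to reach $422.55 billion means the market is far smaller today. A quick sanity check (the helper name is ours, and the implied base-year figure is derived from the post's numbers, not stated in the research):

```python
# Back out the implied base-year market size from the projected
# end value ($422.55B by 2029) and the stated 44% CAGR over 6 years.

def implied_start(end_value: float, cagr: float, years: int) -> float:
    """Starting value consistent with an end value and a compound rate."""
    return end_value / (1 + cagr) ** years

start = implied_start(422.55, 0.44, 6)
print(f"implied base-year market size: ${start:.1f}B")  # ~ $47.4B
```

That is, the projection assumes the AI infrastructure market grows almost ninefold over the period.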
AI-Ready Data Centers Are Poised for Fast Growth
wsj.com