Dihuni announced that it has started shipping optiReady GPU (graphics processing unit) servers and workstations designed for generative AI (artificial intelligence) and LLM (large language model) applications. These pre-configured systems aim to simplify the selection of generative AI infrastructure and accelerate deployment from procurement to running applications.
Dihuni has launched a suite of new GPU servers with an online configurator that lets customers select GPU, CPU (central processing unit) and other configuration options. These servers can be preloaded with an operating system and AI frameworks such as PyTorch, TensorFlow and Keras. Servers can be purchased stand-alone, or, for larger deployments such as LLM and generative AI workloads, Dihuni offers racked and cabled pods of high-performance GPU clusters.
“New Generative AI applications require extreme performance GPU systems. We’re using our years of expertise, technologies, partnerships and supply chain to help Generative AI software companies accelerate their application development. We have been helping customers in multiple verticals with their GPU server requirements, and offer choice and flexibility from a system architecture and software standpoint to ensure we are delivering systems optimised for Generative AI applications,” says Pranay Prakash, chief executive officer at Dihuni.
The complete line of generative AI accelerated GPU servers gives students, researchers, scientists, architects and designers the flexibility to select systems correctly sized and optimised for their AI and HPC (high-performance computing) applications.
More information on servers featuring recent GPUs can be found here.
Comment on this article below or via Twitter: @IoTNow_ or @jcIoTnow