DATACENTER SOLUTIONS
MEET GUSTAV EDGE IN THE DATACENTER
Efficiency at the top
We have set up two data centers with our low-energy GUSTAV Quadro modules in Düsseldorf and Berlin, Germany. From now on, our industrial customers can use AI-based services for indexing, storage and testing. Since September, we also support Cloudflare integration.
Our low-power hardware is suited for any machine learning task. This new technology combines multiple integrated graphics cards with low-energy ARM CPU cores. These components are ideal for the computing center of the future.
With over 10 years of experience, we now offer these powerful modules to enterprises and system integrators. Our modules are ideal for running AI models in production with the lowest possible power consumption in the cloud.
During the development phase, we concentrated on accommodating as many instances as possible in the smallest space. We can fit two modules per half inch in the vertical direction. In depth, the modules can extend up to 1 meter.
This gives us much more usable space per device and lets us achieve higher capacity in the data center than would be possible with standard servers.
The result is a more efficient use of each rack within the data center. Thanks to the low power consumption, we generate less waste heat and therefore lower CO2 emissions.
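As a rough back-of-the-envelope illustration of the density figures above, the short Python sketch below estimates how many modules fit into one rack. The 42U rack height and its usable vertical space are our own assumptions for illustration, not figures from the product description.

```python
# Back-of-the-envelope density estimate (illustrative assumptions only).
RACK_UNITS = 42                 # assumed standard full-height rack
INCHES_PER_U = 1.75             # 1U = 1.75 inches by definition
MODULES_PER_INCH = 4            # two modules per half inch, per the text above

usable_height_in = RACK_UNITS * INCHES_PER_U          # 73.5 inches
modules_per_rack = int(usable_height_in * MODULES_PER_INCH)

print(f"Usable vertical space: {usable_height_in} in")
print(f"Estimated module positions per rack: {modules_per_rack}")
```

Under these assumptions, a single rack offers room for roughly 290 module positions, which is what makes the density advantage over standard servers so significant.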
Per CPU and GPU module, power consumption is between 3 and 30 watts. The larger modules with an ARM64 CPU and GPU, which offer up to 32 GB RAM, consume 10 to 50 watts. The power limit can be adjusted in real time per module by the customer without rebooting. With the focus on artificial intelligence, it is even possible to grow or shrink the neural networks at runtime.
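To give a feel for how such a runtime adjustment could look from the customer's side, here is a minimal Python sketch that calls a hypothetical REST endpoint to change a module's power limit. The base URL, endpoint path, payload fields and token are placeholder assumptions, not a documented GUSTAV API.

```python
# Hypothetical sketch: adjusting a module's power budget at runtime.
# Endpoint, payload fields and authentication are illustrative assumptions.
import requests

API_BASE = "https://gustav.example.com/api/v1"   # placeholder address
API_TOKEN = "YOUR_API_TOKEN"                     # placeholder credential

def set_module_power_limit(module_id: str, watts: int) -> dict:
    """Request a new power limit (in watts) for a single module.

    The module keeps running; no reboot is required, as described above.
    """
    if not 3 <= watts <= 50:
        raise ValueError("power limit must be within the 3-50 W range")
    response = requests.put(
        f"{API_BASE}/modules/{module_id}/power-limit",
        json={"watts": watts},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Example: cap module "a7" at 25 W during off-peak hours.
    print(set_module_power_limit("a7", 25))
```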
All our modules are based on ARM64 with embedded GPUs. The modules are delivered in half-inch format, which saves additional space in the rack. Each module is always delivered with an internal SSD (solid-state drive), optionally sized between 256 and 2048 gigabytes.
Optimus is installed in data centers across Europe. If you want to learn more about our solutions, you can download the following presentation.
The smallest space for the highest efficiency is the goal of our openrack system, which can be equipped with hundreds of small AI nodes. Starting from the smallest unit, our hypervisor takes the next step in making AI applications scalable.
Each rack module can host multiple submodules. The performance classes are divided across the different module types.
Over 100 powerful GPUs, all controllable through our hypervisor.
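As an illustration of what "controllable through our hypervisor" could mean in practice, the following Python sketch queries a hypothetical hypervisor endpoint and lists the GPU nodes together with their current power draw. The URL, endpoint and response fields are assumptions for illustration, not a documented interface.

```python
# Hypothetical sketch: listing GPU nodes exposed by the hypervisor and
# reporting their current power draw. Endpoint names and response fields
# are illustrative assumptions, not a documented interface.
import requests

HYPERVISOR_URL = "https://hypervisor.example.com/api/v1"  # placeholder

def list_gpu_nodes() -> list[dict]:
    """Return the list of GPU nodes known to the hypervisor."""
    response = requests.get(f"{HYPERVISOR_URL}/nodes", timeout=10)
    response.raise_for_status()
    return response.json()["nodes"]

if __name__ == "__main__":
    for node in list_gpu_nodes():
        print(f"{node['id']}: {node['gpus']} GPUs, {node['power_watts']} W")
```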