Bare Metal Pod: Genesis

Today, we’re going to embark on a journey of discovery, and unveil our latest product: Bare Metal Pod.

You know us for the services we provide: bare metal servers, managed and unmanaged virtualisation platforms, our 40+ public cloud services, domain names, and telco services.

This is just the tip of the iceberg, and to understand why we built and now offer Bare Metal Pod, we have to dig deeper.

So let’s begin this journey by exploring the origins of Bare Metal Pod; in later articles, we’ll cover the more technical details, as there’s a lot to touch on.

The OVHcloud way: more than just servers

As a cloud services provider, we supply the different platforms mentioned above. But most importantly, we have to take care of the infrastructure dedicated to these services, from the buildings, power and cooling to the software stack and automation required.

And we’ve been doing just this since 2001. It all started with the opening of our first datacentre in Paris, then building our own servers the next year, and our proprietary water-cooling solution the year after that.

At the core, we are all about efficiency, automation, and sustainability:

  • Repurposing buildings as datacentres
  • Designing our own servers to optimise performance and cost
  • Maximising cooling efficiency to cut waste
  • Automating everything to reduce errors and delays

And, in all modesty… we’re pretty good at all of these.

Optimising datacentres like a pro

Building our own servers in our Croix (FR) and Beauharnois (CA) plants means we can pack an enormous amount of compute into each square metre. We’re talking about 4 custom racks, each hosting 48 servers, all in just 3 sq.m and drawing up to 160 kW of 12 V DC power. That works out to a server density of about 5,000 W per sq ft, which beats 90% of the industry.
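
If you want to sanity-check those figures, here’s the back-of-the-envelope arithmetic (a rough sketch based only on the numbers above, not an official spec):

```python
# Back-of-the-envelope check of the rack density figures quoted above.
SQ_FT_PER_SQ_M = 10.7639

racks = 4
servers_per_rack = 48
footprint_m2 = 3           # total floor space for the 4 racks
power_w = 160_000          # up to 160 kW of 12 V DC power

servers = racks * servers_per_rack                 # 192 servers
footprint_ft2 = footprint_m2 * SQ_FT_PER_SQ_M      # ~32.3 sq ft
density_w_per_ft2 = power_w / footprint_ft2        # ~4,955 W per sq ft

print(f"{servers} servers, {density_w_per_ft2:,.0f} W per sq ft")
# -> 192 servers, 4,955 W per sq ft (roughly the 5,000 W figure quoted above)
```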

And on top of that, we’ve got our proprietary water-cooling system, which saves energy by removing the need for air conditioning around our servers. To further optimise air cooling, each of our racks is equipped with a large heat exchanger (we call it a chilled door) at the rear of the rack, dissipating the residual server heat into our water system. This keeps the datacentre comfortably warm for our staff and the network equipment, and extends hardware lifespan (less maintenance, fewer replacements, fewer outages… so more savings).

In addition to these physical optimisations, there’s our automation system. When a server or a cluster of servers has been assembled and tested in our plant, it’s shipped to the datacentre, then racked and connected to the power, network, and water-cooling systems by our DC staff.

And from there, everything is automated: server power management, discovery, testing, and readiness checks, right up to the moment the server is selected by a customer in their Control Panel and then configured. No human intervention is required, meaning no delays and no errors.
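
To give you a feel for what that looks like (the names and steps below are a simplified, hypothetical sketch, not our actual internal tooling), the automated intake of a freshly racked server boils down to a small state machine:

```python
# Hypothetical sketch of an automated server intake flow; the function
# names and states are illustrative, not OVHcloud's internal tooling.
from dataclasses import dataclass

@dataclass
class Server:
    serial: str
    state: str = "racked"

def power_on(server: Server) -> None:
    server.state = "powered"      # e.g. driven through the rack PDU / BMC

def discover(server: Server) -> None:
    server.state = "discovered"   # inventory CPU, RAM, disks, NICs

def burn_in(server: Server) -> bool:
    server.state = "tested"       # stress tests and firmware checks
    return True

def mark_ready(server: Server) -> None:
    server.state = "ready"        # now selectable from the Control Panel

def intake(server: Server) -> Server:
    power_on(server)
    discover(server)
    if burn_in(server):
        mark_ready(server)
    return server

print(intake(Server(serial="SRV-0001")).state)   # -> ready
```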

And these operations have been optimised and refined for over 20 years.

Enter Project Gold-o-rack

So in June 2023, a small team was assembled to review, analyse, and build a new version of this system. We had three goals:

  • Provide customers with dedicated on-premises autonomous racks
  • Offer custom-built, plug-and-play Bare Metal Pods
  • Upgrade the automation and security of our own datacentres

And that’s how Project Gold-o-rack came to be—a tribute to Goldorak (Grendizer), the legendary 70s anime mecha that crushed its enemies with style. Like its namesake, our system is powerful, autonomous, and unstoppable.

Using open-source technology was a must, as we absolutely can’t do without transparency and community support. So we went with OpenStack, NetBox, and Grafana, developed our own network management and automation system, and much more.
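
As a tiny illustration of how these pieces fit together, here’s a minimal sketch that asks NetBox, through its standard REST API, which devices it tracks in a given rack (the URL, token, and rack ID are placeholders, not real values):

```python
# Minimal sketch: list the devices NetBox tracks in a given rack via its
# REST API. The URL, API token, and rack ID below are placeholders.
import requests

NETBOX_URL = "https://netbox.example.com"
API_TOKEN = "changeme"

def devices_in_rack(rack_id: int) -> list[str]:
    response = requests.get(
        f"{NETBOX_URL}/api/dcim/devices/",
        headers={"Authorization": f"Token {API_TOKEN}"},
        params={"rack_id": rack_id},
        timeout=10,
    )
    response.raise_for_status()
    return [device["name"] for device in response.json()["results"]]

print(devices_in_rack(rack_id=42))
```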

By September 2023—just three months later—we had a fully functional 24U rack, deployable and operational in 25 minutes. That’s not just fast—that’s insanely fast.

Security was a top priority since these racks would be installed in third-party datacentres. We quickly applied for SecNumCloud qualification, leveraging our existing compliance expertise.

Then, it hit us: why not offer this as a full-fledged product? And that’s how Bare Metal Pod came to be—dedicated, secure, and fully automated.

We structured the product into three key components:

  1. On-Prem Cloud Platform (OPCP): The autonomous rack, with its own KMS and encryption mechanisms
  2. Bare Metal Pod: Built on OPCP, hosted in our datacentres, and SecNumCloud-compliant
  3. Cloud Store: A software catalogue enabling automated deployment within the rack

In June 2024, OPCP was ready, just 12 months after the first meeting… and shortly after, we got the green light from ANSSI, allowing us to pursue the SecNumCloud qualification process.

And if you attended our Summit Keynote in November 2024, or watched it online, you definitely saw it live…

The Bare Metal Pod at the OVHcloud Summit 2024

What’s under the hood?

As an autonomous rack, it contains:

  • Power Distribution Units
  • Network equipment for internal and external connectivity
  • Servers, including a Pod Controller

There are 9 Bare Metal server models available, ranging from 16 to 256 cores and 128 GB to 2.5 TB of memory, with up to 792 TB of raw NVMe SSD storage and NVIDIA L4 or L40S GPUs, depending on your needs.

And the best part is that you can mix and match them to build and manage the perfect autonomous rack, while keeping full control over security and resources.

We’ve got a total of 607 models in Bare Metal Pod, enough for nearly any configuration and need. And with up to 1500 servers in a single Pod, the possibilities are endless.

And on top of these servers, we are building an automated software library: the Cloud Store. Enclosed in the Bare Metal Pod, the Cloud Store will offer the Pod admin a selection of operating systems, virtualisation platforms, and other software that can be pushed, installed, and configured automatically on the servers in the Pod. This includes built-in security, monitoring, and logging, integrated with the Pod’s monitoring tools.
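
To make that a bit more concrete, here’s a purely hypothetical sketch of what a Cloud Store catalogue entry and a deployment request could look like; none of these names or fields come from the actual product:

```python
# Purely hypothetical sketch of a Cloud Store catalogue entry and a
# deployment request; the names and fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class CatalogueEntry:
    name: str                            # e.g. an OS or virtualisation platform
    version: str
    monitoring_hooks: list[str] = field(default_factory=list)

@dataclass
class Deployment:
    entry: CatalogueEntry
    target_servers: list[str]            # servers inside the Pod

    def plan(self) -> str:
        targets = ", ".join(self.target_servers)
        hooks = ", ".join(self.entry.monitoring_hooks)
        return (f"Install {self.entry.name} {self.entry.version} on {targets}; "
                f"wire monitoring hooks: {hooks}")

entry = CatalogueEntry("example-hypervisor", "1.0", ["metrics", "logs"])
print(Deployment(entry, ["srv-01", "srv-02"]).plan())
```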

And herein¹ lies the main challenge: making sure an entire collection of software from various vendors can coexist and interact with a single open-source monitoring platform, a KMS, and an IAM without breaking anything…

Coming up next…

That’s a wrap for now! In the next article, we’ll deep-dive into hardware, networking, and security. Stay tuned!

Some of the Bare Metal server options:

  • Scale A1 – A8: Equipped with 4th Gen Intel Xeon Gold or AMD EPYC 9004 series processors, these servers provide between 16 and 256 cores and 128 GB to 1 TB of DDR5 ECC RAM. They are suitable for:
    • Hosting SaaS and PaaS solutions
    • Virtualisation
    • Database hosting
    • Containerisation and orchestration
    • Confidential computing
    • High-performance computing
  • Scale-GPU 1 – 3: Featuring NVIDIA L4 GPU cards (x2 or x4) and up to 1.2 TB of DDR5 ECC RAM, these servers are ideal for:
    • 3D modelling
    • Media streaming
    • Virtual Desktop Infrastructure (VDI)
    • Data inference
  • HGR-HCI I1 – I4: With dual 5th Gen Intel Xeon Gold or 4th Gen AMD EPYC 9004 series processors, these servers provide between 16 and 72 cores and up to 2.5 TB of DDR5 ECC RAM. They are suitable for:
    • Hyperconverged infrastructure
    • Virtualisation
    • Database hosting
    • Containerisation and orchestration
    • Confidential computing
    • High-performance computing
  • HGR-SDS 1 – 2: Equipped with dual 5th Gen Intel Xeon Gold processors, these servers offer between 16 and 48 cores and up to 1.5 TB of DDR5 ECC RAM. They are ideal for:
    • Software-defined storage solutions
    • Object storage solutions
    • Big data
    • Database hosting
  • HGR-STOR 1 – 2: Featuring a 5th Gen Intel Xeon Gold processor with 36 cores and up to 512 GB of DDR5 ECC RAM, these servers are designed for:
    • Archiving
    • Database hosting
    • Backup and disaster recovery plans
  • HGR-AI-2: Equipped with NVIDIA L40S GPU cards (x2 or x4) and up to 2.3 TB of DDR5 ECC RAM, these servers are optimised for:
    • Machine learning
    • Deep learning

(And many other options… you get the idea.)

  1. My editor liked the word and I found it cool too. https://www.collinsdictionary.com/dictionary/english/herein ↩︎