Web hosting — why we decided to migrate three million websites

Have you migrated a website before? If you have, or will need to migrate websites regularly, you’ll be familiar with the difficulties associated with this kind of operation.

To put it in the most basic terms, this operation usually involves six steps (a simplified code sketch follows the list):

  • purchasing and configuring the destination infrastructure
  • testing the new infrastructure by importing data and website code
  • shutting down the website, or putting it in read-only mode, so that no new data is written to the old infrastructure during the migration
  • deploying code on the new infrastructure, and importing data into the new databases
  • modifying the source code to adapt it to the new infrastructure (database identifiers, connection to external APIs, etc.)
  • once the website is working on the new infrastructure, redirecting traffic to make it available again (modifying DNS zones, updating the load balancer’s back-end, etc.)
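For illustration purposes, here is a minimal sketch of steps three to six as a Python runbook. The hostnames, paths, database name, and the DNS update at the end are all hypothetical placeholders, and a real migration would add verification and a rollback plan at every step.

```python
import subprocess

# Hypothetical hosts and database -- replace with your own infrastructure.
OLD_HOST = "old-web.example.net"
NEW_HOST = "new-web.example.net"
DB_NAME = "website_db"

def run(cmd):
    """Run a command and stop the migration if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Step 3: put the site in read-only mode so no writes are lost.
run(["ssh", OLD_HOST, "touch /var/www/.maintenance"])

# Step 4: copy the website code, then dump and import the database.
run(["rsync", "-az", f"{OLD_HOST}:/var/www/", f"{NEW_HOST}:/var/www/"])
dump = subprocess.run(["ssh", OLD_HOST, f"mysqldump {DB_NAME}"],
                      check=True, capture_output=True).stdout
subprocess.run(["ssh", NEW_HOST, f"mysql {DB_NAME}"], input=dump, check=True)

# Step 5: adapt the configuration to the new infrastructure, e.g. the
# database identifiers mentioned above (illustrative sed command).
run(["ssh", NEW_HOST,
     "sed -i 's/db.old.internal/db.new.internal/' /var/www/config.php"])

# Step 6: redirect traffic, e.g. by updating the DNS zone through your
# provider's API (hypothetical call, shown as a comment only):
# dns_client.update_record("www.example.com", "A", new_ip)
```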

Depending on the website’s source code and how complex the infrastructure is, these steps may vary in complexity. In practice, the complexity largely depends on how your website writes data. Is data stored in a database? Does your website keep a cache in local files? All of this is defined in the source code of your webpages.

Web hosting: migrating 3 million websites

Just a year ago, in March 2018, OVH launched a major project: migrating all the web hosting and email customers hosted in its legacy Paris datacentre to a new datacentre. To organise the project, the migration was split into two parts, managed by two separate teams: web hosting for the first team, and email for the second. Today, we’ll be focusing on the web hosting migration, but the email migration team will also discuss their process on our technical blog.

For web hosting, this project involves migrating three million different websites, hosted on 7,000 servers in Paris. Some of these websites have been running since 1999! It is one of OVH’s longest-running activities. So why migrate them when they’ve been working fine in Paris for nearly 20 years? To understand all of the challenges we face, we’ll need to delve into the history of this service.

A brief history of our web hosting platform

When Octave founded OVH in 1999, internet access was still not widely available. The company’s first activity was website hosting. What seems simple now was not so simple at the time: you needed good network connections, web servers that kept running, and proper configuration. It was hard to find people with the technical knowledge or the resources to do this.

P19 construction and expansion

In the early 2000s, OVH had the opportunity to acquire a building in the 19th arrondissement of Paris. The P19 building had good access to electricity and internet networks, so it could provide web and email hosting services to a large number of customers. For a while, it was OVH’s only datacentre.

In P19, OVH didn’t just offer web hosting. The datacentre also hosted dedicated servers. Both activities quickly gained popularity, and in the late 2000s, OVH began building many new datacentres in Roubaix, then Strasbourg, Beauharnois (Canada), Gravelines, and further afield.

Every time we built a new datacentre, we gained more experience, which helped us improve logistics and maintenance. These new datacentres were much larger than our site in Paris, and gave us the space we needed to accelerate the development of many innovative technologies and solutions, like water-cooling, our own production line for servers and racks, and a cooling system for server rooms that didn’t involve air-conditioning.

How the web has developed since 1999

The internet has changed dramatically since 1999. From our point of view as a hosting provider, we have observed three developments over time:

  • 1999 -> 2005: The birth of the web. Static websites were being set up in HTML. This was when blogs started to emerge. But this technology was only available to people who knew how to use HTML and FTP clients, even though FrontPage helped a lot of people get started.
    To work, these websites included data directly in the code. Web hosting was quite simple: the user needed storage space and a web server, whose sole purpose was to retrieve the webpage from the storage space and send it.
  • 2006 -> 2013: Web 2.0 — the social network and database revolution. Websites became dynamic, and could display custom pages, depending on the user. This was when the discussion forums, blog platforms, and social networks that are still so popular today first began to emerge.
    Dynamic websites were a revolution for web hosting providers; code and data were now stored in two separate locations. This meant that a page needed to be generated before being sent to the end user. The role of the web server changed: it generated these pages on request, mainly using the PHP language. Database servers had to be added for these websites, as well as computing power for the web servers.
  • 2014 -> today: JavaScript has grown in power, helping developers build complex web applications in browsers, and significantly improving the experience of web users. This change was made possible by the spread of the internet onto our smartphones. A large number of services requiring web access could be launched as a result.
    Technically, this means that usage patterns are changing: users visit websites more often, increasing the volume of data created and the complexity of processing it. The use of disk space and of the resources needed to generate web pages is continuously increasing.

We have very quickly adapted our solutions to respond to these changes. We have offered new databases, increased our storage space, provided CDN services, and much more.

But the rapid growth in the number of users and in the resources consumed by our web hosting plans was filling up our datacentre. Due to both the natural growth of the service and the growing needs of the websites we host, in 2015 we realised that our Paris datacentre would be full by early 2017.

Web hosting deployment in Gravelines

Once we realised this, there was only one solution: to avoid a shortage of web hosting plans, we needed to host our websites in another datacentre. Our services in Paris had been industrialised to deliver hosting 24/7, manage 7,000 servers, and keep them operational, all based on OVH’s earliest technologies.

We could have chosen to maintain this industrialisation and apply it to Gravelines, but we decided to do something else. We decided to build a new technical architecture that would support growing needs in terms of performance, and above all, allow us to re-use other OVH products. It’s the famous “eat your own dog food” approach, applied and expanded to the scale of our web hosting plans.

We challenged the teams behind our own products (dedicated servers, the vRack private network, IP addresses, and the IPLB load balancers) to carry our customers’ infrastructures and traffic. By becoming one of our own biggest customers, we were able to identify and overcome many limitations, improving the response speed of our APIs, optimising our databases, and much more.

To minimise latency and meet geographic distribution requirements, we offer our customers a wide range of datacentres around the world. All of these datacentres were potential targets for the growth of our platform. For logistical reasons, we chose to launch a single new datacentre in Europe. And this has no real impact on the websites: the differences in latency between our datacentres are so small that they are barely perceptible for the hosted websites (the increase is just a few milliseconds, whereas generating a webpage takes a few hundred milliseconds).

To choose our new datacentre, we analysed our natural growth to work out our infrastructure requirements. Our infrastructure grows every week with new hardware deliveries, and we risked filling up our datacentres so quickly that our customers would no longer be able to rent dedicated servers and other OVH services there. Based on these criteria, only two datacentres met our infrastructure needs in 2016: Gravelines in Northern France, and Beauharnois in Canada. Since our platform is currently only deployed in Europe, we started working on Gravelines.

At the same time, we reviewed and optimised the hardware used to build our clusters, so that we could deliver higher performance. The innovations introduced in Gravelines have helped us further improve our platform’s availability.

The biggest challenge was to change the infrastructure without changing the service experience: we kept all the features, and all of the same graphical interfaces and APIs. Our goal was simply to renew the infrastructure, not the commercial products themselves.

This datacentre for web hosting was opened in July 2016. And since November that same year, all of our new hosting plans have been delivered there.

Every year, customers cancel their web hosting services because they no longer use them, or they’re migrating to other products, such as VPS solutions. As a result of this, the number of websites hosted in Paris has decreased gradually over the past three years. This helped us handle the increase in power required for the remaining websites, without increasing the capacity of our infrastructure in Paris.

Given the natural decline in the number of websites hosted at the datacentre, we decided it would be better to wait for most of our customers to cancel their services before we migrated the rest. But why migrate at all, when there are still three million websites left?

Why did we choose to migrate our datacentre?

To give Paris a new lease of life

There are several reasons why we’re starting this monumental undertaking. But the main reason is managing obsolescence.

Our infrastructure is based on physical resources housed in this datacentre: dedicated and virtualised servers (which are based on physical machines), network elements, and a cooling circuit (water-cooling and air conditioning). And in order for the infrastructure to remain available 24/7, we need to renew this hardware periodically.

For dedicated servers, it’s quite simple: if a server becomes faulty, it can simply be replaced with a new one. However, the servers built for Paris don’t benefit from the same technical and logistical improvements as our other datacentres, and they are becoming increasingly difficult to assemble, while the datacentre’s obsolescence requires us to renew hardware more and more often.

We have considered replacing these servers with next-generation models, but we would need to modify the architecture of entire rooms. And in order to achieve this without any impact on our platform’s availability, we would need space to build new rooms before migrating the servers one by one. In a building that is almost at full capacity, this would require emptying rooms.

Dedicated servers also need power, a network, and a cooling system to work. All of these elements are also managed by physical hardware: air conditioning and water-cooling to cool the machines, routers and switches for the network, and electrical transformers, UPS devices, and batteries for the power supply.

These physical infrastructures are redundant, and must also be replaced on a regular basis to avoid any downtime. If one access path fails, the second takes over. Switching between paths is also a routine operation for technicians, allowing them to carry out minor maintenance on hardware components.

The process of fully replacing these infrastructures with new ones is long and complex. Relying on a single access path for such a long time period was just not an option. Replacing them would have required setting up a third path, and then switching over to it when everything was ready. However, this would also mean that there would need to be space in the datacentre for all these operations.

After twenty years, our Paris datacentre has reached the end of its lifecycle. Large-scale work is required at every level, and this work requires space. That is the main reason behind the migration.

To increase website performance

With the new infrastructure in Gravelines, we are able to provide increased performance for our customers’ websites. Moreover, these new technologies have helped us deploy some additional features that we couldn’t deploy in Paris without renewing our infrastructure: HTTP/2, MySQL 5.6, and more.

Our customers can migrate their projects themselves, but web hosting plan migration is a tricky, delicate procedure. Many customers gave up on it.

Once the migration is complete, we will also be able to simplify our operational maintenance, using OVH standards exclusively. This will help us avoid carrying out specific operations in the Paris datacentre, reducing maintenance time and the risks of certain recurring manual operations.

How are we migrating so many websites?

As a web hosting provider, we mainly specialise in two areas — hosting data, and executing code.

Hosting data while maintaining its integrity over time is a complex operation, but it’s a relatively standardised field. Data is stored either on a file system (a standard) or in a database that uses a specific query language (MySQL 5.5 or MySQL 5.6). So we simply needed to reproduce an architecture that meets the same standards on the destination infrastructure, and migrate the data to it.
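Because the storage layer follows standards, checking that the destination reproduces them can itself be automated. Below is a minimal sketch, assuming hypothetical connection details and the third-party PyMySQL driver, that compares the MySQL server versions on both sides before any data is moved:

```python
import pymysql  # third-party driver: pip install pymysql

# Hypothetical connection details for the source and destination platforms.
SOURCE = {"host": "db.paris.internal", "user": "check", "password": "..."}
TARGET = {"host": "db.gravelines.internal", "user": "check", "password": "..."}

def server_version(params):
    """Return the MySQL server version string, e.g. '5.6.40'."""
    conn = pymysql.connect(**params)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT VERSION()")
            return cur.fetchone()[0]
    finally:
        conn.close()

src, dst = server_version(SOURCE), server_version(TARGET)
# The destination must speak at least the same SQL dialect (5.5 or 5.6)
# as the source, or queries that worked before could start failing.
if [int(x) for x in dst.split(".")[:2]] < [int(x) for x in src.split(".")[:2]]:
    raise SystemExit(f"destination MySQL {dst} is older than source {src}")
```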

Code execution is more complex. It’s very difficult to infer source code’s behaviour in its environment without at least interpreting it: it can throw an error on a specific version of a PHP extension, check for a local folder, and much more. For example, many of our customers store the absolute path to their files in their code, which means we cannot change the storage location of the files without affecting their service.
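At this scale, the realistic approach is to detect risky patterns automatically rather than read the code. Here is a minimal sketch of such a check; the document root and the path pattern are purely illustrative:

```python
import re
from pathlib import Path

# Illustrative pattern: quoted absolute paths under the old hosting layout.
OLD_PATH = re.compile(r"""["'](/home/[\w./-]+)["']""")

def find_hardcoded_paths(docroot):
    """Yield (file, path) pairs for absolute paths found in PHP sources."""
    for php_file in Path(docroot).rglob("*.php"):
        try:
            text = php_file.read_text(errors="replace")
        except OSError:
            continue  # unreadable file: skip rather than abort the scan
        for match in OLD_PATH.finditer(text):
            yield php_file, match.group(1)

for file, path in find_hardcoded_paths("/var/www/example"):
    print(f"{file}: hard-coded absolute path {path}")
```

A scan like this only flags candidates for review; as noted above, the code’s actual behaviour can’t be fully known without executing it.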

When you host services for a few customers, you can easily help them by detecting code that will no longer work after migration. But on a larger scale, this is difficult. Imagine doing it for the code of more than three million websites!

Asking our customers to change their source code was not a viable solution. Even assuming all of our customers would read an email about it, some would not make the changes, due to a lack of time or technical knowledge, or would simply forget to do so. We would be causing problems for our customers and their websites, and for us, that was just not an option.

For nearly a year, we developed several migration scenarios and explored them from every angle, with two objectives:

  • the websites must remain functional after the migration is complete, without customers needing to make any changes to their code
  • the impact on service availability during the migration process must be minimal

We implemented and tested some of these scenarios between March and June 2018. This initial work helped us choose the best migration plan. To complete the migration without affecting the services, we needed to adapt part of our infrastructure and information system: creating an inter-datacentre network tunnel, changing our load balancers and our database platform, adding SQL proxy servers, and more.
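To illustrate the role of an SQL proxy in such a setup: it lets the database addresses that customers’ code already uses keep working while the actual servers move. The sketch below is a toy TCP relay under that assumption (the addresses are hypothetical); a production proxy would also handle connection pooling, TLS, and failover:

```python
import socket
import threading

LISTEN = ("0.0.0.0", 3306)                  # old address websites still use
BACKEND = ("db.gravelines.internal", 3306)  # hypothetical migrated database

def pipe(src, dst):
    """Copy bytes one way until the connection closes."""
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client):
    """Relay traffic in both directions between website and database."""
    backend = socket.create_connection(BACKEND)
    threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
    threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN)
server.listen()
while True:
    conn, _ = server.accept()
    handle(conn)
```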

We thoroughly tested the chosen scenario. To ensure that this procedure would be carried out with minimal impact, we repeated it many times under real conditions on internal clusters, using data sets based on production platforms.

Want to know more details about our migration plan? The topic is so vast that we can’t include everything in a single blog post, so we’ve decided to cover our migration in a series of posts.

Stay tuned for our upcoming posts, which will give you a behind-the-scenes view of the largest-scale website migration carried out by Europe’s biggest hosting provider!
