A 5-minute guide to cloud hosting for startups and web designers

Cloud hosting is used by almost everyone today, from the newest startups to some of the biggest players in the industry (Pinterest, Airbnb and Netflix all run on publicly available clouds). Before the advent of cloud hosting, you had to buy or rent your own servers and manage them yourself. That is still a viable approach if you have very specific needs, but for most use cases the cloud works remarkably well, though not without a couple of caveats.

Last updated: 18 Aug 2022

Reading time: 4 min

Multitenancy, virtualization and over-provisioning

What you need to know before you start using cloud hosting

Multitenancy is a way of saying that you're not the only one using a piece of hardware. Cloud hosting providers use virtualization to achieve multitenancy and become more cost-efficient. Your virtual cloud server (hereafter referred to as a guest) is given a slice of the available resources (memory, CPU, etc.). Since most guests won't use all of their resources all the time, it's common to do what's called over-provisioning: allocating more resources than are actually available.

Let's assume we have a server with 8 CPU cores and 16 guests, where each guest is allocated a full CPU core. Most of the time this will be fine, but if all guests decide to max out their CPU at the same time, they will only get access to 0.5 cores each. A worst-case scenario is another guest causing a lot of disk I/O (for example through swapping), which might make your guest grind to a halt.
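
To make the arithmetic concrete, here is a tiny Python sketch of that worst-case calculation, using the hypothetical numbers from the example above:

```python
# Rough over-provisioning math for the example above (hypothetical numbers).
physical_cores = 8         # cores actually present on the host
guests = 16                # virtual servers sharing the host
allocated_per_guest = 1.0  # cores promised to each guest

overprovision_ratio = guests * allocated_per_guest / physical_cores
worst_case_per_guest = physical_cores / guests  # if every guest maxes out at once

print(f"Over-provisioning ratio: {overprovision_ratio:.1f}x")  # 2.0x
print(f"Worst-case cores per guest: {worst_case_per_guest}")   # 0.5
```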

Hosting providers generally don't tell you how much they over-provision. It helps to be aware that over-provisioning exists and that the activity of others might affect your servers. Always have more capacity available than you need, in case someone else starts using a lot of the shared resources.

If you don't want to deal with multitenancy, the only option is to go for dedicated servers. An interesting initiative from Rackspace is OnMetal [1], which aims to give you the flexibility of cloud servers on dedicated hardware. There are also what are called hybrid clouds [2], which allow you to have both cloud and dedicated servers in the same network.

Redundancy and load balancing

Dealing with increasing traffic

If you're starting out small and some downtime is acceptable, a single server might be enough.

As soon as you start growing you might want to add another server to gain some redundancy. I would recommend keeping the load on each server below 1 - (1 / number of servers). With two servers that means staying below 50% (1 - (1 / 2)), and with three servers below 67% (1 - (1 / 3)). The reason is that you should always be prepared for at least one of your servers becoming unavailable, and the remaining ones need to be able to handle the increased load.
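
The same rule of thumb, expressed as a couple of lines of purely illustrative Python:

```python
def max_safe_load(num_servers: int) -> float:
    """Highest average load per server that still lets the remaining
    servers absorb the traffic if one of them goes down."""
    return 1 - (1 / num_servers)

for n in (2, 3, 4):
    print(f"{n} servers: keep load below {max_safe_load(n):.0%}")
# 2 servers: keep load below 50%
# 3 servers: keep load below 67%
# 4 servers: keep load below 75%
```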

These servers should be placed behind a load balancer so that traffic can be distributed evenly between them. A load balancer is definitely something you want if you're serious about high availability; most providers offer one as a service, and if you'd rather set up the load balancer yourself, HAProxy [3] is a popular choice.

You'll also need to build your application so that it can run on multiple servers. You can't, for example, keep session information stored locally on each server, simply because it's impossible to tell which server will end up handling a given request. One option is to "lock" each client to a specific server using a cookie (so-called sticky sessions), but this isn't ideal because there's still the risk of that particular server becoming unavailable.
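
A common way around this is to keep session data in a shared store that every application server can reach, rather than on any single server. Below is a minimal sketch using Redis; the host name, key layout and timeout are assumptions made purely for illustration, not a prescription.

```python
import json
import redis

# Shared session store reachable from every application server.
# "sessions.internal" is a hypothetical host name used for illustration.
store = redis.Redis(host="sessions.internal", port=6379)

SESSION_TTL = 3600  # seconds until an idle session expires

def save_session(session_id: str, data: dict) -> None:
    # Any server can write the session; it expires automatically.
    store.setex(f"session:{session_id}", SESSION_TTL, json.dumps(data))

def load_session(session_id: str) -> dict | None:
    # Whichever server receives the next request can read it back.
    raw = store.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```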

Managed servers and databases

Easier administration with managed servers/databases

One of the main reasons for choosing cloud servers is to make administration easier and your infrastructure more flexible. This is why offerings like Amazon RDS [4] (managed databases) are so interesting. Redundancy matters not just for your application servers but for your database as well, and if you haven't done it before, it's far from trivial to set up a correctly replicating PostgreSQL cluster with failover.

I would look at these supplementary services as a great way to save time and effort, especially for developers who don't have the expertise to run larger infrastructures. My only word of caution would be to avoid vendor lock-in as much as possible. Amazon RDS, for example, uses standard technologies (MySQL, PostgreSQL, Oracle), so if you ever want to move to another provider that can be done with relative ease. Amazon DynamoDB [5], on the other hand, is proprietary, so if you end up using it, moving away from Amazon becomes much harder. The community around such a niche product is also usually much smaller.
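
Because RDS speaks standard PostgreSQL (or MySQL), the client code doesn't care where the database actually runs. As a rough sketch, the snippet below connects with psycopg2; the endpoint and credentials are placeholders, and pointing them at any other PostgreSQL server would work just the same.

```python
import psycopg2

# The connection details are the only RDS-specific part; swap in the
# hostname of any other PostgreSQL server and the rest stays the same.
conn = psycopg2.connect(
    host="mydb.example.rds.amazonaws.com",  # placeholder endpoint
    dbname="app",
    user="app_user",
    password="change-me",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
```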

Cloud hosting costs

The bottom line - how much will this cost me?

Cost is always a key question. Cloud services tend to be more expensive than having your own dedicated servers [6]. What they add in cost they make up for in ease of use and flexibility. My advice would be to start out with the smaller instances and then scale up as required. Hopefully you'll have a good revenue stream coming in by the time you need those really powerful servers.

References

[1] Rackspace OnMetal - http://www.rackspace.com/cloud/servers/onmetal/

[2] Rackspace Hybrid Cloud - http://www.rackspace.com/cloud/hybrid/rackconnect/

[3] HAProxy - http://www.haproxy.org/

[4] Amazon RDS - https://aws.amazon.com/rds/

[5] Amazon DynamoDB - https://aws.amazon.com/dynamodb/

[6] Affordable dedicated servers - https://www.hetzner.de/en/hosting/produktmatrix/rootserver