200 Deployment


New Deployments

Looking for a checklist for new deployments? You will find it here: Adding new deployments.

Servers

As described in 100 Infrastructure basics, we always have two independent servers running your application, which is why your deployment process has to accommodate two deployment targets.

If you are an opscomplete customer you have your own servers, so the server hostnames could look like this:

app01-prod.acme.makandra.de
app02-prod.acme.makandra.de

User

For deploying and running your services, we create dedicated users for you. Depending on the circumstances, they are named after your project or your company.

If you have only one deployment, we would usually name the user for the staging deployment deploy-acme_s, and the one for production deploy-acme_p.

We allow user logins via SSH public key authentication only; no passwords. Please use ssh-keygen to generate a key pair and send us the public key.
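For example, a key pair could be generated like this (the ed25519 key type, file name, and comment below are just one common choice, not a requirement on our side):

```shell
mkdir -p ~/.ssh

# Sketch: generate an ed25519 key pair. The empty passphrase (-N "") keeps
# this example non-interactive; in practice you should choose a passphrase.
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519_acme -C "deploy@acme"

# Send us only the public half:
cat ~/.ssh/id_ed25519_acme.pub
```

The private key (the file without the .pub extension) never leaves your machine.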

Good to know

Since you can log into your deployment server via SSH, we have also made it easy to reach your deployment directory by typing appdir.

deploy-acme_s@app01-prod.acme.makandra.de:~$ appdir
deploy-acme_s@app01-prod.acme.makandra.de:/var/www/www.acme.com$

Directories

Your application lives in /var/www/www.acme.com on the application servers, alongside a couple of other important directories. The directories you find in your deployment directory are based on the Capistrano directory structure.
The following shows a freshly deployed directory structure:

root@server:/var/www/www.acme.com# tree -L 3
.
├── current -> /var/www/www.acme.com/releases/19700101000000
├── log
├── releases
│   └── 19700101000000
└── shared
    ├── config
    │   ├── database.yml
    │   └── secrets.yml
    ├── log
    ├── pids
    ├── public
    │   └── system -> /gluster/shared/www.acme.com/shared/public/system
    ├── storage -> /gluster/shared/www.acme.com/shared/storage
    └── system -> /gluster/shared/www.acme.com/shared/system

The current directory is a symlink that always points to the last successful deployment done by Capistrano. In 201 Capistrano 3 for your makandra Deployment you will find a guide on how to set up Capistrano.
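Conceptually, each deploy creates a new timestamped release directory and then atomically repoints the current symlink. This simplified sketch (not what Capistrano literally runs, and using /tmp/demo-app as a stand-in for /var/www/www.acme.com) shows the mechanism:

```shell
# Stand-in for the deployment directory:
mkdir -p /tmp/demo-app/releases/19700101000000 /tmp/demo-app/releases/19700101000100
cd /tmp/demo-app

# ln -sfn replaces the symlink in one step, so "current" never dangles
# while a deploy (or rollback) is in progress:
ln -sfn "$PWD/releases/19700101000100" current

readlink current   # -> /tmp/demo-app/releases/19700101000100
```

A rollback works the same way: the symlink is simply repointed at an older release directory.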

The releases directory contains previous deployments you can roll back to in case something goes wrong. You can read more about those directories in the Capistrano documentation.

The shared directory contains three more directories (config, storage and system) which we can put on GlusterFS, a filesystem shared between the application servers, if this is something your application benefits from.

We also create a config/database.yml and an initial config/secrets.yml for you.

The config/database.yml is managed by us; any manual changes will be overwritten by our configuration management.

The config/secrets.yml is created once for Ruby on Rails projects with a generated secret_key_base entry, but is not managed by us. You are free to change it to meet your needs, or ask us if you want us to manage its contents.
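As a rough sketch, such an initial file might look like this (the key value below is a placeholder, not a real secret, and the exact layout depends on your Rails version):

```yaml
# config/secrets.yml -- generated once, then yours to manage.
production:
  secret_key_base: "<generated hex string, placeholder here>"
```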

Synced directories

Check carefully which directories are symlinks pointing to glusterfs (/gluster/shared/). Only these directories are shared among the servers.

Databases

If your application uses a database (MySQL or PostgreSQL, others upon request), we automatically deploy a database.yml which includes the host and credentials to access it.

We proxy the database port to the application servers, so your application does not need to know where the database server resides, what its hostname is, or even which database instance is currently active. You can connect to a single port on localhost and the proxy will take care of fail-overs.
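In practice this means the generated database.yml points at localhost. A sketch of what such a file might contain (adapter, names, and port are illustrative; the actual values are deployed and managed by us):

```yaml
# Illustrative sketch of a proxied database configuration.
production:
  adapter: mysql2
  host: 127.0.0.1      # local proxy endpoint, not the database server itself
  port: 3306           # the proxy forwards to whichever instance is active
  database: acme_p
  username: acme_p
  password: "<managed by configuration management>"
```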

Alternative/additional databases

It is possible to have multiple Redis instances configured as well.

Loadbalancing

Requests reach your application via our load balancers. The default method we use is called least_conn, which means a client accessing your application is always directed to the application server with the fewest open connections, providing even load distribution.
Previously our default was ip_hash, which creates a hash of the client IP and always directs the same client to the same application server. This ensured that, if for example your application has sessions that are not synchronized between application servers, the user always hits the server their active session exists on.
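In nginx terms, the difference between the two methods looks roughly like this (a simplified sketch with example hostnames; our actual load balancer configuration is managed by us and differs in detail):

```nginx
# least_conn: pick the backend with the fewest open connections (current default)
upstream acme_app {
    least_conn;
    server app01-prod.acme.makandra.de;
    server app02-prod.acme.makandra.de;
}

# ip_hash: pin each client IP to one backend ("sticky" sessions, previous default)
# upstream acme_app {
#     ip_hash;
#     server app01-prod.acme.makandra.de;
#     server app02-prod.acme.makandra.de;
# }
```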

Please contact us if your application needs sticky sessions, or if you have questions regarding this topic!

Virtual Host Configuration

We use our opinionated configuration based on experience and best practices, which is why we handle requests to /assets as well as /packs differently with regard to caching: we set Cache-Control "public" with an expiry time of one year. The same applies to static files whose names end with numeric characters. As with almost everything, you can request this to be changed to a configuration suitable for your deployment.
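A sketch of what such a caching rule can look like in nginx (simplified and illustrative; our actual virtual host configuration differs in detail):

```nginx
# Fingerprinted assets under /assets and /packs change their file names on
# every deploy, so they can be cached aggressively for a year.
location ~ ^/(assets|packs)/ {
    add_header Cache-Control "public";
    expires 1y;
}
```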

Last edit by Stefan Langenmaier
License
Source code in this card is licensed under the MIT License.
Posted by Thomas Eisenbarth to opscomplete (2017-03-31 11:31)