Deployment
New Deployments
Looking for a checklist for new deployments? You will find it here: Adding new deployments
Servers
As you should already know from 100 Infrastructure basics, we always run two independent servers for your application, which is why your deployment process has to accommodate two deployment targets.
If you are an opscomplete customer, you have your own servers, so the server hostnames could look like this:

```
app01-prod.acme.makandra.de
app02-prod.acme.makandra.de
```
User
For the deployment and for running other services, we create users on our servers for you. Depending on the circumstances, they are named after your project or your company's name.
To give you an idea of what that looks like: if you have only one deployment, we usually name the user for the staging deployment `deploy-acme_s` and the one for production `deploy-acme_p`.
We allow user logins via SSH Public Key Authentication (https://www.ssh.com/ssh/public-key-authentication) only. Please use ssh-keygen to generate a keypair and send us the public key.
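A keypair can be generated like this; the key file name and the comment are only examples, adjust them to your setup:

```shell
# Generate an Ed25519 keypair; path and comment are examples.
# -N "" sets an empty passphrase here for brevity; consider using a real one.
ssh-keygen -t ed25519 -C "deploy@acme" -f ~/.ssh/deploy-acme -N ""

# Send us only the contents of the public key file:
cat ~/.ssh/deploy-acme.pub
```

The private key (`~/.ssh/deploy-acme` in this example) stays with you and is never sent to us.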
Good to know
Since you can log into your deployment server via SSH, we also made it easier to reach your deployment directory: just type `appdir`.

```shell
deploy-acme_s@c23.makandra-3.makandra.de:~$ appdir
deploy-acme_s@c23.makandra-3.makandra.de:/var/www/www.acme.com$
```
Paths
Your application lives in `/var/www/www.acme.com` on the application servers, alongside a couple of other important directories. The directories you find in your deployment directory are based on the Capistrano directory structure.
The following shows a freshly deployed directory structure:
```shell
root@server:/var/www/www.acme.com# tree -d -L 2
.
├── current -> /var/www/www.acme.com/releases/19700101000000
├── log
├── releases
│   └── 19700101000000
└── shared
    ├── config
    │   ├── database.yml
    │   └── secrets.yml
    ├── log
    ├── pids
    ├── public
    │   └── system -> /gluster/shared/www.acme.com/shared/public/system
    ├── storage -> /gluster/shared/www.acme.com/shared/storage
    └── system -> /gluster/shared/www.acme.com/shared/system
```
The `current` directory is a symlink that always points to the last successful deployment done by Capistrano. In 201 Capistrano 3 for your makandra Deployment you will find a guide on how to set up Capistrano.
The next directory, `releases`, contains previous deployments you can roll back to in case something goes wrong. You can read more about those directories here.
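To see which release is currently live and what you could roll back to, you can inspect the directories directly (the path is the example from above):

```shell
cd /var/www/www.acme.com
ls releases/        # one timestamped directory per deployment
readlink current    # the release the current symlink points to
```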
The `shared` directory contains three more directories (`config`, `storage` and `system`) which we can put on GlusterFS, a filesystem that is shared between the application servers, if that is something your application benefits from.
We also create a `config/database.yml` and a `config/secrets.yml` for you.
The `config/database.yml` is managed by us, and any changes to it will be overwritten.
The `config/secrets.yml` is created for Ruby on Rails projects with a generated `secret_key_base` entry and is not managed by us, so you can change it to meet your needs.
Synced directories
Check carefully which directories are symlinks pointing to GlusterFS (`/gluster/shared/`). Only these directories are shared among the servers.
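One way to check this on the server is to resolve the symlinks yourself (the path is the example from above):

```shell
cd /var/www/www.acme.com/shared
# Show all entries together with their symlink targets:
ls -l
# List only the entries that point into the shared GlusterFS tree:
find . -maxdepth 2 -type l -lname '/gluster/shared/*'
```

Anything this `find` does not list is local to each application server and will not be visible on the other one.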
Databases
If your application uses a database (MySQL or PostgreSQL; others upon request), we automatically deploy a `database.yml` that includes the host and credentials to access it.
Optionally, multiple Redis instances can be configured as well.
Loadbalancing
Requests reach your application via our load balancers. The default method we use is called IP hash, which means that a client accessing your application is always directed to the same application server. This ensures that a user always hits the server their active session lives on, which matters if, for example, your application has sessions that are not synchronized between application servers.
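The idea behind IP hash can be sketched as follows. This is only an illustration of the principle, not the actual algorithm our load balancers use: hashing the client IP yields a deterministic server index, so the same client always lands on the same backend.

```shell
# Conceptual sketch: hash a client IP to one of 2 backend servers.
# The IP address is an example; cksum stands in for the real hash function.
ip="203.0.113.7"
hash=$(printf '%s' "$ip" | cksum | cut -d' ' -f1)
echo "server index: $((hash % 2))"
```

Because the hash of a fixed IP never changes, repeated requests from the same client map to the same index.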
If you are not limited by this constraint, we can use a round-robin method as well. Please contact us in that case, or if you have any questions regarding this topic!
Virtual Host Configuration
We use an opinionated configuration based on experience and best practices, which is why we handle requests to `/assets` and `/packs` differently with regard to caching: we send `Cache-Control "public"` with an expiry time of one year. The same goes for static files whose names end in numeric characters. As with most everything, you can request a change to a configuration suitable for your deployment.
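You can verify the caching behavior from the outside with curl; the hostname and the asset path below are examples:

```shell
# Fetch only the response headers for a fingerprinted asset and
# filter for the Cache-Control header:
curl -sI https://www.acme.com/assets/application-1a2b3c4d.css | grep -i '^cache-control'
# A one-year expiry would show up as something like:
#   Cache-Control: public, max-age=31536000
```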