Looking for a checklist for new deployments? You will find it here: Adding new deployments.
As described in 100 Infrastructure basics, we always run your application on two independent servers, which is why your deployment process has to accommodate two deployment targets.
If you are an opscomplete customer you have your own servers, so the server hostnames could look like this:
For deploying and running your services we create users for you. Depending on the circumstances, they are named after your project or your company.
If you have only one deployment, we would usually name the user for the staging deployment deploy-acme_s, and for production it would be deploy-acme_p.
We allow user logins via SSH keys only. No passwords. Please use ssh-keygen to generate a key pair and send us the public key.
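Generating a key pair could look like this (a minimal sketch; the key type, comment, and file path are common choices, not requirements on our side):

ssh-keygen -t ed25519 -C "you@example.com" -f ~/.ssh/id_ed25519
# The public key to send us is then:
cat ~/.ssh/id_ed25519.pub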
Since you can log into your deployment server via SSH, we also made it easier to reach your deployment directory: simply type appdir.

user@server:~$ appdir
user@server:/var/www/www.acme.com$
Your application will live in /var/www/www.acme.com on the application servers, together with a couple of other important directories. The directories you find in your deployment dirs are based on the Capistrano directory structure.
The following shows a freshly deployed directory structure:

root@server:/var/www/www.acme.com# tree -d -L 2
.
├── current -> /var/www/www.acme.com/releases/19700101000000
├── log
├── releases
│   └── 19700101000000
└── shared
    ├── config
    │   ├── database.yml
    │   └── secrets.yml
    ├── log
    ├── pids
    ├── public
    │   └── system -> /gluster/shared/www.acme.com/shared/public/system
    ├── storage -> /gluster/shared/www.acme.com/shared/storage
    └── system -> /gluster/shared/www.acme.com/shared/system
The current directory is a symlink that always points to the last successful deployment done by Capistrano. In 201 Capistrano 3 for your makandra Deployment you will find a guide on how to set up Capistrano.
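To illustrate what happens on a deploy (a hypothetical session, assuming a standard Capistrano 3 setup with a configured staging stage):

bundle exec cap staging deploy
# Afterwards, on the server, current points at the newest release:
readlink /var/www/www.acme.com/current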
In the next directory, releases, there are previous deployments you can roll back to in case something goes wrong. You can read more about rollbacks in the Capistrano documentation.
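A rollback could then look like this (a sketch, assuming the standard Capistrano 3 deploy:rollback task and a production stage):

bundle exec cap production deploy:rollback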
In the directory shared there are three more directories (public/system, storage and system) which we can put on a filesystem that is shared between the application servers, if this is something your application benefits from.
We also create a config/database.yml and an initial config/secrets.yml for you.
config/database.yml is managed by us; all manual changes will be overwritten by our configuration management.
config/secrets.yml will be created once for Ruby on Rails projects with a generated secret_key_base entry, but is not managed by us. You are free to change it to meet your needs, or ask us if you want us to manage its contents.
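To peek at the generated entry (a hypothetical check; the path follows the directory structure shown above):

grep secret_key_base /var/www/www.acme.com/shared/config/secrets.yml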
Check carefully which directories are symlinks pointing to GlusterFS (/gluster/shared/). Only these directories are shared among the servers.
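One way to list them (a sketch using GNU find; adjust the path to your own deployment directory):

find /var/www/www.acme.com/shared -maxdepth 2 -type l -ls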
If your application uses a database (MySQL or PostgreSQL, others upon request) we automatically deploy a database.yml which includes the host and credentials to access it.
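A quick connectivity check could look like this (purely illustrative; HOST, USER and DATABASE stand for the values from your database.yml, assuming MySQL):

mysql -h HOST -u USER -p DATABASE -e 'SELECT 1'
# You will be prompted for the password from database.yml.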
It is possible to have multiple Redis instances configured as well.
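For example, with two hypothetical instances listening on ports 6379 and 6380 (the ports are assumptions for illustration, not guaranteed defaults):

redis-cli -p 6379 ping
redis-cli -p 6380 ping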
Requests get to your application via our load balancers. The default method we use is called least connections, which means a client accessing your application is always directed to the application server that currently has the fewest open connections, providing great load balancing.
Previously our default was source IP hashing, which created a hash of the client IP and always directed the same client to the same application server. This ensured - if, for example, your application has sessions that are not synchronized between application servers - that the user always hits the server their active session exists on.
Please contact us if you need this behavior, or if you have questions regarding this topic!
We are using our opinionated configuration based on experience and best practices, which is why we handle requests to /assets as well as /packs differently in regard to caching. We set Cache-Control "public" with an expiry time of one year. The same goes for static files whose names end with numeric characters. As with almost everything, you can request this to be changed to a configuration suitable for your deployment.