.. _usage: Usage ===== Majic Ansible Roles are targeted at sysadmins who wish to deploy services for their own, small-scale use. This chapter gives a simple tutorial-like set of instructions for using all of the roles available. .. contents:: :local: Overview -------- There is a number of different roles that can prove useful for setting-up a small infrastructure of your own. Some roles are suited for one-off operations during installation, like the ``preseed`` and ``bootstrap``, while some are better suited for periodic runs for maintaining the users and integrity of the system. By the end of following the instructions, you will have the following: * Ansible server, used as controller for configuring and managing the remaining servers. * Communications server, providing the LDAP, mail, and XMPP services. * Web server, providing the web services. * Backup server, used for storing all of the backups. .. warning:: Majic Ansible Roles support *only* Python 3 - both on the controller side and on the managed servers side. It is important to make sure that both the controller Python virtual environment used for Ansible *and* the interpreter for remote servers are *both* set-up to use Python 3. Python 3 is specified explicitly during virtual environment creation and in ``ansible.cfg`` configuration file (``interpreter_python`` option under ``defaults`` section). Pre-requisites -------------- For the set-up outlined in this usage guide you'll need the following: * One server where Ansible will be installed at. Debian Bullseye will be installed on top of this server. The server will be set-up manually (this is currently out of scope for the *Majic Ansible Roles* automated set-up). * Three servers where the services will be set-up. All servers must be able to communicate over network with each-other, the Ansible servers, and with Internet. Debian Bullseye will be installed on top of this server as part of the usage instructions. * Debian Bullseye network install CD. * All servers should be on the same network. * IP addresses for all servers should be known. * Netmask for all servers should be known. * Gateway for all servers should be known. In case of the servers listed above, it might be safest to have them as VMs - this is cheapest thing to do, and simplest (who wants to deal with pesky hardware anyway?). Usage instructions assume the following: * Domain used for all servers is ``example.com``. If you wish to use a different domain, adjust the instructions accordingly. * Server hostnames are ``ansible``, ``comms``, ``www``, and ``bak`` (for Ansible server, communications server, web server, and backup server, respectively). Installing the OS on Ansible server ----------------------------------- Start-off by installing the operating system on the Ansible server: 1. Fire-up the ``ansible`` server, and boot from the network installation CD. 2. Select the **Install** option. 3. Pick **English** as language. 4. Pick the country you are living in (or whatever else you want). 5. Pick the **en_US.UTF-8** locale. 6. Pick the **American English** keymap. 7. Configure the network if necessary. 8. Set the hostname to ``ansible``. 9. Set the domain to ``example.com``. 10. Set the root password. 11. Create a new user. For simplicity, call the user **Ansible user**, with username **ansible**. 12. Set-up partitioning in any way you want. You can go for **Guided - use entire disk** if you want to keep it simple and are just testing things. 13. Wait until the base system has been installed. 14. 
Pick whatever Debian archive mirror is closest to you. 15. If you have an HTTP proxy, provide its URL. 16. Pick if you want to participate in package survey or not. 17. Make sure that at least the **standard system utilities** and **SSH server** options are selected on task selection screen. 18. Wait for packages to be installed. 19. Install the GRUB boot loader on MBR. 20. Finalise the server install, and remove the installation media from server. Installing required packages ---------------------------- With the operating system installed, it is necessary to install a couple of packages, and to prepare the environment a bit on the Ansible server: 1. Install the necessary system packages (using the ``root`` account):: apt-get install -y virtualenv virtualenvwrapper git python3-pip python3-dev libffi-dev libssl-dev 2. Set-up loading of ``virtualenvwrapper`` via Bash completions (using the ``root`` account):: ln -s /usr/share/bash-completion/completions/virtualenvwrapper /etc/bash_completion.d/virtualenvwrapper 3. Set-up the virtual environment (using the ``ansible`` account): .. warning:: If you are already logged-in as user ``ansible`` in the server, you will need to log-out and log-in again in order to be able to use ``virtualenvwrapper`` commands! :: mkdir ~/mysite/ mkvirtualenv -p /usr/bin/python3 -a ~/mysite/ mysite pip install -U pip setuptools pip install 'ansible~=2.9.0' dnspython .. warning:: The ``dnspython`` package is important since it is used internally via ``dig`` lookup plugin. Cloning the *Majic Ansible Roles* --------------------------------- With most of the software pieces in place, the only missing thing is the Majic Ansible Roles: 1. Clone the git repository:: git clone https://code.majic.rs/majic-ansible-roles ~/majic-ansible-roles 2. Checkout the correct version of the roles:: cd ~/majic-ansible-roles/ git checkout -b 7.0-dev 7.0-dev Preparing the basic site configuration -------------------------------------- Phew... Now that was a bit tedious and boring... But at least you are now ready to set-up your own site :) First of all, let's set-up some basic directory structure and configuration: 1. Create Ansible configuration file. .. warning:: Since Ansible 2.x has introduced much stricter controls over security of deployed Python scripts, it is recommended (as in this example) to use the ``pipelining`` option (which should also improve performance). This is in particular necessary in cases where the SSH user connecting to remote machine is *not* ``root``, but there are tasks that use ``become`` with non-root ``become_user`` (which is the case in Majic Ansible Roles). See `official documentation `_ and other alternatives to this. :file:`~/mysite/ansible.cfg` :: [defaults] roles_path=/home/ansible/majic-ansible-roles/roles:/home/ansible/mysite/roles force_handlers = True inventory = /home/ansible/mysite/hosts interpreter_python = /usr/bin/python3 [ssh_connection] pipelining = True 2. Create directory where retry files will be stored at (so they woudln't pollute your home directory):: mkdir ~/mysite/retry 3. Create the inventory file. :file:`~/mysite/hosts` :: [preseed] localhost ansible_connection=local [communications] comms.example.com [web] www.example.com [backup] bak.example.com 4. Create a number of directories for storing playbooks, group variables, SSH keys, X.509 artefacts (for TLS), and GnuPG keyring (we'll get to this later):: mkdir ~/mysite/playbooks/ mkdir ~/mysite/group_vars/ mkdir ~/mysite/ssh/ mkdir ~/mysite/tls/ mkdir ~/mysite/gnupg/ 5. 
Create SSH private/public key pair that will be used by Ansible for connecting to destination servers, as well as for some roles:: ssh-keygen -f ~/.ssh/id_rsa -N '' Protecting communications using TLS ----------------------------------- In order to protect the communications between users and servers, as well as between servers themselves, it is important to set-up and properly configure TLS for each role. *Majic Ansible Roles* mandates use of TLS wherever possible. In other words, *you must* have TLS private keys and certificates issued by some CA for all servers in order to be able to use most of the roles. The private keys and certificates are primarily meant to be generated *per service*, and that is the approach we will pursue here as well. TLS private keys should be ideally generated locally and kept in a safe environment (possibly encrypted until needed), while the X.509 certificates should be issued by a relevant certification authority. You can choose to roll-out your own CA, use one of the public CAs, or perhaps go for a mix of both. For the purpose of this guide, we'll set-up a small simple local CA to issue all the necessary certificates, and we'll generate the private keys and issue server certificates on the go as needed, storing them all under the ``~/mysite/tls/`` directory. So, let us make a slight detour to create a CA of our own: 1. First off, install a couple more tools on the Ansible server. We will be using ``certtool`` for our improvised CA needs (run this as ``root``):: apt-get install -y gnutls-bin 2. Create a template for the ``certtool`` so it would know what extensions and content to have in the CA certificate: :file:`~/mysite/tls/ca.cfg` :: organization = "Example Inc." country = "SE" cn = "Example Inc. Test Site CA" expiration_days = 1825 ca cert_signing_key crl_signing_key 3. Almost there... Now let us generate the CA private key and self-signed certificate:: certtool --sec-param high --generate-privkey --outfile ~/mysite/tls/ca.key certtool --template ~/mysite/tls/ca.cfg --generate-self-signed --load-privkey ~/mysite/tls/ca.key --outfile ~/mysite/tls/ca.pem 4. And just one more small tweak - we need to provide a truststore PEM file containing all CA certificates in the chain for services to be able to connect to each-other (where necessary). In this particular case we have a super-simple hierarchy (root CA is also issuing the end entity certificates), so simply make a copy of the ``ca.pem``:: cp ~/mysite/tls/ca.pem ~/mysite/tls/truststore.pem .. note:: A useful feature that all roles implement is a check to see if certificates will expire within the next 30 days. This check is performed via cronjob at midnight, and failing results will end-up being delivered to the ``root`` user on local server. Later on, once you have configured the mail server, you should be able to set-up the necessary aliases to have the mails delivered to non-local accounts too. Preseed files ------------- The ``preseed`` role is useful for generating Debian preseed files. Preseed files can be used for automating the Debian installation process. Preseed files are created on the Ansible controller, and then supplied to Debian installer. So, let's set this up for start: 1. First of all, create the playbook for generating the preseed files locally. :file:`~/mysite/playbooks/preseed.yml` :: --- - hosts: preseed roles: - preseed 2. Now we need to configure the role. 
Two parameters are mandatory - one that specifies where the preseed files are to be stored, and one that specifies the public key that should be used to pre-populate the SSH authorized keys for the ``root`` account. This is required for the initial bootstrap of servers because Debian GNU/Linux does not by default allow the ``root`` user to authenticate via SSH using a password. We will use the SSH public key generated earlier via the ``ssh-keygen`` command. Create the configuration file: :file:`~/mysite/group_vars/preseed.yml` :: --- # Public key used to authenticate remote logins via SSH for the # root account. ansible_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}" # Directory where the preseed files will be output to. preseed_directory: "~/mysite/preseed_files/" 3. Now we can generate the preseed files:: workon mysite && ansible-playbook playbooks/preseed.yml 4. If all went well, you should have the following files created: * :file:`~/mysite/preseed_files/comms.example.com.cfg` * :file:`~/mysite/preseed_files/www.example.com.cfg` * :file:`~/mysite/preseed_files/bak.example.com.cfg` 5. You can have a look at them, but you might notice that the settings in the files are not quite to your liking. In particular, they could be using the wrong timezone, defaulting to DHCP for network configuration, etc. Let's concentrate on making the network configuration changes - this is the main thing that will probably differ in your environment. Update the preseed configuration file: :file:`~/mysite/group_vars/preseed.yml` :: --- # Public key used to authenticate remote logins via SSH for the # root account. ansible_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}" # Directory where the preseed files will be output to. preseed_directory: "~/mysite/preseed_files/" # Set your default (initial) root password. preseed_root_password: changeit # Use manual network configuration (no DHCP). preseed_network_auto: no # Set the gateway for all servers. preseed_gateway: 10.32.64.1 # Set the netmask for all servers. preseed_netmask: 255.255.255.0 # Set the DNS for all servers. preseed_dns: 10.32.64.1 # Set the domain for all servers. preseed_domain: example.com # Set the server-specific options. preseed_server_overrides: comms.example.com: hostname: comms ip: 10.32.64.19 www.example.com: hostname: www ip: 10.32.64.20 bak.example.com: hostname: bak ip: 10.32.64.23 6. Now re-run the preseed playbook:: workon mysite && ansible-playbook playbooks/preseed.yml 7. The preseed files should have been updated now, and you should have the new customised configuration files in the ``preseed_files`` directory. You can now use these to install the servers. Installing the servers with preseed files ----------------------------------------- You have your preseed files now, so you can go ahead and install the servers ``comms.example.com``, ``www.example.com``, and ``bak.example.com`` using them with the network install CD. Have a look at `Debian instructions `_ for more details. 
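Before booting any of the installers, it can be worth a quick sanity check that the per-server overrides ended up in the right files. A minimal, purely optional way to do this (assuming the paths and addresses used in the example above) is to compare two of the generated files and grep for an expected IP address - only the host-specific values should differ::

    diff ~/mysite/preseed_files/comms.example.com.cfg ~/mysite/preseed_files/www.example.com.cfg
    grep 10.32.64.19 ~/mysite/preseed_files/comms.example.com.cfg
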
If you need to, you can easily serve the preseed files from the Ansible server with Python's built-in HTTP server:: cd ~/mysite/preseed_files/ python3 -m http.server 8000 Then you can point the installer to the preseed file by selecting ``Advanced options -> Automated install`` (don't press ``ENTER`` yet), then pressing ``TAB``, and appending the following at the end (just fill-in the correct hostname - ``comms``, ``www``, or ``bak``):: url=http://ansible.example.com:8000/HOSTNAME.example.com.cfg Bootstrapping servers for Ansible set-up ---------------------------------------- In order to effectively use Ansible, a small initial bootstrap always has to be done for managed servers. This mainly involves set-up of Ansible users on the destination machine, and distributing the SSH public keys for authorisation. When you use the preseed configuration files to deploy a server, you get the benefit of having the authorized_keys set-up for the root operating system user, making it easier to bootstrap the machines subsequently via Ansible. Let's bootstrap our machines now: 1. For start, create a dedicated playbook for the bootstrap process. :file:`~/mysite/playbooks/bootstrap.yml` :: --- - hosts: - communications - web - backup remote_user: root roles: - bootstrap 2. The ``bootstrap`` role has only one parameter - an SSH key which should be deployed for the Ansible user on the managed server (in the ``authorized_keys`` file). Since this role is applied against all servers, we will use the same value everywhere. Configure the role: :file:`~/mysite/group_vars/all.yml` :: --- ansible_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}" 3. SSH into all machines at least once from the Ansible server in order to store the SSH fingerprints into the known hosts file:: ssh root@comms.example.com date && \ ssh root@www.example.com date && \ ssh root@bak.example.com date 4. Now, simply run the bootstrap role against the servers:: workon mysite && ansible-playbook playbooks/bootstrap.yml 5. At this point you won't be able to ssh into the machines with the ``root`` account anymore. You will be able to ssh into the machines using the ``ansible`` user (from the Ansible server). The ``ansible`` user will also be granted the ability to run ``sudo`` commands without providing a password. 6. Now you can finally move on to configuring what you really want - common configuration and services for your site. Common server configuration --------------------------- Each server needs to share some common configuration in order to function properly. This includes set-up of some shared accounts, perhaps some hardening, etc. .. note:: Should you ever need to limit what hosts can connect to a server for some kind of maintenance or upgrade purposes, the ``common`` role comes with ``maintenance`` and ``maintenance_allowed_hosts`` parameters. See :ref:`rolereference` for more information. Let's take care of this common configuration right away: 1. Create playbook for the communications server: :file:`~/mysite/playbooks/communications.yml` :: --- - hosts: communications remote_user: ansible become: yes roles: - common 2. Create playbook for the web server: :file:`~/mysite/playbooks/web.yml` :: --- - hosts: web remote_user: ansible become: yes roles: - common 3. Create playbook for the backup server: :file:`~/mysite/playbooks/backup.yml` :: --- - hosts: backup remote_user: ansible become: yes roles: - common 4. 
Create the global site playbook: :file:`~/mysite/playbooks/site.yml` :: --- - import_playbook: preseed.yml - import_playbook: communications.yml - import_playbook: web.yml - import_playbook: backup.yml 5. Time to create configuration for the role. Since this role is supposed to set-up a common base, we'll set-up the variables file that applies to all roles: :file:`~/mysite/group_vars/all.yml` :: --- ansible_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}" os_users: - name: admin uid: 1000 additional_groups: - sudo authorized_keys: - "{{ lookup('file', '~/.ssh/id_rsa.pub') }}" password: "{{ 'admin' | password_hash('sha512') }}" common_packages: - emacs-nox ca_certificates: truststore: "{{ lookup('file', '~/mysite/tls/truststore.pem') }}" .. note:: The ``common`` role comes with ability to set-up time synchronisation using NTP. This is not done by default. For details see the role parameter ``ntp_servers``. .. note:: The ``ca_certificates`` parameter lets us deploy custom CA certificates on servers. The name we pick (in this case ``truststore``) can be set to anything. In this particular case, we want to deploy our own CA certificate for use as truststore, since this is what the services will use to validate server certificates when connecting to each-other. 6. That's all for configuration, time to apply the changes:: workon mysite && ansible-playbook playbooks/site.yml 7. After this you should be able to *ssh* from Ansible server onto the managed servers as user ``admin`` using the *SSH* private key of the ``ansible`` user on controller machine. The ``admin`` user's password has also been set to ``admin``, and the user will be member of ``sudo`` group. .. note:: Remote logins over SSH using password authentication are explicitly disabled as part of common set-up/hardening. Introducing LDAP ---------------- Since some of the services actually depend on LDAP, we'll go ahead and set that one up first. This includes both the LDAP *server* and *client* configuration. 1. Update the playbook for communications server to include the LDAP client and server roles (``ldap_client`` and ``ldap_server``, respectively). :file:`~/mysite/playbooks/communications.yml` :: --- - hosts: communications remote_user: ansible become: yes roles: - common - ldap_client - ldap_server 2. Update the playbook for web server to include the LDAP client role (``ldap_client``). You never know when it might come in handy :) :file:`~/mysite/playbooks/web.yml` :: --- - hosts: web remote_user: ansible become: yes roles: - common - ldap_client 3. Time to configure the roles. For start, let us configure the LDAP server role. Keep in mind that there is a lot of default variables set-up by the role itself, making our config rather short. The ``ldap_server_domain`` parameter will be used to form the base DN of the LDAP directory (resulting in ``dc=example,dc=com``). :file:`~/mysite/group_vars/communications.yml` :: --- ldap_admin_password: admin ldap_server_domain: example.com ldap_server_organization: "Example Inc." ldap_server_tls_certificate: "{{ lookup('file', '~/mysite/tls/comms.example.com_ldap.pem') }}" ldap_server_tls_key: "{{ lookup('file', '~/mysite/tls/comms.example.com_ldap.key') }}" 4. Phew. That was... Well, actually, easy :) Technically, only the LDAP admin password, domain, and TLS certificate/key *must* be set, but it is nice to have organisation explicitly specified as well (instead of using whatever Debian picks as default). Let us add the LDAP client configuration next. 
We will start off with global LDAP client configuration. In case of the LDAP client role, we have got to be a bit more explicit. :file:`~/mysite/group_vars/all.yml` :: # Observe how we set the base DN. By default the ldap_server role # (defined up there) will use server's domain to form the base for LDAP. ldap_client_config: - comment: Set the base DN option: BASE value: dc=example,dc=com - comment: Set the default URI option: URI value: ldap://comms.example.com/ - comment: Set the LDAP TLS truststore option: TLS_CACERT value: /etc/ssl/certs/truststore.pem - comment: Enforce TLS option: TLS_REQCERT value: demand 5. Ok, so this looks nice and dandy... But, let's have a bit better configuration on the communications server itself. Namely, on that one we should be able to connect to the LDAP server via unix socket instead of TCP. :file:`~/mysite/group_vars/communications.yml` :: ldap_client_config: - comment: Set the base DN option: BASE value: dc=example,dc=com - comment: Set the default URI option: URI value: ldapi:/// - comment: Set the default bind DN, useful for administration. option: BINDDN value: cn=admin,dc=example,dc=com - comment: Set the LDAP TLS truststore option: TLS_CACERT value: /etc/ssl/certs/truststore.pem - comment: Enforce TLS option: TLS_REQCERT value: demand 6. Ok, time to re-run the playbooks again... Wait a minute, something is missing here... Ah, right, we have to generate the TLS private key and issue the X.509 certificate. 1. Create template for the ``certtool`` so it would know what extensions and content to have in the certificate: :file:`~/mysite/tls/comms.example.com_ldap.cfg` :: organization = "Example Inc." country = SE cn = "Exampe Inc. LDAP Server" expiration_days = 365 dns_name = "comms.example.com" tls_www_server signing_key encryption_key 2. Almost there... Now let us generate the key and issue the certificate:: certtool --sec-param normal --generate-privkey --outfile ~/mysite/tls/comms.example.com_ldap.key certtool --generate-certificate --load-ca-privkey ~/mysite/tls/ca.key --load-ca-certificate ~/mysite/tls/ca.pem --template ~/mysite/tls/comms.example.com_ldap.cfg --load-privkey ~/mysite/tls/comms.example.com_ldap.key --outfile ~/mysite/tls/comms.example.com_ldap.pem 7. And now, for the finishing touch, just run the playbooks again:: workon mysite && ansible-playbook playbooks/site.yml Adding mail server ------------------ The next thing in line is to implement the mail server capability. *Majic Ansible Roles* come with two distinct mail server-related roles. One for setting-up a mail server host (with authenticated IMAP, SMTP, mail storage etc), and one for setting-up a local SMTP mail forwarder (for having the rest of your servers relay their mails to the mail server host). .. note:: Should you ever need to deploy the forwarder role on a laptop or machine behind NAT, make sure to look at the ``smtp_from_relay_allowed`` parameter. In case you need to connect to the SMTP relay via non-standard port (for example to work-around ISP blocks), have a look at the ``smtp_relay_host_port`` parameter. The mail server role looks-up available mail domains, users, and aliases in the LDAP directory. This has already been set-up on the server ``comms.example.com``, but some changes will be required. 1. Update the playbook for communications server to include the mail server role. :file:`~/mysite/playbooks/communications.yml` :: --- - hosts: communications remote_user: ansible become: yes roles: - common - ldap_client - ldap_server - mail_server 2. 
Let's configure the role next. :file:`~/mysite/group_vars/communications.yml` :: # Set the LDAP URL to connect through. Keep in mind TLS is required. mail_ldap_url: ldap://comms.example.com/ # Here we need to point to the base DN of LDAP server. A bunch of entries # will need to exist under it for service to function correctly, though. mail_ldap_base_dn: dc=example,dc=com # Separate LDAP entries are used for Postfix/Dovecot # authentication. Therefore we have two passwords here. mail_ldap_postfix_password: postfix mail_ldap_dovecot_password: dovecot # Setting uid/gid is optional, but you might have a policy on how to # assign UIDs and GIDs, so it is convenient to be able to change this. mail_user_uid: 5000 mail_user_gid: 5000 # Set private keys and certificates to use for the IMAP service. imap_tls_certificate: "{{ lookup('file', '~/mysite/tls/comms.example.com_imap.pem') }}" imap_tls_key: "{{ lookup('file', '~/mysite/tls/comms.example.com_imap.key') }}" # Set private keys and certificates to use for the SMTP service. smtp_tls_certificate: "{{ lookup('file', '~/mysite/tls/comms.example.com_smtp.pem') }}" smtp_tls_key: "{{ lookup('file', '~/mysite/tls/comms.example.com_smtp.key') }}" # Set the X.509 certificate truststore to use for validating the # LDAP server certificate. mail_ldap_tls_truststore: "{{ lookup('file', '~/mysite/tls/truststore.pem') }}" 3. There are two distinct mail services that need to access the LDAP directory - *Postfix* (serving as an SMTP server), and *Dovecot* (serving as an IMAP server). These two need their own dedicated LDAP entries on the LDAP server in order to log-in. Luckily, it is easy to create such entries through the options provided by the LDAP server role. In addition to this, the Postfix and Dovecot services will check if users are members of ``mail`` group in LDAP directory before accepting them as valid mail users. Once again, the LDAP server role comes with a simple option for creating groups. :file:`~/mysite/group_vars/communications.yml` :: # Don't forget, the passwords here must match with passwords specified # under options mail_ldap_postfix_password/mail_ldap_dovecot_password. ldap_server_consumers: - name: postfix password: postfix - name: dovecot password: dovecot ldap_server_groups: - name: mail 4. Ok, so now our SMTP and IMAP service can log-in into the LDAP server to look-up the mail server information. We have also defined the mail group for limitting which users get mail service. However, we don't have any user/domain information yet. So let's change that, using the ``ldap_entries`` option from LDAP server role. .. warning:: Long-term, you probably want to manage these entries manually or through other means than the ``ldap_entries`` option. The reason for this is because this type of data in LDAP directory can be considered more of an operational/application data than configuration data that frequently changes (especially the user passwords/info). Backups of LDAP directory on regular basis are important. We will get to that at a later point. :file:`~/mysite/group_vars/communications.yml` :: ldap_entries: # Create first a couple of user entries. Don't forget to set the # "mail" attribute for them. 
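# A side note on the user entries that follow: the userPassword values are
# written to the directory as given, so for anything beyond this test set-up
# you may want to supply pre-hashed values instead (for example, generated
# with the slappasswd utility) rather than plain-text passwords.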
- dn: uid=johndoe,ou=people,dc=example,dc=com attributes: objectClass: - inetOrgPerson uid: johndoe cn: John Doe sn: Doe userPassword: johndoe mail: john.doe@example.com - dn: uid=janedoe,ou=people,dc=example,dc=com attributes: objectClass: - inetOrgPerson uid: janedoe cn: Jane Doe sn: Doe userPassword: janedoe mail: jane.doe@example.com # Let's register our domain in LDAP directory. - dn: dc=example.com,ou=domains,ou=mail,ou=services,dc=example,dc=com attributes: objectClass: dNSDomain dc: "example.com" # Finally, for the lolz, let's also add the standard postmaster alias # for our domain. This one will also receive any undeliverable bounced # mails. - dn: cn=postmaster@example.com,ou=aliases,ou=mail,ou=services,dc=example,dc=com attributes: objectClass: nisMailAlias cn: postmaster@example.com rfc822MailMember: john.doe@example.com 5. Once again, before we apply the configuration, we must make sure the necessary TLS private keys and certificates are available. In this particular case, we need to set-up separate key/certificate pair for both the SMTP and IMAP service: 1. Create new templates for ``certtool``: :file:`~/mysite/tls/comms.example.com_smtp.cfg` :: organization = "Example Inc." country = SE cn = "Exampe Inc. SMTP Server" expiration_days = 365 dns_name = "comms.example.com" tls_www_server signing_key encryption_key :file:`~/mysite/tls/comms.example.com_imap.cfg` :: organization = "Example Inc." country = SE cn = "Exampe Inc. IMAP Server" expiration_days = 365 dns_name = "comms.example.com" tls_www_server signing_key encryption_key 2. Create the keys and certificates for SMTP/IMAP services based on the templates:: certtool --sec-param normal --generate-privkey --outfile ~/mysite/tls/comms.example.com_smtp.key certtool --generate-certificate --load-ca-privkey ~/mysite/tls/ca.key --load-ca-certificate ~/mysite/tls/ca.pem --template ~/mysite/tls/comms.example.com_smtp.cfg --load-privkey ~/mysite/tls/comms.example.com_smtp.key --outfile ~/mysite/tls/comms.example.com_smtp.pem certtool --sec-param normal --generate-privkey --outfile ~/mysite/tls/comms.example.com_imap.key certtool --generate-certificate --load-ca-privkey ~/mysite/tls/ca.key --load-ca-certificate ~/mysite/tls/ca.pem --template ~/mysite/tls/comms.example.com_imap.cfg --load-privkey ~/mysite/tls/comms.example.com_imap.key --outfile ~/mysite/tls/comms.example.com_imap.pem 6. Configuration and TLS keys have ben set-up, so it is time to apply the changes:: workon mysite && ansible-playbook playbooks/site.yml 7. Let's add the two users to the mail group (otherwise, the mail server will ignore them). We'll use the ``ldap_attr`` module directly to make our life a bit easier:: workon mysite && ansible --become -m ldap_attr -a "dn=cn=mail,ou=groups,dc=example,dc=com state=present name=uniqueMember values=uid=johndoe,ou=people,dc=example,dc=com" communications workon mysite && ansible --become -m ldap_attr -a "dn=cn=mail,ou=groups,dc=example,dc=com state=present name=uniqueMember values=uid=janedoe,ou=people,dc=example,dc=com" communications 8. If no errors have been reported, at this point you should have two mail accounts - ``john.doe@example.com``, with password ``johndoe``, and ``jane.doe@example.com``, with password ``janedoe``. In this particular set-up, the mail addresses are used as usernames. 
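Before moving on to testing mail itself, you can optionally double-check that the user entries and the ``mail`` group membership actually landed in the directory. Below is a minimal sketch using ``ldapsearch`` (run from the communications server), assuming the ``cn=admin,dc=example,dc=com`` bind DN and the ``ldap_admin_password`` value configured earlier - adjust to your own set-up as needed::

    ldapsearch -x -ZZ -H ldap://comms.example.com/ -D cn=admin,dc=example,dc=com -W \
        -b ou=people,dc=example,dc=com '(uid=johndoe)' mail
    ldapsearch -x -ZZ -H ldap://comms.example.com/ -D cn=admin,dc=example,dc=com -W \
        -b cn=mail,ou=groups,dc=example,dc=com uniqueMember
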
If you want to test it out, simply install ``swaks`` on your Ansible machine, and run something along the lines of :: swaks --to john.doe@example.com --server comms.example.com swaks --to jane.doe@example.com --server comms.example.com Of course, feel free to also test out the mail server using any mail client of your choice. When doing so, use port 587 for SMTP. Port 25 is reserved for unauthenticated server-to-server mail deliveries. If you face issues with ISPs or hotels blocking the two ports listed above, you can also use alternative ports 26 (redirected to port 587) and 27 (redirected to port 25). TLS has also been hardened on port 587 to allow only TLSv1.2 and PFS ciphers (you can override TLS versions/ciphers via role configuration). TLS configuration on port 25 has been left unchanged for maximum interoperability with other servers. Setting-up mail relaying from web and backup servers ---------------------------------------------------- With the mail server set-up, the next thing to do would be to set-up the SMTP server on web and backup servers to relay mails via the communications server. This way we can make sure that mail that gets sent via local SMTP to external addresses on those two servers goes through our anti-virus scanner. 1. Update the list of roles for web and backup server to include the mail forwarder role. :file:`~/mysite/playbooks/web.yml` :: --- - hosts: web remote_user: ansible become: yes roles: - common - ldap_client - mail_forwarder :file:`~/mysite/playbooks/backup.yml` :: --- - hosts: backup remote_user: ansible become: yes roles: - common - mail_forwarder 2. The next thing is to set-up the configuration for the new role. We can define this globally for all servers. :file:`~/mysite/group_vars/all.yml` :: # Define what X.509 certificates should be used for validating # the certificate of the server we are relaying the mails through. smtp_relay_truststore: "{{ lookup('file', '~/mysite/tls/truststore.pem') }}" # Make sure any mails directed to localhost root account get # forwarded to one of our mail users as well. local_mail_aliases: root: root john.doe@example.com # Now signal the local SMTP to relay any non-local mails via our # communications server. Don't forget to specify your own IP address (or # FQDN) here. Without this option, the SMTP would send out the mails # directly. smtp_relay_host: comms.example.com 3. Although we have told our web and backup servers to use the communications server as a relay for non-local mail, the communications server is not aware of this. This would result in the communications server refusing all relay attempts (if not, it would be an open relay, which is bad). So, let's fix this a bit - we have a configuration option for the mail server for exactly this purpose. :file:`~/mysite/group_vars/communications.yml` :: # We want to allow relaying of mails from our web and backup servers # here. Beware of IP spoofing, though! Don't forget to change the below # IP for your server ;) smtp_allow_relay_from: - 10.32.64.20 - 10.32.64.23 4. Let's apply the changes:: workon mysite && ansible-playbook playbooks/site.yml 5. After this you may want to test out sending mail via web or backup server's local SMTP to the root user (to see if the aliasing works), and to some external mail address to check if forwarding works correctly too. 
Run something similar to the following on your web server:: swaks --to root@localhost --server localhost swaks --to YOUR_MAIL --server localhost If all went well, you should be able to see a new mail in John Doe's mailbox, as well as your own mailbox. Adding XMPP server ------------------ Now that the users can communicate via mail server, we might as well add support for some instant messaging. For this purpose, we will use the ``xmpp_server`` role. 1. Update the playbook for communications server to include the XMPP server role. :file:`~/mysite/playbooks/communications.yml` :: --- - hosts: communications remote_user: ansible become: yes roles: - common - ldap_client - ldap_server - mail_server - xmpp_server 2. Configure the role. :file:`~/mysite/group_vars/communications.yml` :: # Set the TLS private key and certificate. xmpp_tls_certificate: "{{ lookup('file', '~/mysite/tls/comms.example.com_xmpp.pem') }}" xmpp_tls_key: "{{ lookup('file', '~/mysite/tls/comms.example.com_xmpp.key') }}" # Set one of the users to also be an XMPP administrator. xmpp_administrators: - john.doe@example.com # Unfortunately, XMPP can't look-up domains via LDAP, so we need to be # explicit here. xmpp_domains: - example.com # Simply point the XMPP server to base DN of LDAP server, and let it use # specific directory structure it expects. xmpp_ldap_base_dn: dc=example,dc=com # Password for logging-in into the LDAP directory. xmpp_ldap_password: prosody # Where the LDAP server is located at. Full-blown LDAP URIs are _not_ # supported! xmpp_ldap_server: comms.example.com 3. Now, like in case of the mail server role, we need to set-up authentication for the XMPP service. In this particular case a single consumer is present - Prosody itself. We should also create the group for granting the users right to use the service. :file:`~/mysite/group_vars/communications.yml` :: # Just make sure the new entry is added for the prosody user - you can # leave the postfix/dovecot intact in your file if you use different # passwords. Keep in mind password for prosody user must match with # password specified under xmpp_ldap_password. ldap_server_consumers: - name: postfix password: postfix - name: dovecot password: dovecot - name: prosody password: prosody # And simply append a new group here... ldap_server_groups: - name: mail - name: xmpp 4. Do you know what comes next? Yes! Create some more TLS private keys and certificates, this time for our XMPP server ;) 1. Create new template for ``certtool``: :file:`~/mysite/tls/comms.example.com_xmpp.cfg` :: organization = "Example Inc." country = SE cn = "Exampe Inc. XMPP Server" expiration_days = 365 dns_name = "example.com" tls_www_server signing_key encryption_key 2. Create the keys and certificates for XMPP service based on the template:: certtool --sec-param normal --generate-privkey --outfile ~/mysite/tls/comms.example.com_xmpp.key certtool --generate-certificate --load-ca-privkey ~/mysite/tls/ca.key --load-ca-certificate ~/mysite/tls/ca.pem --template ~/mysite/tls/comms.example.com_xmpp.cfg --load-privkey ~/mysite/tls/comms.example.com_xmpp.key --outfile ~/mysite/tls/comms.example.com_xmpp.pem 5. Apply the changes:: workon mysite && ansible-playbook playbooks/site.yml 6. Ok, configuration of the role is complete. You may have noticed that we still haven't added any users to the new LDAP group called "xmpp". So let us correct this in similar way as we did for the mail server. Since we have the user entries already, no need to recreate them here. 
We will just update the group membership instead. .. warning:: The same warning applies here as for the mail server role regarding management of the user/group entries! Scroll up and re-read it if you missed it! :: workon mysite && ansible --become -m ldap_attr -a "dn=cn=xmpp,ou=groups,dc=example,dc=com state=present name=uniqueMember values=uid=johndoe,ou=people,dc=example,dc=com" communications workon mysite && ansible --become -m ldap_attr -a "dn=cn=xmpp,ou=groups,dc=example,dc=com state=present name=uniqueMember values=uid=janedoe,ou=people,dc=example,dc=com" communications 7. If no errors have been reported, at this point you should have two users capable of using the XMPP service - one with username ``john.doe@example.com`` and one with username ``jane.doe@example.com``. The same passwords are used as when you created the two users for the mail server. For testing you can turn to your favourite XMPP client (I don't know of any quick CLI-based tools to test the XMPP server functionality, unfortunately, but you could try using `mcabber `_). Taking a step back - preparing for web server --------------------------------------------- Up until now the usage instructions have dealt almost exclusively with the communications server. That is, we haven't done anything beyond the basic set-up of the other servers. Let us first define what we want to deploy on the web server. Here is the plan: 1. First off, we will set-up the web server. This will be necessary no matter what web application we decide to deploy later on. 2. Next, we will set-up a database server. Why? Well, most web applications need to use some sort of database to store all the data, so we might as well try to take that one out of the way. 3. With this basic deployment for a web server in place, we can move on to setting-up a couple of web applications. For the purpose of the usage instructions, we will deploy the following two: 1. `The Bug Genie `_ - an issue tracker. To keep things simple, we will not integrate it with our LDAP server (although this is supported and possible). Being written in PHP, this will demonstrate the role for PHP web application deployment. 2. `Django Wiki `_ - a wiki application written in Django. This will serve as a demo of how the WSGI role works. It should be noted that the web application deployment roles are a bit more complex - namely, they are not meant to be used directly, but instead as a dependency for a custom role. They do come with a decent amount of batteries included, and also play nice with the web server role. As mentioned before, all roles will enforce TLS by default. The web server roles will additionally implement an HSTS policy by sending connecting clients the ``Strict-Transport-Security`` header with value set to ``max-age=31536000; includeSubDomains``. With all the above noted, let us finally move on to the next step. Setting-up the web server ------------------------- Finally we are moving on to the web server deployment, and we shall start with... Well, erm, web server deployment! To be more precise, we will set-up Nginx. 1. Update the playbook for web server to include the web server role. :file:`~/mysite/playbooks/web.yml` :: --- - hosts: web remote_user: ansible become: yes roles: - common - ldap_client - mail_forwarder - web_server 2. You know the drill, role configuration comes up next. No configuration has been deployed before for the web server, so we will be creating a new file. 
Only the TLS parameters are really necessary, but we'll spice things up a bit by setting custom title and message for default virtual host. :file:`~/mysite/group_vars/web.yml` :: --- default_https_tls_certificate: "{{ lookup('file', '~/mysite/tls/www.example.com_https.pem') }}" default_https_tls_key: "{{ lookup('file', '~/mysite/tls/www.example.com_https.key') }}" web_default_title: "Welcome to default page!" web_default_message: "Nothing to see here, move along..." 3. The only thing left now is to create the TLS private key/certificate pair that should be used for default virtual host. 1. Create new template for ``certtool``: :file:`~/mysite/tls/www.example.com_https.cfg` :: organization = "Example Inc." country = SE cn = "Exampe Inc. Web Server" expiration_days = 365 dns_name = "www.example.com" tls_www_server signing_key encryption_key 2. Create the keys and certificates for default web server virtual host based on the template:: certtool --sec-param normal --generate-privkey --outfile ~/mysite/tls/www.example.com_https.key certtool --generate-certificate --load-ca-privkey ~/mysite/tls/ca.key --load-ca-certificate ~/mysite/tls/ca.pem --template ~/mysite/tls/www.example.com_https.cfg --load-privkey ~/mysite/tls/www.example.com_https.key --outfile ~/mysite/tls/www.example.com_https.pem 4. Apply the changes:: workon mysite && ansible-playbook playbooks/site.yml 5. If no errors have been reported, at this point you should have a default web page available and visible at https://www.example.com/ . By default plaintext connections are disabled, and trying to visit http://www.example.com/ should simply redirect you to the HTTPS address. Feel free to try it out with some browser. Keep in mind you will get a warning about the untrusted certificate! Adding the database server -------------------------- Since both of the web applications we want to deploy need a database, we will proceed to set-up the database server role on the web server itself. *Majic Ansible Roles* in particular come with a role that will deploy MariaDB database server. .. note:: The ``database_server`` role will set-up unix socket authentication for the database ``root`` user. I.e. the ``root`` database user will have no password set, but authentication will pass only when logging-in as the operating system ``root`` user while connecting over database server unix socket. 1. Update the playbook for web server to include the database server role. :file:`~/mysite/playbooks/web.yml` :: --- - hosts: web remote_user: ansible become: yes roles: - common - ldap_client - mail_forwarder - web_server - database_server 2. This particular role has no parameters, and no additional steps are necessary to configure it. So move along... 3. No TLS support has been implemented for this role (yet), so simply apply the changes:: workon mysite && ansible-playbook playbooks/site.yml 4. If no errors have been reported, you should have a database server up and running on the web server. You should be able to log-in as ``root`` operating system user by running the following command on the web server itself:: mysql Of course, no database has been created for either of the web applications, but we will get to that one later (there is a dedicated ``database`` role which can be combined with web app roles for this purpose). Deploying a PHP web application (The Bug Genie) ----------------------------------------------- We have some basic infrastructure up and running on our web server, so now we can move on to setting-up a PHP web application on it. 
As mentioned before, we will take *The Bug Genie* as an example. For this we will create a local role in our site to take care of it. This role will in turn utilise two roles coming from *Majic Ansible Roles* that will make our life (a little) easier. To make the example a bit simpler, no parameters will be introduced for this role (not even the password for database, we'll hard-code everything). Before we start, here is a couple of useful pointers regarding the ``php_website`` role we'll be using for the PHP part: * The role is designed to execute every application via dedicated user and group. The user/group name is automatically derived from the FQDN of website, for example ``web-tbg_example_com``. * While running the application, application user's umask is set to ``0007`` (letting the administrator user be able to manage any files created while the application is running). * An administrative user is created as well, and this user should be used when running maintenance and installation commands. Similar to application user, the name is also derived from the FQDN of website, for example ``admin-tbg_example_com``. Administrative user does not have a dedicated group, and instead belongs to same group as the application user. * PHP applications are executed via FastCGI, using *PHP-FPM*. * If you ever need to set some additional PHP FPM settings, this can easily be done via the ``additional_fpm_config`` role parameter. This particular example does not set any, though. * Mails delivered to local admin/application users are forwarded to ``root`` account instead (this can be configured via ``website_mail_recipients`` role parameter. * If you ever find yourself mixing-up test and production websites, have a look at ``environment_indicator`` role parameter. It lets you insert small strip with environment information at bottom of each HTML page served by the web server. * Static content (non-PHP) is served directly by *Nginx*. * Each web application gets distinct sub-directory under ``/var/www``, named after the FQDN. All sub-directories created under there are created with ``02750`` permissions, with ownership set to admin user, and group set to the application's group. In other words, all directories will have ``SGID`` bit set, allowing you to create files/directories that will have their group automatically set to the group of the parent directory. * Files are served (both by *Nginx* and *PHP-FPM*) from sub-directory called ``htdocs`` (located in website directory). For example ``/var/www/tbg.example.com/htdocs/``. Normally, this can be a symlink to some other sub-directory within the website directory (useful for having multiple versions for easier downgrades etc). * Combination of admin user membership in application group, ``SGID`` permission, and the way ownership of sub-directories is set-up usually means that the administrator will be capable of managing application files, and application can be granted write permissions to a *minimum* of necessary files. .. warning:: Just keep in mind that some file-management commands, like ``mv``, do *not* respect the ``SGID`` bit. In fact, I would recommend using ``cp`` when you deploy new files to the directory instead (don't simply move them from your home directory). 1. Start-off with creating the necessary directories for the new role:: mkdir -p ~/mysite/roles/tbg/{tasks,meta,files}/ 2. Let's set-up role dependencies, reusing some common roles to make our life easier. 
:file:`~/mysite/roles/tbg/meta/main.yml` :: --- dependencies: # Ok, so this role helps us set-up Nginx virtual host for serving our # app. - role: php_website # Our virtual host will for PHP website will respond to this name. fqdn: tbg.example.com # TLS key and certificate to use for the virtual host. https_tls_certificate: "{{ lookup('file', '~/mysite/tls/tbg.example.com_https.pem') }}" https_tls_key: "{{ lookup('file', '~/mysite/tls/tbg.example.com_https.key') }}" # Some additional packages are required in order to deploy and use TBG. packages: - php-gd - php-curl - php-mbstring - php-xml - git - php-mysql - php-apcu - php-zip # Set-up URL rewriting. This is based on public/.htaccess file from # TBG. php_rewrite_urls: - ^(.*)$ /index.php?url=$1 # We don't necessarily need this, but in case you have a policy on # uid/gid usage, this is useful. Take note that below value is used # for both the dedicated uid and gid for application user. uid: 2000 admin_uid: 3000 # And this role sets up a new dedicated database for our web # application. - role: database # This is both the database name, _and_ name of the database user # that will be granted full privileges on the database. db_name: tbg # This will be the password of our user 'tbg' for accessing the # database. Take note the user can only login from localhost. db_password: tbg 3. Now for my favourite part again - creating private keys and certificates! Why? Because the ``php_website`` role requires a private key/certificate pair to be deployed. So... Moving on: 1. Create new template for ``certtool``: :file:`~/mysite/tls/tbg.example.com_https.cfg` :: organization = "Example Inc." country = SE cn = "Exampe Inc. Issue Tracker" expiration_days = 365 dns_name = "tbg.example.com" tls_www_server signing_key encryption_key 2. Create the keys and certificates for the application:: certtool --sec-param normal --generate-privkey --outfile ~/mysite/tls/tbg.example.com_https.key certtool --generate-certificate --load-ca-privkey ~/mysite/tls/ca.key --load-ca-certificate ~/mysite/tls/ca.pem --template ~/mysite/tls/tbg.example.com_https.cfg --load-privkey ~/mysite/tls/tbg.example.com_https.key --outfile ~/mysite/tls/tbg.example.com_https.pem 4. Time to get our hands a bit more dirty... Up until now we didn't have to write custom tasks, but at this point we need to. 
:file:`~/mysite/roles/tbg/tasks/main.yml` :: --- - name: Define TBG version set_fact: tbg_version: "4.3.1" tbg_archive_checksum: "45de72b1ef82142ad46686577d593375ba370156df4367d17386b4e26a37f342" - name: Download the TBG archive get_url: url: "https://github.com/thebuggenie/thebuggenie/archive/v{{ tbg_version }}.tar.gz" dest: "/var/www/tbg.example.com/thebuggenie-{{ tbg_version }}.tar.gz" sha256sum: "{{ tbg_archive_checksum }}" become: yes become_user: admin-tbg_example_com - name: Download Composer get_url: url: "https://getcomposer.org/download/1.10.19/composer.phar" dest: "/usr/local/bin/composer" sha256sum: "688bf8f868643b420dded326614fcdf969572ac8ad7fbbb92c28a631157d39e8" owner: root group: root mode: 0755 - name: Unpack TBG unarchive: src: "/var/www/tbg.example.com/thebuggenie-{{ tbg_version }}.tar.gz" dest: "/var/www/tbg.example.com/" copy: no creates: "/var/www/tbg.example.com/thebuggenie-{{ tbg_version }}" become: yes become_user: admin-tbg_example_com - name: Allow use of lib-pcre version 10 (since PHP is built against it in Debian Buster) lineinfile: dest: "/var/www/tbg.example.com/thebuggenie-{{ tbg_version }}/{{ item }}" state: present regexp: '.*"lib-pcre".*' line: ' "lib-pcre": ">=8.0",' with_items: - "composer.json" - "composer.lock" - name: Create directory for storing uploaded files file: path: "/var/www/tbg.example.com/files" state: directory mode: 02770 become: yes become_user: admin-tbg_example_com - name: Create symlink towards directory where uploaded files are stored file: src: "/var/www/tbg.example.com/files" dest: "/var/www/tbg.example.com/thebuggenie-{{ tbg_version }}/files" state: link become: yes become_user: admin-tbg_example_com - name: Create TBG cache directory file: path: "/var/www/tbg.example.com/thebuggenie-{{ tbg_version }}/cache" state: directory mode: 02770 become: yes become_user: admin-tbg_example_com - name: Set-up the necessary write permissions for TBG directories file: path: "{{ item }}" mode: g+w with_items: - /var/www/tbg.example.com/thebuggenie-{{ tbg_version }}/ - /var/www/tbg.example.com/thebuggenie-{{ tbg_version }}/public/ - /var/www/tbg.example.com/thebuggenie-{{ tbg_version }}/core/config/ - name: Create symbolic link to TBG application file: src: "/var/www/tbg.example.com/thebuggenie-{{ tbg_version }}/public" path: "/var/www/tbg.example.com/htdocs" state: link owner: "admin-tbg_example_com" group: "web-tbg_example_com" become: yes become_user: admin-tbg_example_com - name: Install TBG dependencies composer: command: install working_dir: "/var/www/tbg.example.com/thebuggenie-{{ tbg_version }}" become: yes become_user: admin-tbg_example_com - name: Deploy database configuration file for TBG copy: src: "b2db.yml" dest: "/var/www/tbg.example.com/thebuggenie-{{ tbg_version }}/core/config/b2db.yml" mode: 0640 owner: admin-tbg_example_com group: web-tbg_example_com - name: Install pexpect package apt: name: python3-pexpect state: present - name: Deploy expect script for installing TBG copy: src: "tbg_expect_install" dest: "/var/www/tbg.example.com/tbg_expect_install" mode: 0750 owner: admin-tbg_example_com group: web-tbg_example_com - name: Run TBG installer via expect script command: /var/www/tbg.example.com/tbg_expect_install args: chdir: "/var/www/tbg.example.com/thebuggenie-{{ tbg_version }}" creates: "/var/www/tbg.example.com/thebuggenie-{{ tbg_version }}/installed" become: yes become_user: admin-tbg_example_com 5. Set-up the files that are deployed by our role. 
:file:`~/mysite/roles/tbg/files/b2db.yml` :: b2db: username: "tbg" password: "tbg" dsn: "mysql:host=localhost;dbname=tbg" tableprefix: '' cacheclass: '\thebuggenie\core\framework\Cache' :file:`~/mysite/roles/tbg/files/tbg_expect_install` :: #!/usr/bin/env python3 import pexpect # Spawn the process. install_process = pexpect.spawnu('./tbg_cli', args = ["install", "--accept_license=yes", "--url_subdir=/", "--use_existing_db_info=yes", "--enable_all_modules=no", "--setup_htaccess=yes"]) # If we get EOF, we probably already installed application, and ran # into error at the end since no patterns matched. try: # First confirmation. install_process.expect(u'Press ENTER to continue with the installation: ', timeout=5) install_process.sendline(u'') # Second confirmation. install_process.expect(u'Press ENTER to continue: ', timeout=5) install_process.sendline(u'') # Wait for application to finish. install_process.expect(pexpect.EOF, timeout=60) except pexpect.EOF as e: pass # Close application. install_process.close() # Print text output. print(install_process.before) # Return same exit code like child process. exit(install_process.exitstatus) 6. And... Let's add the new role to our web server. :file:`~/mysite/playbooks/web.yml` :: --- - hosts: web remote_user: ansible become: yes roles: - common - ldap_client - mail_forwarder - web_server - database_server - tbg 7. Apply the changes:: workon mysite && ansible-playbook playbooks/site.yml 8. At this point *The Bug Genie* has been installed, and you should be able to open the URL https://tbg.example.com/ and log-in into *The Bug Genie* with username ``administrator`` and password ``admin``. Deploying a WSGI application (Django Wiki) ------------------------------------------ Next thing up will be to deploy a WSGI Python application. Similar to the PHP application deployment, we will use a couple of roles to make it easier to deploy it in a standardised manner, and we will not have any kind of parameters for configuring the role to keep things simple. Most of the notes on how a ``php_website`` role is deployed also stand for the ``wsgi_website`` role, but we will reiterate and clarify them a bit just to be on the safe side: * The role is designed to execute every application via dedicated user and group. The user/group name is automatically derived from the FQDN of website, for example ``web-wiki_example_com``. * While running the application, application user's umask is set to ``0007`` (letting the administrator user be able to manage any files created while the application is running). * An administrative user is created as well, and this user should be used when running maintenance and installation commands. Similar to application user, the name is also derived from the FQDN of website, for example ``admin-wiki_example_com``. Administrative user does not have a dedicated group, and instead belongs to same group as the application user. As convenience, whenever you switch to this user the Python virtual environment will be automatically activated for you. * WSGI applications are executed via *Gunicorn*. The WSGI server listens on a Unix socket, making the socket accessible by *Nginx*. * If you ever need to set some environment variables, this can easily be done via the ``environment_variables`` role parameter. This particular example does not set any, though. * You can also specify headers to be passed on via Nginx ``proxy_set_header`` directive to Gunicorn running the application. 
* Mails delivered to local admin/application users are forwarded to the
  ``root`` account instead (this can be configured via the
  ``website_mail_recipients`` role parameter).

* If you ever find yourself mixing-up test and production websites, have a
  look at the ``environment_indicator`` role parameter. It lets you insert a
  small strip with environment information at the bottom of each HTML page
  served by the web server (a short illustration of these optional parameters
  follows the warning below).

* Static content is served directly by *Nginx*.

* Each web application gets a distinct sub-directory under ``/var/www``, named
  after the FQDN. All sub-directories created under there are created with
  ``2750`` permissions, with ownership set to the admin user, and group set to
  the application's group. In other words, all directories will have the
  ``SGID`` bit set, allowing you to create files/directories that will have
  their group automatically set to the group of the parent directory.

* Each WSGI website gets a dedicated virtual environment, stored in the
  sub-directory ``virtualenv`` of the website directory, for example
  ``/var/www/wiki.example.com/virtualenv``.

* Static files are served from the sub-directory ``htdocs`` in the website
  directory, for example ``/var/www/wiki.example.com/htdocs/``.

* The base directory for your website/application code is expected to be the
  sub-directory ``code`` in the website directory, for example
  ``/var/www/wiki.example.com/code/``.

* The combination of the admin user's membership in the application group, the
  ``SGID`` permission, and the way ownership of sub-directories is set-up
  usually means that the administrator will be able to manage application
  files, while the application itself can be granted write permissions to only
  the *minimum* of necessary files.

.. warning::
   Just keep in mind that some file-management commands, like ``mv``, do *not*
   respect the ``SGID`` bit. In fact, I would recommend using ``cp`` when you
   deploy new files to the directory instead (don't simply move them from your
   home directory).
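None of the optional parameters mentioned above are used in the example that
follows, but to give a rough idea, a role dependency could pass them along the
lines of the snippet below. The values - and, for some of the parameters, the
exact structure - are purely illustrative; consult the role reference for the
authoritative format::

   - role: wsgi_website
     fqdn: wiki.example.com

     # Illustrative only - extra environment variables made available to
     # the application while it runs.
     environment_variables:
       DJANGO_SETTINGS_MODULE: "wiki_example_com.settings"

     # Illustrative only - redirect mails for the admin/application users
     # somewhere more useful than the local root account.
     website_mail_recipients: john.doe@example.com

     # Illustrative only - mark each served page with a small strip so the
     # deployment is not mistaken for a production one.
     environment_indicator: "test"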
1. Set-up the necessary directories first::

     mkdir -p ~/mysite/roles/wiki/{tasks,meta,files,handlers}/

2. Set-up some role dependencies, reusing the common role infrastructure.

   :file:`~/mysite/roles/wiki/meta/main.yml`
   ::

      ---

      dependencies:

        - role: wsgi_website
          fqdn: wiki.example.com

          # TLS key and certificate to use for the virtual host.
          https_tls_certificate: "{{ lookup('file', '~/mysite/tls/wiki.example.com_https.pem') }}"
          https_tls_key: "{{ lookup('file', '~/mysite/tls/wiki.example.com_https.key') }}"

          # In many cases you need to have some development packages available
          # in order to build Python packages installed via pip.
          packages:
            - build-essential
            - python3-dev
            - libjpeg62-turbo
            - libjpeg-dev
            - libpng16-16
            - libpng-dev
            - libmariadb-dev
            - libmariadb-dev-compat

          # Specify that Python 3 should be used for the application.
          python_version: 3

          # Here we specify that anything accessing our website with "/static/"
          # URL should be treated as request to a static file, to be served
          # directly by Nginx instead of the WSGI server.
          static_locations:
            - /static/

          # Again, not mandatory, but it is good to have some sort of policy
          # for assigning UIDs.
          uid: 2001
          admin_uid: 3001

          # These are additional packages that should be installed in the
          # virtual environment.
          virtualenv_packages:
            - django~=2.2.0
            - wiki~=0.5.0
            - mysqlclient

          # This is the name of the WSGI application to serve.
          # wiki_example_com.wsgi will be the Python "module" that is
          # accessed, while application is the object instantiated within it
          # (the application itself). The module is referenced relative to
          # the code directory (in our case /var/www/wiki.example.com/code/).
          wsgi_application: wiki_example_com.wsgi:application

          # Specify explicitly requirements for installing Gunicorn.
          wsgi_requirements:
            - gunicorn==20.0.4
          wsgi_requirements_in:
            - gunicorn

        - role: database
          db_name: wiki
          db_password: wiki

3. Let's create a dedicated private key/certificate pair for the wiki website:

   1. Create a new template for ``certtool``:

      :file:`~/mysite/tls/wiki.example.com_https.cfg`
      ::

         organization = "Example Inc."
         country = SE
         cn = "Example Inc. Wiki"
         expiration_days = 365
         dns_name = "wiki.example.com"

         tls_www_server
         signing_key
         encryption_key

   2. Create the keys and certificates for the application::

        certtool --sec-param normal --generate-privkey --outfile ~/mysite/tls/wiki.example.com_https.key
        certtool --generate-certificate --load-ca-privkey ~/mysite/tls/ca.key --load-ca-certificate ~/mysite/tls/ca.pem --template ~/mysite/tls/wiki.example.com_https.cfg --load-privkey ~/mysite/tls/wiki.example.com_https.key --outfile ~/mysite/tls/wiki.example.com_https.pem

4. At this point we have exhausted what we can do with the built-in roles.
   Time to add some custom tasks.

   :file:`~/mysite/roles/wiki/tasks/main.yml`
   ::

      ---

      - name: Create Django project directory
        file:
          dest: "/var/www/wiki.example.com/code"
          state: directory
          owner: admin-wiki_example_com
          group: web-wiki_example_com
          mode: 02750

      - name: Start Django project for the Wiki website
        command: "/var/www/wiki.example.com/virtualenv/bin/exec django-admin.py startproject wiki_example_com /var/www/wiki.example.com/code"
        args:
          chdir: "/var/www/wiki.example.com"
          creates: "/var/www/wiki.example.com/code/wiki_example_com"
        become: yes
        become_user: admin-wiki_example_com

      - name: Deploy settings for wiki website
        copy:
          src: "{{ item }}"
          dest: "/var/www/wiki.example.com/code/wiki_example_com/{{ item }}"
          mode: 0640
          owner: admin-wiki_example_com
          group: web-wiki_example_com
        with_items:
          - settings.py
          - urls.py
        notify:
          - Restart wiki

      - name: Deploy project database and deploy static files
        django_manage:
          command: "{{ item }}"
          app_path: "/var/www/wiki.example.com/code/"
          virtualenv: "/var/www/wiki.example.com/virtualenv/"
        become: yes
        become_user: admin-wiki_example_com
        with_items:
          - migrate
          - collectstatic

      - name: Deploy the superadmin creation script
        copy:
          src: "create_superadmin.py"
          dest: "/var/www/wiki.example.com/code/create_superadmin.py"
          owner: admin-wiki_example_com
          group: web-wiki_example_com
          mode: 0750

      - name: Create initial superuser
        command: "/var/www/wiki.example.com/virtualenv/bin/exec ./create_superadmin.py"
        args:
          chdir: "/var/www/wiki.example.com/code/"
        become: yes
        become_user: admin-wiki_example_com
        register: wiki_superuser
        changed_when: "wiki_superuser.stdout == 'Created superuser.'"

   :file:`~/mysite/roles/wiki/handlers/main.yml`
   ::

      ---

      - name: Restart wiki
        service:
          name: wiki.example.com
          state: restarted
5. There are a couple of files that we are deploying through the above tasks.
   Let's create them as well.

   :file:`~/mysite/roles/wiki/files/settings.py`
   ::

      """
      Django settings for wiki_example_com project.

      For more information on this file, see
      https://docs.djangoproject.com/en/2.2/topics/settings/

      For the full list of settings and their values, see
      https://docs.djangoproject.com/en/2.2/ref/settings/
      """

      import os

      from django.urls import reverse_lazy

      # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
      BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

      # Quick-start development settings - unsuitable for production
      # See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/

      # SECURITY WARNING: keep the secret key used in production secret!
      SECRET_KEY = '8rok13az%bqtb=ya&s9sia_x*@@rhd9a%g=!6nh4tb!g14rlt^'

      # SECURITY WARNING: don't run with debug turned on in production!
      DEBUG = False

      ALLOWED_HOSTS = ["wiki.example.com", "localhost"]

      # Application definition

      INSTALLED_APPS = [
          'django.contrib.admin',
          'django.contrib.auth',
          'django.contrib.contenttypes',
          'django.contrib.sessions',
          'django.contrib.messages',
          'django.contrib.staticfiles',
          'django.contrib.sites.apps.SitesConfig',
          'django.contrib.humanize.apps.HumanizeConfig',
          'django_nyt.apps.DjangoNytConfig',
          'mptt',
          'sekizai',
          'sorl.thumbnail',
          'wiki.apps.WikiConfig',
          'wiki.plugins.attachments.apps.AttachmentsConfig',
          'wiki.plugins.notifications.apps.NotificationsConfig',
          'wiki.plugins.images.apps.ImagesConfig',
          'wiki.plugins.macros.apps.MacrosConfig',
      ]

      MIDDLEWARE = [
          'django.middleware.security.SecurityMiddleware',
          'django.contrib.sessions.middleware.SessionMiddleware',
          'django.middleware.common.CommonMiddleware',
          'django.middleware.csrf.CsrfViewMiddleware',
          'django.contrib.auth.middleware.AuthenticationMiddleware',
          'django.contrib.messages.middleware.MessageMiddleware',
          'django.middleware.clickjacking.XFrameOptionsMiddleware',
      ]

      ROOT_URLCONF = 'wiki_example_com.urls'

      TEMPLATES = [
          {
              'BACKEND': 'django.template.backends.django.DjangoTemplates',
              'DIRS': [],
              'APP_DIRS': True,
              'OPTIONS': {
                  'context_processors': [
                      'django.contrib.auth.context_processors.auth',
                      'django.template.context_processors.debug',
                      'django.template.context_processors.i18n',
                      'django.template.context_processors.media',
                      'django.template.context_processors.request',
                      'django.template.context_processors.static',
                      'django.template.context_processors.tz',
                      'django.contrib.messages.context_processors.messages',
                      "sekizai.context_processors.sekizai",
                  ],
              },
          },
      ]

      WSGI_APPLICATION = 'wiki_example_com.wsgi.application'

      # Database
      # https://docs.djangoproject.com/en/2.2/ref/settings/#databases

      DATABASES = {
          'default': {
              'ENGINE': 'django.db.backends.mysql',
              'NAME': 'wiki',
              'USER': 'wiki',
              'PASSWORD': 'wiki',
              'HOST': '127.0.0.1',
              'PORT': '3306',
          }
      }

      # Password validation
      # https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators

      AUTH_PASSWORD_VALIDATORS = [
          {
              'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
          },
          {
              'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
          },
          {
              'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
          },
          {
              'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
          },
      ]

      # Internationalization
      # https://docs.djangoproject.com/en/2.2/topics/i18n/

      LANGUAGE_CODE = 'en-us'

      TIME_ZONE = 'Europe/Stockholm'

      USE_I18N = True

      USE_L10N = True

      USE_TZ = True

      # Static files (CSS, JavaScript, Images)
      # https://docs.djangoproject.com/en/2.2/howto/static-files/

      STATIC_URL = '/static/'
      STATIC_ROOT = '/var/www/wiki.example.com/htdocs/static'

      SITE_ID = 1

      LOGIN_REDIRECT_URL = reverse_lazy('wiki:get', kwargs={'path': ''})

   :file:`~/mysite/roles/wiki/files/urls.py`
   ::

      from django.contrib import admin
      from django.urls import path, include

      urlpatterns = [
          path('admin/', admin.site.urls),
          path('notifications/', include('django_nyt.urls')),
          path('', include('wiki.urls'))
      ]
   :file:`~/mysite/roles/wiki/files/create_superadmin.py`
   ::

      #!/usr/bin/env python

      import os

      from django import setup

      os.environ['DJANGO_SETTINGS_MODULE'] = 'wiki_example_com.settings'
      setup()

      from django.conf import settings
      from django.contrib.auth.models import User

      if len(User.objects.filter(username="admin")) == 0:
          User.objects.create_superuser('admin', 'john.doe@example.com', 'admin')
          print("Created superuser.")

6. Time to add the new role to our web server.

   :file:`~/mysite/playbooks/web.yml`
   ::

      ---

      - hosts: web
        remote_user: ansible
        become: yes
        roles:
          - common
          - ldap_client
          - mail_forwarder
          - web_server
          - database_server
          - tbg
          - wiki

7. Apply the changes::

     workon mysite && ansible-playbook playbooks/site.yml

8. At this point Django Wiki has been installed, and you should be able to
   open the URL https://wiki.example.com/ and log in to *Django Wiki* with
   username ``admin`` and password ``admin``.

Backups, backups, backups!
--------------------------

As is well known, everyone has backups of their important data. Right?
Riiiiight?

There are three Ansible roles that implement backup functionality -
``backup_server``, ``backup_client``, and ``backup``.

Backup is based around the use of `Duplicity `_ and its convenience wrapper,
`Duply `_. Due to this selection, it should be noted that the backup clients
are the ones making the connection to the backup server (not the other way
around).

Backups are encrypted and signed using GnuPG before being stored on the backup
server. The private key used for encryption and signing is therefore stored on
the client side. This key should not be encrypted, in order to allow for
unattended backups.

Although not necessary, it is highly recommended to have separation between
different backup clients and the keys used for encryption and signing, i.e.
stick to a separate encryption/signing key for each backup client.

It should be noted that it is also possible to specify additional *public*
keys to encrypt with. This lets you have a backup decryptable with some other,
"master" key.

The backups are transferred to the backup server via SFTP - the
``backup_server`` role sets up a dedicated OpenSSH server instance that limits
the connecting clients to an SFTP chroot.

All backups are stored within the directory ``/srv/backups`` (on the backup
server). Within this directory, every client server has a dedicated
sub-directory, and within this sub-directory another sub-directory called
``duplicity``, where the actual *Duplicity* backups are stored. So, for
example, the directory where backups for ``www.example.com`` are stored would
be ``/srv/backups/www.example.com/duplicity``.

When backups are configured, they are set up to run every morning at 02:00.
Before the backup run it is possible to run a preparation task, and a lot of
roles do this in order to create database dumps etc.

Setting-up the backup server
----------------------------

With the overview of backups out of the way, it is time to set-up the backup
server itself. This is a fairly simple task to perform, so let's get straight
to it:

1. Update the playbook for the backup server to include the backup server
   role.

   :file:`~/mysite/playbooks/backup.yml`
   ::

      ---

      - hosts: backup
        remote_user: ansible
        become: yes
        roles:
          - common
          - mail_forwarder
          - backup_server
2. There is just one mandatory parameter for the role - the OpenSSH server
   keys to be used for the backup-dedicated instance:

   :file:`~/mysite/group_vars/backup.yml`
   ::

      ---

      backup_host_ssh_private_keys:
        rsa: "{{ lookup('file', inventory_dir + '/ssh/bak_rsa_key') }}"
        ed25519: "{{ lookup('file', inventory_dir + '/ssh/bak_ed25519_key') }}"
        ecdsa: "{{ lookup('file', inventory_dir + '/ssh/bak_ecdsa_key') }}"

3. Since we have decided to specify the keys above through file lookups, the
   above-listed keys now need to be generated::

     ssh-keygen -f ~/mysite/ssh/bak_rsa_key -N '' -t rsa
     ssh-keygen -f ~/mysite/ssh/bak_ed25519_key -N '' -t ed25519
     ssh-keygen -f ~/mysite/ssh/bak_ecdsa_key -N '' -t ecdsa

Adding backup clients
---------------------

Well, that was all nice and dandy, but it does look like something is
missing... Ah, we haven't really configured any backup clients, right?

Surprisingly enough, specifying backup clients is optional, but that won't get
you far. Luckily for you, all relevant *Majic Ansible Roles* are
*backup-aware*. In other words, all the roles have been implemented with
support for doing back-ups - it is just that by default this functionality is
disabled (since you might be relying on some other schema to back things up -
LVM snapshots or what-not).

All that is needed is to enable the backups for each role, and to provide some
extra variables required by the ``backup_client`` role. For this a set of
GnuPG private keys is necessary. These need to be provided as ASCII-armoured
GnuPG-exported files. For simplicity's sake, this example documents the use of
a GnuPG keyring in conjunction with Ansible's ``pipe`` lookup.

So, back to business:

1. Update the backup server configuration - each client needs to be explicitly
   registered:

   :file:`~/mysite/group_vars/backup.yml`
   ::

      backup_clients:
        - server: comms.example.com
          public_key: "{{ lookup('file', inventory_dir + '/ssh/comms.example.com.pub') }}"
          ip: 10.32.64.19
        - server: www.example.com
          public_key: "{{ lookup('file', inventory_dir + '/ssh/www.example.com.pub') }}"
          ip: 10.32.64.20
        # Ah, this is a bit interesting - we can back-up the backup server
        # itself! Don't worry, though, this is mainly so the logs and home
        # directories are preserved, so it won't take too much space ;)
        - server: bak.example.com
          public_key: "{{ lookup('file', inventory_dir + '/ssh/bak.example.com.pub') }}"
          ip: 127.0.0.1

2. And now to configure backup clients for all servers:

   .. warning::
      By default Ansible's file lookup plugin will strip newlines and spaces
      from the end of the file. This is a problem when deploying the RSA SSH
      keys, since if there is no newline after the
      ``-----END OPENSSH PRIVATE KEY-----`` delimiter, the SSH client will
      report an error about the format of the key file being invalid.
      Therefore the example below explicitly disables stripping the newline
      from the end of the file.

   :file:`~/mysite/group_vars/all.yml`
   ::

      enable_backup: yes
      backup_encryption_key: "{{ lookup('pipe', 'gpg --homedir ~/mysite/gnupg/ --armour --export-secret-keys ' + ansible_fqdn ) }}"
      backup_server: bak.example.com
      backup_server_host_ssh_public_keys:
        - "{{ lookup('file', inventory_dir + '/ssh/bak_rsa_key.pub') }}"
        - "{{ lookup('file', inventory_dir + '/ssh/bak_ed25519_key.pub') }}"
        - "{{ lookup('file', inventory_dir + '/ssh/bak_ecdsa_key.pub') }}"
      backup_ssh_key: "{{ lookup('file', inventory_dir + '/ssh/' + ansible_fqdn, rstrip=False) }}"
3. So, looking at the configuration up there, there are a couple of file
   lookups for getting the variable values, as well as one pipe lookup for
   fetching the encryption keys. For a start, let's create the SSH private
   keys used for client log-ins to the backup server::

     ssh-keygen -f ~/mysite/ssh/comms.example.com -N ''
     ssh-keygen -f ~/mysite/ssh/www.example.com -N ''
     ssh-keygen -f ~/mysite/ssh/bak.example.com -N ''

4. Next off, a GnuPG keyring needs to be populated with the private keys that
   will be used for backup encryption and signing. In total, we need three
   keys, one for each server. The keys should not be encrypted, and they
   should be named after the respective server's FQDN. For simplicity's sake,
   here is a nice copy-pastable batch version for doing so:

   .. note::
      Key generation requires a lot of entropy. If you are running this
      command on a VM, you may want to ``apt-get install haveged`` to speed
      this up. Please do read up on haveged if deploying to a real system,
      though! Don't trust it blindly!

   ::

      chmod 700 ~/mysite/gnupg
      pkill gpg-agent
      gpg --homedir ~/mysite/gnupg --batch --generate-key << EOF
      Key-Type:RSA
      Key-Length:1024
      Name-Real:comms.example.com
      Expire-Date:0
      %no-protection
      %commit
      Key-Type:RSA
      Key-Length:1024
      Name-Real:www.example.com
      Expire-Date:0
      %no-protection
      %commit
      Key-Type:RSA
      Key-Length:1024
      Name-Real:bak.example.com
      Expire-Date:0
      %no-protection
      %commit
      EOF

5. And... Apply::

     workon mysite && ansible-playbook playbooks/site.yml

6. At this point the backup roles have been set-up everywhere, and the backups
   will be running every day at 02:00 in the morning. Of course, you may want
   to test out a backup yourself immediately by running the following command
   on the servers::

     duply main backup

   .. note::
      For more information on available commands and how to work with the
      backup tool, please have a look at the `Official Duply Pages `_.

Adding backup support to custom roles
-------------------------------------

As mentioned before, all of the supplied roles coming with *Majic Ansible
Roles* include backup support. What gets backed-up depends on the role
implementation (see the role reference for details).

What about backup support for custom roles? This is something that has to be
done by hand. However, it is quite simple to do so. Backup integration will be
demonstrated with the previously implemented ``tbg`` role.

*The Bug Genie* stores most of its data in the database, but thanks to the
``database`` role its backup is already handled for us. As a side-note, just
before every backup run the database is dumped and stored in location
``/srv/backup/tbg.sql``. That file is subsequently backed-up via the *Duply*
run.

What is not backed-up for us, though, are the files uploaded to *The Bug
Genie*. So let's fix that one.

1. Add the ``backup`` role to the list of dependencies. Take note that while
   the ``backup_client`` role deals with basic set-up of the backup client and
   its configuration, the ``backup`` role is used to define what should be
   backed-up. It is important to define a unique filename for the backup
   patterns file. Take into account that you can use pretty much any globbing
   pattern supported by Duplicity.

   .. warning::
      Make sure the addition is properly aligned in the YAML file to the
      previous role dependency definitions.

   :file:`~/mysite/roles/tbg/meta/main.yml`

   .. Small workaround for Sphinx not preserving leading spaces in case all
      lines have the same amount of leading spaces.

   .. code-block:: none
      :name: sphinx_workaround

        - role: backup
          when: enable_backup
          backup_patterns_filename: "tbg"
          backup_patterns:
            - "/var/www/tbg.example.com/files"

2. Apply the changes::

     workon mysite && ansible-playbook playbooks/site.yml

3. Now rerun the backup on server ``www.example.com`` (as root). If you
   haven't uploaded any files, you may want to do so before testing, to make
   sure something is backed-up. This will require enabling file uploads in
   `The Bug Genie settings `_, creating a test project, and then adding a new
   project release (via the project's release center). While creating a new
   project release, it is possible to upload a release file.

   ::

      duply main backup

4. Verify that the files have been backed-up::

     duply main list

.. note::
   If you wanted to run a script prior to the backup run, you would simply
   deploy a shell script with the desired content to ``/etc/duply/main/pre.d/``.
   Just make sure the permissions for it are ok (it has to be executable by the
   root user).
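For example, a custom role could drop such a preparation script in place with
a task along these lines. The script name, its content, and the exact mode are
purely illustrative here - the only firm requirements are the location and
that root can execute it::

   - name: Deploy pre-backup preparation script
     copy:
       content: |
         #!/bin/bash
         # Placeholder preparation step - replace with whatever needs to
         # happen just before the backup run (dumps, exports, etc.).
         echo "Backup preparation run at $(date)" > /var/log/backup-preparation.log
       dest: "/etc/duply/main/pre.d/my_preparation_script"
       owner: root
       group: root
       mode: 0700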
Dealing with failures
---------------------

While the roles have been designed to be fairly robust, it should be taken
into account that certain handlers are used to bring the system into a
consistent state. These handlers are mostly the ones dealing with service
restarts, but there are also a couple of handlers that take care of
transforming certain data into the required formats, import of files, etc.

This means that failure to successfully execute such handlers could result in
an inconsistent state on the server. Think of service configuration files
being updated, yet the service itself not being restarted, and therefore
continuing to run with the old configuration.

Handler execution failure can depend on a couple of things, including the loss
of SSH connectivity to the managed machine, or some kind of unusual time-out
during handler execution.

To help handle this situation, Majic Ansible Roles all come with a special way
to invoke the handlers explicitly. Each role will include handlers as tasks,
provided that a special variable (``run_handlers``) is passed in to the
playbook run. To make the run shorter, the handlers in such a run are also
tagged with ``handlers``. This doubling of an extra variable + tagging stems
from current limitations of Ansible (it is not possible to specify that a
certain task should be run only if a tag is specified, therefore an additional
variable has to be used).

Handlers alone can be invoked with a command similar to::

  ansible-playbook -t handlers -e run_handlers=true playbooks/site.yml

The ``run_handlers`` variable is treated as a boolean, and by default it is
not set.

Checking for available package upgrades
----------------------------------------

One of the more annoying chores when you maintain your own infrastructure is
making sure everything is up-to-date. And this has to be done - both to ensure
a problem-free experience for the users (yourself included), and to make sure
there are no security vulnerabilities that could be exploited by a (random)
adversary.

*Majic Ansible Roles* try to keep you covered on this front as well. As part
of regular deployment, the ``common`` role will deploy and configure
``apticron`` - a nifty little script that runs on an hourly basis and checks
if any of your system-provided packages are outdated.

If ``apticron`` detects an outdated package, it will output this information
to standard output, which will result in the cron daemon sending out an e-mail
to the local root account. These mails can be further directed towards other
mail accounts via aliases (easily achievable if you use either the
``mail_forwarder`` or ``mail_server`` roles). No packages will be upgraded
automatically - ensuring you can make sure upgrades work correctly and do not
cause a major outage without anyone being present to fix them.

Another useful package you may want to look into is ``needrestart`` - it runs
as a hook during the upgrade process to detect any processes that seem to be
running with outdated libraries, allowing you to restart them as well. This
package is *not* installed by the ``common`` role out-of-the-box, but you can
easily install it by updating the ``common_packages`` setting.

In addition to system packages, the ``common`` role makes it easy to check if
any of the pip requirements files are outdated as well. It should be noted,
though, that this check does *not* verify the Python virtual environments
themselves. This is primarily useful when you use `pip-tools `_ for
maintaining the requirements files. In fact, I would encourage you to utilise
``pip-tools`` both for this purpose and for keeping the virtual environments
in sync and up-to-date.

Roles that want to take advantage of this would (a minimal sketch follows the
list below):

- Create a sub-directory under ``/etc/pip_check_requirements_upgrades/`` (for
  Python 2 applications) or ``/etc/pip_check_requirements_upgrades-py3/`` (for
  Python 3 applications).

- Deploy ``.in`` and ``.txt`` files within the sub-directory (see the
  ``pip-tools`` docs for an explanation of how the ``.in`` files work).

- Ensure the created sub-directory and files have ownership set to
  ``root:pipreqcheck``.
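For a custom role, this usually boils down to a couple of tasks along the
following lines. The ``myapp`` directory name, the requirements file names,
and the modes are purely illustrative - the important bits, per the list
above, are the location and the ``root:pipreqcheck`` ownership::

   - name: Create directory for pip requirements upgrade checks
     file:
       path: "/etc/pip_check_requirements_upgrades-py3/myapp"
       state: directory
       owner: root
       group: pipreqcheck
       mode: 0750

   - name: Deploy pip requirements files for upgrade checks
     copy:
       src: "{{ item }}"
       dest: "/etc/pip_check_requirements_upgrades-py3/myapp/{{ item }}"
       owner: root
       group: pipreqcheck
       mode: 0640
     with_items:
       - requirements.in
       - requirements.txt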
.. note::
   If you are using the ``wsgi_website`` role as a dependency, simply set-up
   the ``wsgi_requirements`` parameter, and then deploy the ``.in`` and
   ``.txt`` files into the directory
   ``/etc/pip_check_requirements_upgrades/FQDN`` (this directory is
   automatically created when ``wsgi_requirements`` is specified).

Where to go next?
-----------------

Well, those were some rather lengthy usage instructions, but hopefully they
are useful. Things you might want to check out next:

* :ref:`rolereference`

* :ref:`testsite`

* Finally, if it tickles your interest, have a look at the role
  implementations themselves.