How to use Puppet and Terraform to create a VPN in Digital Ocean - Infrastructure tutorial part two

This tutorial is a follow-up to my previous tutorial that showed how to build a VPN server using Terraform. In this part I will expand on those ideas by introducing Puppet into the mix, along with a multi-machine Vagrant setup that allows several machines to be tested at once.

Prerequisites

Before continuing, please take a look at my Terraform VPN tutorial and my Puppet with r10k tutorial. I will assume that you have been through both of these and already have the Terraform plans and Puppet repo described in those tutorials.

Expanding Puppet

We are going to dive right in and bring the config for our VPN server into Puppet. After that we will update our Terraform plans so instances get bootstrapped by Puppet.

First off, we need to create a VPN server profile class. There are some OpenVPN modules on Puppet Forge, but none of them quite meet our requirements: luxflux's module wants to generate keys, which we don't want since we are generating keys away from our remote servers, and the other OpenVPN modules seem to be quite limited and somewhat awkward to use. That's OK though, let's write our own custom VPN server profile class.

Create the vpn profile files:

cd ~/example.domain.com-inf/puppet
mkdir -p site/profiles/manifests/vpn
touch site/profiles/manifests/vpn/server.pp

Set the content of the server.pp file like so:

# ~/example.domain.com-inf/puppet/site/profiles/manifests/vpn/server.pp
# Sets up OpenVPN server
# Keys should be copied to server before puppet
class profiles::vpn::server (
    $client_configs
) {
    package { 'openvpn':
        ensure => present
    } ->
    file { '/etc/openvpn/keys':
        ensure => directory,
        owner  => 'root',
        group  => 'root',
        mode   => '0400',
    } ->
    file { '/etc/openvpn/client-configs':
        ensure => directory,
        owner  => 'root',
        group  => 'root',
        mode   => '0666',
    } ->
    file { "/etc/openvpn/${::fqdn}.conf":
        ensure  => present,
        owner   => 'root',
        group   => 'root',
        mode    => '0600',
        content => template('profiles/vpn/server.conf.erb'),
    } ~>
    service { 'openvpn':
        ensure    => running,
        name      => "openvpn@${::fqdn}",
        hasstatus => true,
        enable    => true,
    }

    create_resources(profiles::vpn::client_config, $client_configs)

    # Accept all via vpn
    firewall { '200 accept input via VPN':
        chain   => 'INPUT',
        iniface => 'tun+',
        action  => 'accept',
        proto   => 'all',
    }

    firewall { '201 accept input via VPN':
        chain   => 'FORWARD',
        iniface => 'tun+',
        action  => 'accept',
        proto   => 'all',
    }

    # Allow vpn clients to connect
    firewall { '203 VPN server allow client connections via 1194':
        port   => '1194',
        proto  => 'udp',
        action => 'accept',
    }
}

This profile class installs OpenVPN, sets up a few config files and directories, ensures the VPN service is running and then adds firewall rules to allow OpenVPN clients to connect. The chaining arrows between the resources (-> and ~>) enforce their ordering, and ~> additionally restarts the service whenever the config file changes.

You may notice that we need to create a few things in order to complete this class.

  • The class has a parameter, $client_configs, that we need to provide.
  • We have introduced a defined type, profiles::vpn::client_config; we are using Puppet's create_resources function to declare all of these in one go.
  • The OpenVPN config file is generated from a template file ('profiles/vpn/server.conf.erb').

The $client_configs parameter is a simple hash of client FQDNs with associated IPs for the client_config resources. Rather than pass this in when including the class, we are going to store this setting in common.yaml. Puppet has a built-in feature called data binding: if a parameter is omitted when including a class, Puppet will look the setting up in hiera using the class's namespace. This means we can store all of our class parameters in hiera, making them easy to change since they live in a central location.
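
To make this concrete, here's a minimal sketch of the difference data binding makes when declaring the class:

# Without data binding you would need a resource-like declaration:
# class { 'profiles::vpn::server':
#     client_configs => { ... },
# }
# With profiles::vpn::server::client_configs set in hiera, a plain
# include is enough and Puppet looks the value up itself:
include profiles::vpn::server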

Let's add this setting into common.yaml; note that we use the vpn server class's full namespace so Puppet can find it:

# ~/example.domain.com-inf/puppet/hiera/common.yaml
---
profiles::vpn::server::client_configs:
  dummy.example.domain.com:
    ips:  '10.8.0.100 10.8.0.101'
  remote-machine-dreed:
    ips:  '10.8.0.104 10.8.0.105'

As you can see, we are defining each client by its FQDN and adding an ips field for each one; set up as many clients as you need here and assign IPs as required.
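
The create_resources call in the server class effectively declares one profiles::vpn::client_config resource per entry of this hash, roughly equivalent to writing:

profiles::vpn::client_config { 'dummy.example.domain.com':
    ips => '10.8.0.100 10.8.0.101',
}
profiles::vpn::client_config { 'remote-machine-dreed':
    ips => '10.8.0.104 10.8.0.105',
}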

Let's create the client_config class:

cd ~/example.domain.com-inf/puppet
touch site/profiles/manifests/vpn/client_config.pp

Set the contents like so:

# ~/example.domain.com-inf/puppet/site/profiles/manifests/vpn/client_config.pp
# Creates client config files on OpenVPN server
define profiles::vpn::client_config (
    $ips
) {
    file { "/etc/openvpn/client-configs/${name}":
        ensure  => present,
        owner   => 'root',
        group   => 'root',
        mode    => '0666',
        content => template('profiles/vpn/client-config.erb'),
        require => [
            Package['openvpn'],
            File['/etc/openvpn/client-configs']
        ],
        notify  => Service['openvpn']
    }
}

The defined type takes the client's ips setting, which is passed in from the hiera hash by the call to create_resources. The file resource creates the client config from a template. Let's create a templates directory in our profiles module, along with a subdirectory for our vpn profile class, before finally creating the client config template:

cd ~/example.domain.com-inf/puppet
mkdir -p site/profiles/templates/vpn
touch site/profiles/templates/vpn/client-config.erb

Set the contents like so; here we are simply outputting the value of the ips variable:

# ~/example.domain.com-inf/puppet/site/profiles/templates/vpn/client-config.erb
ifconfig-push <%= @ips %>
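
For example, with the common.yaml settings above, the rendered file /etc/openvpn/client-configs/dummy.example.domain.com should end up containing just:

ifconfig-push 10.8.0.100 10.8.0.101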

Finally we just need to create the server config template like so:

cd ~/example.domain.com-inf/puppet
touch site/profiles/templates/vpn/server.conf.erb

And the contents:

# ~/example.domain.com-inf/puppet/site/profiles/templates/vpn/server.conf.erb
mode server
ca keys/ca.crt
cert keys/<%= @fqdn %>.crt
key keys/<%= @fqdn %>.key
dh keys/dh2048.pem
ifconfig-pool-persist ipp.txt
client-config-dir client-configs
proto udp
port 1194
comp-lzo
group nogroup
user nobody
status status.log
dev tun0
server 10.8.0.0 255.255.0.0
keepalive 1 5
topology net30
client-to-client
persist-tun
persist-key
push "persist-key"
push "persist-tun"

That's our vpn profile class completed! Now let's finish up by creating a vpn role and a hiera yaml file for our vpn server instance:

cd ~/example.domain.com-inf/puppet
touch site/roles/manifests/vpn.pp

Contents of the vpn role:

# ~/example.domain.com-inf/puppet/site/roles/manifests/vpn.pp
# OpenVPN server role
class roles::vpn {
    include profiles::common
    include profiles::vpn::server
}

Hiera file for the VPN server:

cd ~/example.domain.com-inf/puppet
touch hiera/nodes/vpn.example.domain.com.yaml

Assign the vpn role like so:

# ~/example.domain.com-inf/puppet/hiera/nodes/vpn.example.domain.com.yaml
---
environment: production
classes:
  - roles::vpn

We are nearly done with our vpn server role! There is just one more thing to consider: the pre.pp firewall class that we created in the Puppet r10k tutorial (~/example.domain.com-inf/puppet/site/profiles/manifests/firewall/pre.pp). In that class we allow ssh access from anywhere; however, as stated in the Terraform OpenVPN tutorial, we only want to allow ssh access over the vpn, so we need to update the pre.pp class to reflect this. But! There is one more thing to consider (isn't there always?): we are going to test our Puppet repo in Vagrant, and in order to keep the vagrant ssh command working we need to allow ssh access over the public interface when running in Vagrant. To achieve this we will use a fact, set from the Vagrantfile, that denotes whether or not we are running in Vagrant. Update the pre.pp class like so:

# ~/example.domain.com-inf/puppet/site/profiles/manifests/firewall/pre.pp
# First off, basic firewall rules
class profiles::firewall::pre {
    Firewall {
      require => undef,
    }

    # Default firewall rules
    firewall { '000 accept all icmp':
        proto  => 'icmp',
        action => 'accept',
    }

    firewall { '001 accept all to lo interface':
        proto   => 'all',
        iniface => 'lo',
        action  => 'accept',
    }

    firewall { '002 reject local traffic not on loopback interface':
        iniface     => '! lo',
        proto       => 'all',
        destination => '127.0.0.1/8',
        action      => 'reject',
    }

    firewall { '003 accept related established rules':
        proto  => 'all',
        state  => ['RELATED', 'ESTABLISHED'],
        action => 'accept',
    }

    if $::vagrant == 1 {
        # Allow standard ssh if in Vagrant mode
        firewall { '004 ssh 22':
            port   => '22',
            proto  => 'tcp',
            action => 'accept',
        }
    }
}
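
The $::vagrant fact checked above is just an external fact: the bootstrap script we create later in this tutorial drops a small YAML file into Facter's facts.d directory whenever the VAGRANT environment variable is set, roughly like this:

# /etc/facter/facts.d/vagrant.yaml
---
vagrant: 1

You can confirm a box has picked it up by running facter vagrant on it.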

Don't forget about the clients

Before finishing up, let's also add config for our dummy instance into our Puppet repo. This will involve adding a vpn client profile class; we will reuse the default role that we created for the dummy instance in the previous tutorial. Once we've got the new parts in place we are going to use a cool feature of Vagrant to test our Puppet repo: Vagrant can spin up a network of VMs, so we can test both our vpn server and dummy instances at the same time.

Let's create a vpn client profile class:

cd ~/example.domain.com-inf/puppet
touch site/profiles/manifests/vpn/client.pp

Set it to look like this:

# ~/example.domain.com-inf/puppet/site/profiles/manifests/vpn/client.pp
# OpenVPN client setup
# Keys should be copied to server before puppet runs
# Requires that dnsmasq be installed and running
class profiles::vpn::client (
    $remote
) {
    package { 'openvpn':
        ensure  => present,
        require => [
            Service['dnsmasq'],
        ],
    } ->
    file { '/etc/openvpn/keys':
        ensure => directory,
        owner  => 'root',
        group  => 'root',
        mode   => '0400',
    } ->
    file { "/etc/openvpn/${::fqdn}.conf":
        ensure  => present,
        owner   => 'root',
        group   => 'root',
        mode    => '0600',
        content => template('profiles/vpn/client.conf.erb'),
    } ~>
    service { 'openvpn':
        ensure    => running,
        name      => "openvpn@${::fqdn}",
        hasstatus => true,
        enable    => true,
        subscribe => [
            Service['dnsmasq'],
        ],
    }

    # Accept all via vpn
    firewall { '200 accept input via VPN':
        chain   => 'INPUT',
        iniface => 'tun+',
        action  => 'accept',
        proto   => 'all',
    }

    firewall { '201 accept input via VPN':
        chain   => 'FORWARD',
        iniface => 'tun+',
        action  => 'accept',
        proto   => 'all',
    }
}

Once again we've introduced a few new things here, most notably a dependency on dnsmasq. We'll build a profile class for dnsmasq once we've got everything else in place for the vpn client profile, so ignore that bit for now; let's get the other pieces in place.

This class has a parameter, $remote, which is the hostname of the vpn server. Let's use data binding again to set this parameter in common.yaml:

# ~/example.domain.com-inf/puppet/hiera/common.yaml
---
profiles::vpn::server::client_configs:
  dummy.example.domain.com:
    ips:  '10.8.0.100 10.8.0.101'
  remote-machine-dreed:
    ips:  '10.8.0.104 10.8.0.105'
profiles::vpn::client::remote: 'vpn.example.domain.com'

In addition to the hiera config we also need a new template (profiles/vpn/client.conf.erb) for our vpn client's config file:

cd ~/example.domain.com-inf/puppet
touch site/profiles/templates/vpn/client.conf.erb

Set the contents like so:

# ~/example.domain.com-inf/puppet/site/profiles/templates/vpn/client.conf.erb
client
ca keys/ca.crt
cert keys/<%= @fqdn %>.crt
key keys/<%= @fqdn %>.key
dev tun
proto udp
remote <%= @remote %> 1194
comp-lzo
resolv-retry infinite
auth-retry none
nobind
persist-key
persist-tun
mute-replay-warnings
ns-cert-type server
verb 3
mute 20

That's the vpn client profile finished. One key thing to note here is that none of this class kicks off without dnsmasq; this is so the client can actually resolve and contact the vpn server before OpenVPN starts. Remember that in part one we employed dnsmasq to leverage Digital Ocean's nameservers. So we need to create a profile class for dnsmasq:

cd ~/example.domain.com-inf/puppet
mkdir -p site/profiles/manifests/dns
touch site/profiles/manifests/dns/client.pp

Set the contents of client.pp:

# ~/example.domain.com-inf/puppet/site/profiles/manifests/dns/client.pp
# Configure dnsmasq
class profiles::dns::client {
    package { 'dnsmasq':
        ensure => present,
    } ->
    file { '/etc/dnsmasq.conf':
        ensure  => present,
        owner   => 'root',
        group   => 'root',
        mode    => '0644',
        content => file('profiles/dns/dnsmasq.conf'),
    } ~>
    service { 'dnsmasq':
        ensure     => running,
        hasstatus  => true,
        hasrestart => true,
        enable     => true,
    }
}

Create the dnsmasq.conf file:

cd ~/example.domain.com-inf/puppet
mkdir -p site/profiles/files/dns
touch site/profiles/files/dns/dnsmasq.conf

And the contents like so (taken from the Terraform OpenVPN tutorial):

# ~/example.domain.com-inf/puppet/site/profiles/files/dns/dnsmasq.conf
listen-address=127.0.0.1
# dyn dns servers
server=216.146.35.35
server=216.146.36.36
# do dns servers for externally available do nodes
server=/.example.domain.com/173.245.58.51
server=/.example.domain.com/173.245.59.41
server=/.example.domain.com/198.41.222.173
no-resolv
bind-interfaces
conf-dir=/etc/dnsmasq.d/
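
Once dnsmasq is installed and running you can sanity check that it forwards lookups for the vpn server's hostname, for example (assuming dig is available on the box):

dig +short vpn.example.domain.com @127.0.0.1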

That finishes off the vpn and associated dns client profile classes. We just need to add these profiles to the default.pp role that we created earlier, like so:

# ~/example.domain.com-inf/puppet/site/roles/manifests/default.pp
# default server role
class roles::default {
    include profiles::common
    include profiles::dns::client
    include profiles::vpn::client
}

Remember that we created a default.role.com hiera yaml file in the Puppet r10k tutorial? Let's repurpose that yaml file and match the FQDN to that of the dummy vpn client instance that we created in the OpenVPN tutorial. Rename the file, check your changes into git and update your Puppet control repo via r10k:

cd ~/example.domain.com-inf/puppet
mv hiera/nodes/default.role.com.yaml hiera/nodes/dummy.example.domain.com.yaml
# check your changes in then...
cd ~/example.domain.com-inf/puppet-ctrl
bin/r10k deploy environment -p -v

Our Puppet repo is finished for now; it's capable of provisioning a vpn server and a vpn client. Next we are going to test the repo in Vagrant and then update our Terraform plans so they use Puppet.

Multi-machine Vagranting

Now let's test all of this in Vagrant just to be sure. First check your changes in and update your control repo (bin/r10k deploy environment -p -v).

We can use Vagrant to spin up multiple instances using just one Vagrantfile, which will allow us to test both the vpn server and the dummy client instance in one fell swoop. Before we can test everything we need to solve one issue: by default Vagrant boxes cannot see each other via their hostnames. Luckily there is a handy plugin that solves this for us, so the boxes will come up and our dummy instance will be able to connect to the vpn server via its hostname, just as if we were using Digital Ocean's nameservers.

Install the vagrant-hosts plugin, which will allow boxes to see each other via their hostnames:

sudo vagrant plugin install vagrant-hosts

Create a Vagrantfile within your Terraform plans folder. You should already have a symlink to your openvpn directory from part one; we will need to create a symlink to your puppet-ctrl directory however:

cd ~/example.domain.com-inf/terraform
ln -s ~/example.domain.com-inf/puppet-ctrl/ files/puppet
touch Vagrantfile

Set the contents of the Vagrantfile like so:

# ~/example.domain.com-inf/terraform/Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "debian/jessie64"

  # Seems that symlinked folders have to be shared individually
  config.vm.synced_folder "files/puppet", "/root/files/puppet"
  config.vm.synced_folder "files/openvpn", "/root/files/openvpn"
  config.vm.synced_folder "files/shell", "/root/files/shell"

  config.vm.provision "shell",
    inline: "
      export VAGRANT=1;
      /bin/bash /root/files/shell/bootstrap.sh
    "

  config.vm.define "vpn" do |vpn|
    vpn.vm.hostname = "vpn.example.domain.com"
    vpn.vm.network :private_network, ip: "192.168.5.10"
    vpn.vm.network "forwarded_port", guest: 1194, host: 1194
    vpn.vm.provision :hosts
  end

  config.vm.define "dummy" do |dummy|
    dummy.vm.hostname = "dummy.example.domain.com"
    dummy.vm.network :private_network, ip: "192.168.5.20"
    dummy.vm.provision :hosts
  end
end

The bootstrap.sh file is a script that will be used by both our Terraform plans and our Vagrant boxes for provisioning, so that the Vagrant process mimics the Terraform process more closely. Go ahead and create the file:

cd ~/example.domain.com-inf/terraform
mkdir -p files/shell
touch files/shell/bootstrap.sh

And the contents of the file:

# ~/example.domain.com-inf/terraform/files/shell/bootstrap.sh
#!/bin/bash

FQDN=`hostname --fqdn`

# Vagrant fact for Puppet if needed
if [ "$VAGRANT" == '1' ]; then
    mkdir -p /etc/facter/facts.d
    echo -e '---\nvagrant:  1' > /etc/facter/facts.d/vagrant.yaml
fi

apt-get update
apt-get -y upgrade

# Setup OpenVPN keys
mkdir -p /etc/openvpn/keys
cp /root/files/openvpn/key-store/ca.crt /etc/openvpn/keys/
cp /root/files/openvpn/key-store/$FQDN.crt /etc/openvpn/keys/
cp /root/files/openvpn/key-store/$FQDN.key /etc/openvpn/keys/

# Only include diffie hellman on VPN server
if [ $(hostname --fqdn | cut -f1 -d.) == 'vpn' ]; then
    cp /root/files/openvpn/key-store/dh2048.pem /etc/openvpn/keys/
fi

# Bootstrap puppet
apt-get --force-yes -y install puppet
cp -R /root/files/puppet/* /etc/puppet/
puppet apply /etc/puppet/environments/production/manifests/site.pp --confdir=/etc/puppet/ --environment=production --environmentpath=/etc/puppet/environments/

if [ "$VAGRANT" != '1' ]; then
    rm -rf /root/files
fi

Back to the Vagrantfile. In it we describe two machines, vpn and dummy, both with appropriate hostnames so they will be assigned classes by our Puppet repo. We have to assign static IPs to the boxes to make the vagrant-hosts plugin work; feel free to change the ones above if they conflict with anything on your system. The 'provision :hosts' line is the part that actually does the hostname magic. Each box inherits the top-level config, so both machines include the shared folder and shell provisioner config from the top of the file. One more thing of note is that we have to expose port 1194 on the vpn server machine, otherwise the dummy machine wouldn't be able to connect via OpenVPN.
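
For illustration, the hosts provisioner simply writes entries for the other boxes into each machine's /etc/hosts, so on the dummy box you should end up with something along the lines of:

192.168.5.10 vpn.example.domain.com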

When you have a multi-machine Vagrantfile, most commands (such as vagrant up and vagrant destroy) apply to all machines in the Vagrantfile; you can target a particular machine by appending the machine's name (eg vagrant up vpn). Some commands, such as vagrant ssh, require a machine name when used with a multi-machine Vagrantfile.
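
For example:

vagrant up              # bring up both the vpn and dummy machines
vagrant up vpn          # bring up just the vpn machine
vagrant ssh dummy       # ssh requires a machine name here
vagrant provision vpn   # re-run provisioning on the vpn machine only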

Bring up both machines via vagrant up; they should hopefully come up without error. If you ssh into the dummy machine (vagrant ssh dummy) you should see that it is indeed connected to the vpn, and if you ssh into the vpn box and check the OpenVPN status file (/etc/openvpn/status.log) you should see that it lists the dummy instance as connected.
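
As a rough sketch of how to verify both sides (assuming the tunnel interface comes up as tun0 and the server takes 10.8.0.1):

# on the host
vagrant ssh dummy
# then, inside the dummy box
ip addr show tun0                  # should have a 10.8.0.x address
ping -c 3 10.8.0.1                 # the vpn server's tunnel address

# back on the host
vagrant ssh vpn
# then, inside the vpn box
sudo cat /etc/openvpn/status.log   # should list dummy.example.domain.com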

Once you've finished playing with the two machines you can run the destroy command like so to destroy both boxes without being prompted:

vagrant destroy -f

Terraform it

OK, so we have our Puppet repo tested and ready to go. Let's update the Terraform plans for the VPN server and dummy instance so they use the bootstrap script that we just created.

Update your vpn instance's plan like so:

# ~/example.domain.com-inf/terraform/vpn.example.domain.com.tf
resource "digitalocean_droplet" "vpn-example-domain-com" {
  image = "debian-8-x64"
  name = "vpn.example.domain.com"
  region = "ams3"
  size = "512mb"
  private_networking = false
  ipv6 = false
  ssh_keys = [
    "${var.ssh_fingerprint}"
  ]

  connection {
      user = "root"
      type = "ssh"
      key_file = "${var.pvt_key}"
      timeout = "2m"
  }

  provisioner "file" {
    source = "files"
    destination = "/root"
  }

  # Bootstrap
  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/bin",
      "/bin/bash /root/files/shell/bootstrap.sh",
      "rm -rf /root/files"
    ]
  }
}

resource "digitalocean_domain" "default" {
   name = "vpn.example.domain.com"
   ip_address = "${digitalocean_droplet.vpn-example-domain-com.ipv4_address}"
}

Here's the updated dummy instance plan:

# ~/example.domain.com-inf/terraform/dummy.example.domain.com.tf
resource "digitalocean_droplet" "dummy-example-domain-com" {
  depends_on = ["digitalocean_droplet.vpn-example-domain-com"]
  image = "debian-8-x64"
  name = "dummy.example.domain.com"
  region = "ams3"
  size = "512mb"
  private_networking = false
  ipv6 = false
  ssh_keys = [
    "${var.ssh_fingerprint}"
  ]

  connection {
      user = "root"
      type = "ssh"
      key_file = "${var.pvt_key}"
      timeout = "2m"
  }

  provisioner "file" {
    source = "files"
    destination = "/root"
  }

  # Bootstrap
  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/bin",
      "/bin/bash /root/files/shell/bootstrap.sh",
      "rm -rf /root/files"
    ]
  }
}

Go ahead and destroy the instances from part one and bring them back up with the new bootstrapping; here are the Terraform commands in case you've forgotten:

terraform destroy -var "do_token=[DO TOKEN]" -var "pub_key=$HOME/.ssh/[KEY NAME].pub" -var "pvt_key=$HOME/.ssh/[KEY NAME].pem" -var "ssh_fingerprint=[SSH FINGERPRINT]"
terraform apply -var "do_token=[DO TOKEN]" -var "pub_key=$HOME/.ssh/[KEY NAME].pub" -var "pvt_key=$HOME/.ssh/[KEY NAME].pem" -var "ssh_fingerprint=[SSH FINGERPRINT]"

Update your local OpenVPN config and connect to the VPN, you should be able to SSH into both machines and check that everything works as expected.
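
The main thing to check in your local client config from part one is the remote directive, which should point at the vpn droplet:

remote vpn.example.domain.com 1194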

That's it, we are done! You can see my example Puppet repo here and my example Terraform config here.