
puppet-keepalived's Introduction

keepalived


Table of Contents

  1. Description
  2. Usage - Configuration options and additional functionality
  3. Limitations - OS compatibility, etc.
  4. Development - Guide for contributing to the module

Description

This Puppet module manages keepalived. The main goal of keepalived is to provide simple and robust facilities for load balancing and high availability to Linux systems and Linux-based infrastructures.

Usage

Basic IP-based VRRP failover

This configuration will fail over when:

  1. The master node is unavailable

node /node01/ {
  include keepalived

  keepalived::vrrp::instance { 'VI_50':
    interface         => 'eth1',
    state             => 'MASTER',
    virtual_router_id => '50',
    priority          => '101',
    auth_type         => 'PASS',
    auth_pass         => 'secret',
    virtual_ipaddress => [ '10.0.0.1/29' ],
    track_interface   => ['eth1','tun0'], # optional, monitor these interfaces.
  }
}

node /node02/ {
  include keepalived

  keepalived::vrrp::instance { 'VI_50':
    interface         => 'eth1',
    state             => 'BACKUP',
    virtual_router_id => '50',
    priority          => '100',
    auth_type         => 'PASS',
    auth_pass         => 'secret',
    virtual_ipaddress => [ '10.0.0.1/29' ],
    track_interface   => ['eth1','tun0'], # optional, monitor these interfaces.
  }
}

or hiera:

---
keepalived::vrrp_instance:
  VI_50:
    interface: 'eth1'
    state: 'MASTER'
    virtual_router_id: 50
    priority: 101
    auth_type: 'PASS'
    auth_pass: 'secret'
    virtual_ipaddress: '10.0.0.1/29'
    track_interface:
      - 'eth1'
      - 'tun0'

Add floating routes

node /node01/ {
  include keepalived

  keepalived::vrrp::instance { 'VI_50':
    interface         => 'eth1',
    state             => 'MASTER',
    virtual_router_id => '50',
    priority          => '101',
    auth_type         => 'PASS',
    auth_pass         => 'secret',
    virtual_ipaddress => [ '10.0.0.1/29' ],
    virtual_routes    => [ { to   => '168.168.2.0/24', via => '10.0.0.2' },
                           { to   => '168.168.3.0/24', via => '10.0.0.3' } ],
    virtual_rules     => [ { from => '168.168.2.42', lookup => 'customroute' } ]
  }
}

hiera:

---
keepalived::vrrp_instance:
  VI_50:
    interface: 'eth1'
    state: 'MASTER'
    virtual_router_id: 50
    priority: 101
    auth_type: 'PASS'
    auth_pass: 'secret'
    virtual_ipaddress: '10.0.0.1/29'
    virtual_routes:
      - to: '168.168.2.0/24'
        via: '10.0.0.2'
      - to: '168.168.3.0/24'
        via: '10.0.0.3'
    virtual_rules:
      - from: '168.168.2.42'
        lookup: 'customroute'

Detect application level failure

This configuration will fail over when:

  1. The NGINX daemon is not running
  2. The master node is unavailable

node /node01/ {
  include keepalived

  keepalived::vrrp::script { 'check_nginx':
    script => '/usr/bin/killall -0 nginx',
  }

  keepalived::vrrp::instance { 'VI_50':
    interface         => 'eth1',
    state             => 'MASTER',
    virtual_router_id => '50',
    priority          => '101',
    auth_type         => 'PASS',
    auth_pass         => 'secret',
    virtual_ipaddress => '10.0.0.1/29',
    track_script      => 'check_nginx',
  }
}

node /node02/ {
  include keepalived

  keepalived::vrrp::script { 'check_nginx':
    script => '/usr/bin/killall -0 nginx',
  }

  keepalived::vrrp::instance { 'VI_50':
    interface         => 'eth1',
    state             => 'BACKUP',
    virtual_router_id => '50',
    priority          => '100',
    auth_type         => 'PASS',
    auth_pass         => 'secret',
    virtual_ipaddress => '10.0.0.1/29',
    track_script      => 'check_nginx',
  }
}

or hiera:

---
keepalived::vrrp_script:
  check_nginx:
    script: '/usr/bin/killall -0 nginx'

keepalived::vrrp_instance:
  VI_50:
    interface: 'eth1'
    state: 'MASTER'
    virtual_router_id: 50
    priority: 101
    auth_type: 'PASS'
    auth_pass: 'secret'
    virtual_ipaddress: '10.0.0.1/29'
    track_script: check_nginx

or using process tracking (keepalived 2.0.11+):

node /node01/ {
  include keepalived

  keepalived::vrrp::track_process { 'check_nginx':
    proc_name => 'nginx',
    weight    => 10,
    quorum    => 2,
    delay     => 10,
  }

  keepalived::vrrp::instance { 'VI_50':
    interface         => 'eth1',
    state             => 'MASTER',
    virtual_router_id => '50',
    priority          => '101',
    auth_type         => 'PASS',
    auth_pass         => 'secret',
    virtual_ipaddress => '10.0.0.1/29',
    track_process     => 'check_nginx',
  }
}

IPv4 and IPv6 virtual IP, with application level failure detection

This configuration will fail-over both the IPv4 address and the IPv6 address when:

  1. NGINX daemon is not running
  2. Master node is unavailable

It is not possible to configure both IPv4 and IPv6 addresses as virtual_ipaddresses in a single vrrp_instance; the reason is that the VRRP protocol doesn't support it. The two VRRP instances can both use the same virtual_router_id since VRRP IPv4 and IPv6 instances are completely independent of each other. Both nodes have state set to BACKUP, which will prevent them from entering MASTER state until the check script(s) have succeeded and the election has been held.

To ensure that the IPv4 and IPv6 vrrp_instances are always in the same state as each other, configure a vrrp_sync_group that includes both instances. The vrrp_sync_group requires the global_tracking flag to be enabled to prevent keepalived from ignoring the tracking scripts of the vrrp_sync_group's vrrp_instance members.

Configure the vrrp_instance with the native_ipv6 flag to force the instance to use IPv6. An IPv6 vrrp_instance without the "native_ipv6" keyword does not configure the virtual IPv6 address with the "deprecated nodad" options.

Per RFC 3484, "Default Address Selection for Internet Protocol version 6 (IPv6)": configure a /128 mask for the IPv6 address so that keepalived sets preferred_lft to 0, which prevents the virtual IP from being used for outgoing connections.

RFC 5798 section 5.2.9 requires that if the protocol is IPv6, the first address must be the link-local address of the virtual router.

IPv6 VRRP uses VRRP version 3, which does not support authentication, so the auth_type and auth_pass parameters are removed for the IPv6 VRRP instance.

node /node0x/ {
  keepalived::vrrp::script { 'check_nginx':
    script => '/usr/bin/pkill -0 nginx',
  }

  keepalived::vrrp::sync_group { 'VI_50':
    group               => [ 'VI_50_IPV4', 'VI_50_IPV6' ],
    global_tracking     => true,
  }

  keepalived::vrrp::instance { 'VI_50_IPV4':
    interface           => 'eth0',
    state               => 'BACKUP',
    virtual_router_id   => 50,
    priority            => 100,
    auth_type           => 'PASS',
    auth_pass           => 'secret',
    virtual_ipaddress   => '10.0.0.1/32',
    track_script        => 'check_nginx',
  }

  keepalived::vrrp::instance { 'VI_50_IPV6':
    interface           => 'eth0',
    state               => 'BACKUP',
    virtual_router_id   => 50,
    priority            => 100,
    virtual_ipaddress   => ['fe80::50/128', '2001:db8::50/128', ],
    track_script        => 'check_nginx',
    native_ipv6         => true,
  }
}

Global definitions

class { 'keepalived::global_defs':
  notification_email      => '[email protected]',
  notification_email_from => '[email protected]',
  smtp_server             => 'localhost',
  smtp_connect_timeout    => '60',
  router_id               => 'your_router_instance_id',
  bfd_rlimit_rttime       => 10000,
  checker_rlimit_rttime   => 10000,
  vrrp_rlimit_rttime      => 10000,
  bfd_priority            => -20,
  checker_priority        => -20,
  vrrp_priority           => -20,
  bfd_rt_priority         => 50,
  checker_rt_priority     => 50,
  vrrp_rt_priority        => 50,
  bfd_no_swap             => true,
  checker_no_swap         => true,
  vrrp_no_swap            => true,
  vrrp_version            => 3,
  max_auto_priority       => 99,
  vrrp_notify_fifo        => '/run/keepalived.fifo',
  vrrp_notify_fifo_script => 'your_fifo_script_path',
}

Soft-restart the Keepalived daemon

class { 'keepalived':
  service_restart => 'service keepalived reload',     # When using SysV Init
  # service_restart => 'systemctl reload keepalived', # When using SystemD
}

Opt out of having the service managed by the module

class { 'keepalived':
  service_manage => false,
}

Opt out of having the package managed by the module

class { 'keepalived':
  manage_package => false,
}

Include unmanaged keepalived config files

If you need to include a Keepalived config fragment managed by another tool, include_external_conf_files takes an array of config file paths.

Caution: the config files must be readable by the Keepalived daemon.

class { 'keepalived':
  include_external_conf_files => ['/etc/keepalived/unmanaged-config.cfg']
}

Unicast instead of Multicast

Caution: unicast support was only added in Keepalived version 1.2.8

By default Keepalived will use multicast packets to determine failover conditions. However, in many cloud environments it is not possible to use multicast because of network restrictions. Keepalived can be configured to use unicast in such environments:

Enable automatic unicast configuration with exported resources by setting the parameter 'collect_unicast_peers => true'.

Automatic unicast configuration:

  keepalived::vrrp::instance { 'VI_50':
    interface         => 'eth1',
    state             => 'BACKUP',
    virtual_router_id => '50',
    priority          => '100',
    auth_type         => 'PASS',
    auth_pass         => 'secret',
    virtual_ipaddress => '10.0.0.1/29',
    track_script      => 'check_nginx',
    collect_unicast_peers => true,
  }

Manual unicast configuration or override auto default IP:

  keepalived::vrrp::instance { 'VI_50':
    interface         => 'eth1',
    state             => 'BACKUP',
    virtual_router_id => '50',
    priority          => '100',
    auth_type         => 'PASS',
    auth_pass         => 'secret',
    virtual_ipaddress => '10.0.0.1/29',
    track_script      => 'check_nginx',
    unicast_source_ip => $::ipaddress_eth1,
    unicast_peers     => ['10.0.0.1', '10.0.0.2']
  }

The 'unicast_source_ip' parameter is optional, as Keepalived will bind to the specified interface by default. This value is exported in place of the default when 'collect_unicast_peers => true'. The 'unicast_peers' parameter contains an array of IP addresses that correspond to the failover nodes.
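
As with the other instance examples, the unicast parameters can also be expressed in hiera. A minimal sketch, assuming the keys map one-to-one onto the keepalived::vrrp_instance parameters as in the earlier hiera examples:

---
keepalived::vrrp_instance:
  VI_50:
    interface: 'eth1'
    state: 'BACKUP'
    virtual_router_id: 50
    priority: 100
    auth_type: 'PASS'
    auth_pass: 'secret'
    virtual_ipaddress: '10.0.0.1/29'
    unicast_source_ip: '10.0.0.2'
    unicast_peers:
      - '10.0.0.1'
      - '10.0.0.2'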

Creating ip-based virtual server instances with two real servers

This sets up a virtual server www.example.com that directs traffic to example1.example.com and example2.example.com by matching on an IP address and port.

keepalived::lvs::virtual_server { 'www.example.com':
  ip_address          => '1.2.3.4',
  port                => '80',
  delay_loop          => '7',
  lb_algo             => 'wlc',
  lb_kind             => 'DR',
  persistence_timeout => 86400,
  virtualhost         => 'www.example.com',
  protocol            => 'TCP'
}

keepalived::lvs::real_server { 'example1.example.com':
  virtual_server => 'www.example.com',
  ip_address     => '1.2.3.8',
  port           => '80',
  options        => {
    weight      => '1000',
    'TCP_CHECK' => {
       connect_timeout => '3',
    }
  }
}

keepalived::lvs::real_server { 'example2.example.com':
  virtual_server => 'www.example.com',
  ip_address     => '1.2.3.9',
  port           => '80',
  options        => {
    weight      => '1000',
    'TCP_CHECK' => {
       connect_timeout => '3',
    }
  }
}

or hiera:

---
keepalived::lvs_virtual_server:
  www.example.com:
    ip_address: '1.2.3.4'
    port: 80
    delay_loop: 7
    lb_algo: 'wlc'
    lb_kind: 'DR'
    persistence_timeout: 86400
    virtualhost: 'www.example.com'
    protocol: 'TCP'

keepalived::lvs_real_server:
  example1.example.com:
    virtual_server: 'www.example.com'
    ip_address: '1.2.3.8'
    port: 80
    options:
      weight: '1000'
      TCP_CHECK:
        connect_timeout: 3
  example2.example.com:
    virtual_server: 'www.example.com'
    ip_address: '1.2.3.9'
    port: 80
    options:
      weight: '1000'
      TCP_CHECK:
        connect_timeout: 3

Creating firewall mark based virtual server instances with two real servers

This sets up a virtual server www.example.com that directs traffic to example1.example.com and example2.example.com by matching on a firewall mark set in iptables or something similar.

keepalived::lvs::virtual_server { 'www.example.com':
  fwmark              => '123',
  delay_loop          => '7',
  lb_algo             => 'wlc',
  lb_kind             => 'DR',
  persistence_timeout => 86400,
  virtualhost         => 'www.example.com',
  protocol            => 'TCP'
}

keepalived::lvs::real_server { 'example1.example.com':
  virtual_server => 'www.example.com',
  ip_address     => '1.2.3.8',
  port           => '80',
  options        => {
    weight      => '1000',
    'TCP_CHECK' => {
       connect_timeout => '3',
    }
  }
}

keepalived::lvs::real_server { 'example2.example.com':
  virtual_server => 'www.example.com',
  ip_address     => '1.2.3.9',
  port           => '80',
  options        => {
    weight      => '1000',
    'TCP_CHECK' => {
       connect_timeout => '3',
    }
  }
}

Reference

Reference documentation coming soon.

Limitations

Details in metadata.json.

Development

The contributing guide is in CONTRIBUTING.md.

Release Notes/Contributors/Etc.

Details in CHANGELOG.md.

Migrated from https://github.com/arioch/puppet-keepalived to Vox Pupuli.

puppet-keepalived's People

Contributors

aagor, alexjfisher, arioch, bastelfreak, chrislaskey, costela, daaang, dan33l, dcarley, dhoppe, duritong, ekohl, foosinn, frank-f, imp-, jontow, kenyon, mrfreezeex, petems, quixoten, root-expert, saimonn, saz, sigbjorntux, smortex, towo, trefzer, xavier-calland, ymartin-ovh, zilchms

puppet-keepalived's Issues

Needed to add vmac support

Hi, I needed to add vmac support, so I made the following patch. I hope you can integrate this in the future:

diff -ru /etc/puppet/environments/development/modules/keepalived/manifests/vrrp/instance.pp arioch-keepalived-1.2.3/manifests/vrrp/instance.pp
--- /etc/puppet/environments/development/modules/keepalived/manifests/vrrp/instance.pp  2016-02-12 18:17:03.405480812 +0000
+++ arioch-keepalived-1.2.3/manifests/vrrp/instance.pp  2015-11-06 08:42:18.000000000 +0000
@@ -164,8 +164,6 @@
   $unicast_source_ip          = undef,
   $unicast_peers              = undef,
   $dont_track_primary         = false,
-  $use_vmac                   = false,
-  $vmac_xmit_base             = false,

 ) {
   $_name = regsubst($name, '[:\/\n]', '')
diff -ru /etc/puppet/environments/development/modules/keepalived/templates/vrrp_instance.erb arioch-keepalived-1.2.3/templates/vrrp_instance.erb
--- /etc/puppet/environments/development/modules/keepalived/templates/vrrp_instance.erb 2016-02-12 18:18:48.851261492 +0000
+++ arioch-keepalived-1.2.3/templates/vrrp_instance.erb 2015-08-04 14:26:22.000000000 +0000
@@ -5,15 +5,6 @@
   priority                  <%= @priority %>
   advert_int                <%= @advert_int %>
   garp_master_delay         <%= @garp_master_delay %>
-
-  <%- if @use_vmac -%>
-  use_vmac
-  <%- end -%>
-
-  <%- if @vmac_xmit_base -%>
-  vmac_xmit_base
-  <%- end -%>
-
   <%- if @lvs_interface -%>
   lvs_sync_daemon_interface <%= @lvs_interface %>
   <%- end -%>

Cannot add 2 IPs, each on different interfaces

Hi,

I cannot seem to add 2 virtual IP addresses on different NICs. If I add a second vrrp instance, the hiera is rejected.

Required results:

virtual_ipaddress {
    10.205.24.131 dev eth0
    10.205.25.130 dev eth1
}

Is this possible?

Thanks,
Simon

keepalived_version fact not working

Hi, I'm having issues with every agent run warning me on

Could not retrieve fact='keepalived_version', resolution='<anonymous>': undefined method `[]' for nil:NilClass

Can someone please help :)

added http_get check

I did a very simple mod to your module to add http_get health check. Not a complete implementation, but enough for what I needed at this time.

virtual_server.pp: added "$http_get = undef," after tcp_check.
lvs_virtual_server.erb: after TCP_CHECK section added:

  <%- if @http_get -%>
  HTTP_GET {
    url {
      path <%= @http_get['url'] %>
      status_code <%= @http_get['status_code'] %>
    }
  }
  <%- end -%>
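
For completeness, a sketch of how the patched virtual_server might be declared. The http_get parameter and its 'url'/'status_code' keys are the ones introduced by the patch above (the path value is just a placeholder), not part of the released module:

keepalived::lvs::virtual_server { 'www.example.com':
  ip_address => '1.2.3.4',
  port       => '80',
  lb_algo    => 'wlc',
  lb_kind    => 'DR',
  protocol   => 'TCP',
  http_get   => {
    'url'         => '/healthz',
    'status_code' => '200',
  },
}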

Problem with keepalived::vrrp::track_process fullcommand

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: all
  • Ruby: all
  • Distribution: all
  • Module version: latest

How to reproduce (e.g Puppet code you use)

use a track_process resource with fullcommand => true:
keepalived::vrrp::track_process { 'process':
  proc_name   => 'procname',
  fullcommand => true,
}

What are you seeing

keepalived returns an error because no fullcommand parameter exists.

What behaviour did you expect instead

keepalived working correctly

Any additional information you'd like to impart

The vrrp_track_process configuration keyword is full_command, not fullcommand.
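
For reference, a sketch of the block the template should emit for the example above, assuming keepalived 2.0.11+ vrrp_track_process syntax:

vrrp_track_process process {
  process procname
  full_command
}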

Not able to set sh-port and sh-fallback flags in virtual_server

There is a need to be able to set the sh-port and sh-fallback flags for the SH lvs_scheduler (lb_algo sh in this module's case). The flags are set the same way as e.g. the alpha and omega flags and should be a simple addition.

Perhaps even better would be a more generic "flags" parameter containing a user-defined list of flags to set.

Variabilisation of templates

Hi,

I would like to create my own template for your module, but currently that requires changing your code.
I propose making the template configurable so it can be set in a Hiera file.

Thanks

LVS real server options - upper case not allowed by puppet but needed by keepalvied

One of the settings when setting up a real server in the keepalived.conf file is which check to use for the server. All of these checks seem to be required to be upper case, i.e. HTTP_GET works but http_get does not.

Due to the way options are passed in the module, the template takes the options hash and translates it directly into keepalived.conf config.

It turns out that puppet does not want you to start variable names with a capital letter. ( https://docs.puppetlabs.com/puppet/3.8/reference/deprecated_language.html#variable-names-beginning-with-capital-letters ).

This works:
options => { weight => 1, http_get => { ....

This does not work
options => { weight => 1, HTTP_GET => { ....
The puppet parser says: Error: Could not parse for environment production: Syntax error at 'HTTP_GET'; expected '}' at...

I'm pondering ways to solve this and will send a PR if I figure it out. I wanted to raise the issue in case someone has already solved it or has a bright idea of how to do so.

The best ideas I have so far are to make the check specially called out and not part of the options template, or to have some sort of postfix on the variable name indicating it should be upper-cased when turned into config.
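
One workaround that needs no module changes: quote the hash key, as the README's real_server examples already do for 'TCP_CHECK'. Puppet only rejects bare words starting with a capital letter, not quoted strings. A minimal sketch:

options => {
  weight     => 1,
  'HTTP_GET' => {
    # check options here
  },
}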

Hiera can't feed data to the class directly, only via parameters

Your module is expecting data in the "name" variable of the class. This causes issues for those of us trying to feed data to your module with Hiera. Unless I'm missing something, that is. :)

Can you re-work things so that there is a way to provide the data you use for ${name}?

"Symlinks in modules are unsupported" failure installing 1.1.1 on puppet forge

Hello,

puppet module install arioch-keepalived is complaining about a symlink. The downloaded tarball does indeed have spec/fixtures/modules/keepalived pointing to /Users/tom/tmp/puppet-keepalived.

Any chance you can re-release to puppet forge? Version 1.1.0 installs fine and the 1.1.1 github release looks perfect too.

Full error and puppet version:

# puppet agent --version
3.7.4

# puppet module install arioch-keepalived
Warning: Setting templatedir is deprecated. See http://links.puppetlabs.com/env-settings-deprecations
   (at /usr/lib/ruby/vendor_ruby/puppet/settings.rb:1139:in `issue_deprecation_warning')
Notice: Preparing to install into /etc/puppet/modules ...
Notice: Downloading from https://forgeapi.puppetlabs.com ...
Warning: Symlinks in modules are unsupported. Please investigate symlink arioch-keepalived-1.1.1/spec/fixtures/modules/keepalived->/Users/tom/tmp/puppet-keepalived.
Notice: Installing -- do not interrupt ...
Error: No such file or directory - /etc/puppet/modules/keepalived/spec/fixtures/modules/keepalived
Error: Try 'puppet help module install' for usage

Many thanks

vrrp_instance configs

advert_int
garp_master_delay

Can we make these 2 configs optional so they do not appear in the keepalived.conf?

also with the vip config:

  virtual_ipaddress {
    23.24.25.26  dev eth1
  }

Can we also make the dev eth1 part optional in the templates?

please advise

Beaker tests for multiple nodes acceptance testing

Hi @arioch!

I remember you asking about how to write beaker specs for multiple nodes at the Puppet Contributor Summit, and I just figured out an example of how it could work:

Node set

HOSTS:
  master:
    roles:
      - default
      - master
    platform: el-6-x86_64
    box : centos-64-x64-vbox4210-nocm
    box_url : http://puppet-vagrant-boxes.puppetlabs.com/centos-64-x64-vbox4210-nocm.box
    hypervisor : vagrant
  backup:
    roles:
      - default
      - backup
    platform: el-6-x86_64
    box : centos-64-x64-vbox4210-nocm
    box_url : http://puppet-vagrant-boxes.puppetlabs.com/centos-64-x64-vbox4210-nocm.box
    hypervisor : vagrant
CONFIG:
 type: foss

require 'spec_helper_acceptance'

if hosts.length > 1
  describe "configuring multi-node keepalived" do
    let(:ipaddresses) do
      hosts_as('backup').inject({}) do |memo,host|
        memo[host] = fact_on host, "ipaddress_eth1"
        memo
      end
    end

    hosts_as('backup').each do |host|
      it "should be able to configure a host as backup on #{host}" do
        pp = <<-EOS
          # Puppet code for BACKUP state here
        EOS
        apply_manifest_on(host, pp, :catch_failures => true)
      end
    end

    hosts_as('master').each do |host|
      it "should be able to configure a host as backup on #{host}" do
        pp = <<-EOS
          # Puppet code for MASTER state here
        EOS
        apply_manifest_on(host, pp, :catch_failures => true)
      end
    end

    # Some sort of test to destroy the host here and check that the IP goes back to master?

  end
end

Puppet agent 3.7.3 error

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: This Array Expression is not productive. A Host Class Definition can not end with a non productive construct at /etc/puppet/modules_thirdparty/keepalived/manifests/init.pp:35:9 on node XX.

Pull request #57 seems to fix this.

misplaced curly brace

Hi,
(sorry for the long post...) I am using the following version:

# puppet --version
3.8.1
# puppet module list
/etc/puppet/modules
├── arioch-keepalived (v1.2.0)
├── puppetlabs-apt (v1.8.0)
├── puppetlabs-concat (v1.2.3)
├── puppetlabs-stdlib (v4.6.0)

I am building keepalived::lvs::real_server for keepalived::lvs::virtual_server with this puppet script:

define keepalived_tools::build_virtual_server ($lvs_state, $vrid, $vip, $vservers, $notify_script=undef, $track_script=undef) {
  if $lvs_state == 'MASTER' {
    $prio = 101
  }
  else {
    $prio = 100
  }

  keepalived::vrrp::instance { $name:
    interface         => 'eth1',
    state             => $lvs_state,
    virtual_router_id => $vrid,
    priority          => $prio,
    auth_type         => 'PASS',
    auth_pass         => 'secret',
    virtual_ipaddress => $vip,
    notify_script     => $notify_script,
    track_script      => $track_script,
    #track_interface   => ['eth1','tun0'], # optional, monitor these interfaces.
  }

  each($vservers) |$vserver| {

    keepalived::lvs::virtual_server { $vserver['name']:
      ip_address => $vip,
      port => $vserver['port'],
      lb_algo => 'wlc',
      lb_kind => 'DR',
      persistence_timeout => '300',
      real_servers => $rss,
      collect_exported => false,
      delay_loop => '10',
    }

    if ($vserver['misc_check'] == undef) {
      each($vserver['rservers']) |$r_ip| {
        keepalived::lvs::real_server { "${vserver['name']}_${r_ip}_${vserver['port']}":
          ip_address => $r_ip,
          virtual_server => $vserver['name'],
          port => $vserver['port'],
          options => {
            'TCP_CHECK' => {
              connect_timeout => 5,
              connect_port => $vserver['port'],
            },
            #inhibit_on_failure => true,
          }
        }
      }
    } else {   
      each($vserver['rservers']) |$r_ip| {
        keepalived::lvs::real_server { "${vserver['name']}_${r_ip}_${vserver['port']}":
          ip_address => $r_ip,
          virtual_server => $vserver['name'],
          port => $vserver['port'],
          options => {
            'MISC_CHECK' => {
              connect_timeout => 5,
              misc_path => $vserver['misc_check'],
              warmup => 10
            },
            #            inhibit_on_failure => true,
            weight => 2,
          }
        }
      }
    }
  }

}

This script is called with the following parameters:

keepalived_tools::build_virtual_server { 'geoserver':
  lvs_state => 'MASTER',
  vrid      => 50,
  vip       => '192.168.5.111',
  vservers  => [{name => "LVS_geoserver_80", port => "80", rservers => ['192.168.5.116', '192.168.5.118']},
                {name => "LVS_geoserver_22", port => "22", rservers => ['192.168.5.116', '192.168.5.118']},
                {name => "LVS_geoserver_8080", port => "8080", rservers => ['192.168.5.116', '192.168.5.118']}],
}

The generated keepalived.conf has a misplaced curly brace between the group definition for port 80 and port 8080 (line 103 instead of line 84):

vrrp_instance VI_50 {
  interface                 eth1
  state                     MASTER
  virtual_router_id         50
  priority                  101
  advert_int                1
  garp_master_delay         5



  # notify scripts and alerts are optional                                                                                             
  #                                                                                                                                    
  # filenames of scripts to run on transitions                                                                                         
  # can be unquoted (if just filename)                                                                                                 
  # or quoted (if has parameters)                                                                                                      




  authentication {
    auth_type PASS
    auth_pass secret
  }


  virtual_ipaddress {
    192.168.5.111 dev eth1
  }





}
group LVS_geoserver_22 {

  virtual_server 192.168.5.111 22

  delay_loop 10
  lb_algo wlc
  lb_kind DR

  persistence_timeout 300
  protocol TCP


  real_server 192.168.5.116 22 {
    TCP_CHECK {
      connect_port 22
      connect_timeout 5
    }
  }
  real_server 192.168.5.118 22 {
    TCP_CHECK {
      connect_port 22
      connect_timeout 5
    }
  }
}
group LVS_geoserver_80 {

  virtual_server 192.168.5.111 80

  delay_loop 10
  lb_algo wlc
  lb_kind DR

  persistence_timeout 300
  protocol TCP


  real_server 192.168.5.116 80 {
    TCP_CHECK {
      connect_port 80
      connect_timeout 5
    }
  }
  real_server 192.168.5.118 80 {
    TCP_CHECK {
      connect_port 80
      connect_timeout 5
    }
  }
group LVS_geoserver_8080 {

  virtual_server 192.168.5.111 8080

  delay_loop 10
  lb_algo wlc
  lb_kind DR

  persistence_timeout 300
  protocol TCP


  real_server 192.168.5.116 8080 {
    TCP_CHECK {
      connect_port 8080
      connect_timeout 5
    }
  }
}
}

The keepalived.conf above is just an extract of the real one, as I call my script many times, and the only issue I have is with this specific call.

Am I doing something wrong?

As a workaround I changed line 154 of virtual_server.pp from order => "250-${name}" to order => "250-${name}-zzzzzzzzzzzzzzzzzzzzz" to be sure this } will be added at the end of the group.

Thanks,
Rot.

Global_defs.pp depends on a not found Exec

Hi,
I have a problem using global_defs. I receive message
err: Failed to apply catalog: Could not find dependent Exec[concat_/keepalived.conf] for File[/var/lib/puppetSystem/concat/_keepalived.conf/fragments/010_keepalived.conf_globaldefs] at /var/lib/puppet/default/modules/concat/manifests/fragment.pp:122

Configuration comes from an external node classifier. Parts regarding my keepalived service are (generic resources are dynamically created):

classes:
  keepalived: {service_manage: false}
  keepalived::global_defs:
    notification_email: [[email protected], [email protected]]
    smtp_connect_timeout: '18'
    smtp_server: localhost
  generic:
    resources:
      keepalived::vrrp::instance:
        VIP_HAPROXY: {interface: bond0.778, priority: '2', state: MASTER, virtual_ipaddress: 192.168.78.60, virtual_router_id: '78'}
      keepalived::vrrp::sync_group:
        VIP_SVC_NUAGE:
          ensure: present
          group: [VIP_HAPROXY]

Changing in global_defs.pp
target => "${keepalived::config_dir}/keepalived.conf",
to
target => "/etc/keepalived/keepalived.conf",
fixed my problem.

Am I missing something when calling global_defs? Is it because we dynamically create resources?
Philippe

Real servers as exported resources not working

Realization of exported real servers must be

Keepalived::Lvs::Real_server <<| virtual_server == $name |>>

instead of

Keepalived::Lvs::Virtual_server <<| virtual_server == $name |>>

in manifests/lvs/virtual_server.pp

I'll send a PR for this as soon as my other pull requests are merged.

Fix wrong warning

Please fix the warning returned by fail when checking keepalived::vrrp::instance parameters. It expects a String and asks for an Integer.

If you pass an Integer instead of a String to the defined type for either $priority or $virtual_router_id it will fail and ask to pass an Integer instead! This is quite confusing and took me quite some time to figure out! 🙄

https://github.com/arioch/puppet-keepalived/blob/master/manifests/vrrp/instance.pp#L182
https://github.com/arioch/puppet-keepalived/blob/master/manifests/vrrp/instance.pp#L185

Alternatively, fix the check and actually expect an Integer. Also, IMHO it doesn't make a difference in the current implementation whether it's a String or an Integer, as both will be rendered fine into the actual configuration file.

Hiera lookups and this module

I spent some time working with this module to get it to set up my LVS, and I got it to set up the configuration files.

Below is a fragment of my Hiera file. This probably comes down to my still-incomplete understanding of Puppet and how it sees Hiera data, but I am still a bit confused as to why what I did worked.

My question is that the first (keepalived::vrrp_instance:) is picked up in my puppet/foreman setup without any modifications, other than including the keepalived class in a manifest for the node type I have defined as lvs.

For the second (keepalived::lvs_virtual_server:), however, I needed the following line in the manifest for the node type.

create_resources(keepalived::lvs::virtual_server, hiera(keepalived::lvs_virtual_server), {})

Now I am assuming that this has something to do with the following in the keepalived module.

create_resources(keepalived::vrrp::instance, $::keepalived::vrrp_instance)

Which occurs in the config.pp, but just using

create_resources(keepalived::vrrp::instance, $::keepalived::lvs_virtual_server)

Did not work, hence the hiera call in my code.

So my questions are why this is the case, and what am I doing wrong/right?

Thanks,

Hiera below.

keepalived::vrrp_instance:
  VI_1:
    interface: 'enp2s0'
    state: 'MASTER'
    virtual_router_id: '1'
    priority: '100'
    virtual_ipaddress: '192.168.1.1/24'
    track_interface:
      - 'enp2s0'

keepalived::lvs_virtual_server:
  gftp:
    ip_address: '192.168.1.1'
    port: '2811'
    protocol: 'TCP'
    lb_algo: 'wlc'
    lb_kind: 'DR'
    tcp_check:
      connect_timeout: 3
    delay_loop: '10'
    real_servers:
      - ip_address: '192.168.1.2'
        port: '2811'

global_def

Could you please provide an example of setting global_defs parameters?
I'm trying to set router_id but I get an error when calling keepalived::global_defs.
Thanks for the help.
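
For reference, the Global definitions section of the README above lists the full parameter set; a minimal sketch setting only router_id (the value here is a placeholder):

class { 'keepalived::global_defs':
  router_id => 'my_router_id',
}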

New release?

Hi,

Sorry to be a pain - can I ask if you're planning to tag a new release?

I'm using the new hiera support (thanks for that!) and would like to track a released version of the module from the forge rather than pull a particular git hash from GitHub or maintain a profile::keepalived class to provide the same functionality.

Thanks.

Change "provider" in keepalived::service for Debian Jessie

Hi,

For a "service" resource, the default value of the "provider" attribute is "systemd" in Debian Jessie but it's not correct for the specific case of Keepalived service which uses a init.d script. Currently after each run on Debian Jessie, we have:

Notice: /Stage[main]/Keepalived::Service/Service[keepalived]/enable: enable changed 'false' to 'true'

In this file https://github.com/arioch/puppet-keepalived/blob/master/manifests/service.pp, could it be possible to have something like this:

# == Class keepalived
#
class keepalived::service {

  if $::lsbdistcodename == 'jessie' {
    $provider = 'debian' # to be able to enable init.d script with update-rc.d
  }

  if $::keepalived::service_manage == true {
    service { $::keepalived::service_name:
      ensure     => $::keepalived::service_ensure,
      enable     => $::keepalived::service_enable,
      hasrestart => $::keepalived::service_hasrestart,
      hasstatus  => $::keepalived::service_hasstatus,
      require    => Class['::keepalived::config'],
      restart    => $::keepalived::service_restart,
      provider   => $provider,
    }
  }
}

Regards
François Lafont

forge version 1.2.5 is outdated

the version 1.2.5 on the puppet forge is missing some nice features like the following commit: 8ee0cf8

this is needed to avoid the loop if the ip_vs module isn't loaded:

Keepalived_healthcheckers[1689]: IPVS: Can't initialize ipvs: Protocol not available

sysconf_options => '-vrrp',

Reload keepalived on refresh instead of restart

Restarting keepalived can be a bit disruptive. For the VRRP part, all IP are removed and added back. For the IPVS part, all rules are removed than added back. When using reload instead, Keepalived tries to be smart and only add/remove the IP that need to be added/removed. The same for IPVS. Therefore, it is better to use reload if possible.
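
The module's service_restart parameter (see "Soft-restart the Keepalived daemon" above) can be used for this; a minimal sketch, assuming a systemd host:

class { 'keepalived':
  service_restart => 'systemctl reload keepalived',
}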

Puppet Forge Examples Incorrect

Unlike the example here, specifying one virtual IP like this:

    virtual_ipaddress => '10.0.0.1/29',

creates an error ("undefined method `each' for string" or something).
Something to do with Ruby 1.9+ not transforming it into an array; it has to be something like:

    virtual_ipaddress => [ '10.0.0.1/29' ],

keepalived::lvs::real_server options fails when using capital letters

When adding options to keepalived::lvs::real_server as per the documentation I get the following error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Syntax error at 'SMTP_CHECK'; expected '}'

Config snippet:

keepalived::lvs::real_server { 'x.x.x.x':
  virtual_server => 'x.x.x.1',
  ip_address => 'x.x.x.x',
  port       => '143',
  options => { 
    inhibit_on_failure => true, 
    SMTP_CHECK => { 
      host => { 
        connect_ip => '127.0.0.1'
      } 
    } 
  } 
}

If I use lower case for SMTP_CHECK (smtp_check) it works, but it adds smtp_check instead of SMTP_CHECK to the generated keepalived.conf file.

+   real_server x.x.x.x 143 {
+    inhibit_on_failure
+
+    smtp_check {
+      host {
+        connect_ip 127.0.0.1
+      }
+    }

Config snippet:

keepalived::lvs::real_server { 'x.x.x.x':
  virtual_server => 'x.x.x.1',
  ip_address => 'x.x.x.x',
  port       => '143',
  options => { 
    inhibit_on_failure => true, 
    smtp_check => { 
      host => { 
        connect_ip => '127.0.0.1'
      } 
    } 
  } 
}

Default value of keepalived::vrrp::script::weight is different from keepalived's default

The puppet module has a default value of 2 for keepalived::vrrp::script::weight, while the default for this option in keepalived itself is 0 according to https://github.com/acassen/keepalived/blob/master/doc/keepalived.conf.SYNOPSIS, which states:

The default weight equals 0, which means that any VRRP instance monitoring the script will transition to the fault state after <fall> consecutive failures of the script.

This misalignment probably occurred due to acassen/keepalived#484
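
Until the default changes, keepalived's own behaviour can be matched by setting the weight explicitly; a minimal sketch using the defined type's weight parameter:

keepalived::vrrp::script { 'check_nginx':
  script => '/usr/bin/killall -0 nginx',
  weight => 0,
}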

Error: Could not find dependent Exec[concat_/etc/keepalived/keepalived.conf]

I apologize in advance as I am still learning Puppet, and I am not sure if this is an issue with how I am trying to call the module or an issue with the module itself. I downloaded this module, made sure I have the concat module as well, and am receiving this error.

Error: Could not find dependent Exec[concat_/etc/keepalived/keepalived.conf] for File[/var/opt/lib/pe-puppet/concat/_etc_keepalived_keepalived.conf/fragments/010_keepalived.conf_globaldefs] at /tmp/vagrant-puppet-3/modules-0/concat/manifests/fragment.pp:66

It seems similar to issue number #40 but I have verified all of the changes that were made there were already applied to the newest version that I have checked out.

I am calling this module through a profile module I have built for keepalived.
modules/profile/manifests/keepalived.pp #contents below.

class profile::keepalived {

  class { '::keepalived::global_defs':
    ensure                  => present,
    notification_email      => '[email protected]',
    notification_email_from => '[email protected]',
    smtp_server             => 'localhost',
    smtp_connect_timeout    => '60',
    router_id               => 'your_router_instance_id',
  }

  node /node01/ {
    include ::keepalived

    ::keepalived::vrrp::instance { 'VI_50':
      interface         => 'eth0:1',
      state             => 'MASTER',
      virtual_router_id => '50',
      priority          => '101',
      auth_type         => 'PASS',
      auth_pass         => 'secret',
      virtual_ipaddress => [ '192.168.33.210/32' ],
      track_interface   => ['eth0'],  # optional, monitor these interfaces.
    }
  }
}

Can you please help me figure out what I am doing wrong or what needs changed?

order parameter contains invalid characters

I'm receiving the following error:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Order cannot contain '/', ':', or '
'. at /etc/puppet/repo/modules/concat/manifests/fragment.pp:46 on node lvs1.example.com

There are invalid characters in the order parameter due to the naming of my virtual servers (for example r-mq-c1:5672) and the following part of keepalived::lvs::virtual_server

concat::fragment { "keepalived.conf_lvs_virtual_server_${name}":
    target  => "${::keepalived::config_dir}/keepalived.conf",
    content => template('keepalived/lvs_virtual_server.erb'),
    order   => "250-${name}-000",
  }

or the following part from keepalived::lvs::real_server

concat::fragment { "keepalived.conf_lvs_real_server_${name}":
    target  => "${::keepalived::config_dir}/keepalived.conf",
    content => template('keepalived/lvs_real_server.erb'),
    order   => "250-${virtual_server}-${name}",
  }

I think it would be best to strip invalid characters from $name instead of failing with: fail("Order cannot contain '/', ':', or '\n'.")
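
A sketch of such a fix, modelled on the regsubst call the module already uses for vrrp instance names (the 'G' flag strips every occurrence, not just the first):

$_name = regsubst($name, '[:\/\n]', '', 'G')

concat::fragment { "keepalived.conf_lvs_virtual_server_${name}":
  target  => "${::keepalived::config_dir}/keepalived.conf",
  content => template('keepalived/lvs_virtual_server.erb'),
  order   => "250-${_name}-000",
}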

Module version: 1.1.1
Concat module version: 1.2.2 (it's the same for 2.0.0)

keepalived::vrrp::instance priority should support the value 255.

ff8f9ab validates priorities, but it does so contrary to the RFC it links to. Specifically, the rationale in the commit says:

However it also goes on to say that 0 and 255 are reserved for special operational use.

Which is correct. 255 is reserved for use by the MASTER (the device who "owns" the IP). 0 is also reserved by the MASTER to indicate a "going down" state.

The priority value for the VRRP router that owns the IP address(es) associated with the virtual router MUST be 255 (decimal).

The priority value zero (0) has special meaning indicating that the current Master has stopped participating in VRRP.

I think the current checks could be extended to allow the value 255 if the state is equal to MASTER. Similarly for the value of 0. Docs should be updated to reflect that the MASTER should be assigned the value 255. I'd also suggest a warning or notice for users not using the value 255, so that you can guide users to the correct value without breaking code, allowing a smoother transition if you choose to do so later.

Note that this is the first time using VRRP for me so if I've made a mistake or overstepped please let me know. I was following advice from my networking team which led me to discover the failing 255 value - and then subsequently hunting down the RFC.

Add Ubuntu to Metadata

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: All
  • Ruby: All
  • Distribution: Ubuntu
  • Module version: 2.0.0

What are you seeing

metadata.json does not include any Ubuntu versions. I am requesting that recent versions be added.

What behaviour did you expect instead

I believe the module is already compatible with Ubuntu since it falls under the "osfamily" fact of Debian which is already implemented. I have tested on 16.04 and 18.04 without issue.

You cannot collect exported resources without storeconfigs being set

Hi,

I get the following issue on the puppet master:

Jul 17 11:28:03 sev34 puppet-master[5815]: You cannot collect exported resources without storeconfigs being set; the collection will be ignored on line 159 in file /etc/puppet/environments/production/modules/keepalived/manifests/lvs/virtual_server.pp

Collecting exported resources is disabled in the manifest:

keepalived::lvs::virtual_server {
   collect_exported    => false,
}

I don't use storeconfigs in puppet.conf.

Regards Karsten

VIP for standby IP?

Hi,

We are running a MySQL cluster and need to perform daily backups from the standby node, and it seems a standby VIP is the easiest option here. Is it possible to use this module to create a virtual IP address on the standby node?

Thanks in advance!

Comparison of: String < Integer, is not possible

Could not retrieve catalog from remote server: Error 400 on SERVER: Evaluation Error: Comparison of: String < Integer, is not possible. Caused by 'A String is not comparable to a non String'. at /etc/puppetlabs/code/environments/production/modules/keepalived/manifests/vrrp/instance.pp:171:43

Setup Keepalived for HAProxy failover

include ::keepalived

keepalived::vrrp::instance { 'VI_50':
    interface         => 'eth0',
    state             => 'MASTER',
    virtual_router_id => '50',
    priority          => '101',
    auth_type         => 'PASS',
    auth_pass         => 'secret',
    virtual_ipaddress => [ '10.0.0.22/24' ],
    track_interface   => ['eth0'], # optional, monitor these interfaces.
    track_script      => 'check_haproxy',
}

Anyone have an idea why?

No changelog available

We're currently using 1.2.5 of this module, and we'd like to update it to a newer version.
The lack of a changelog makes this process more complex than needed.

Can you add a changelog to this project? If possible, with the historic data.

VirtualIP address not being overwritten on change

Just noticed this when I was testing some Vagrant stuff and changed the virtual address:

keepalived::vrrp::instance { 'VI_50':
    interface         => 'eth1',
    state             => 'MASTER',
    virtual_router_id => '50',
    priority          => '101',
    auth_type         => 'PASS',
    auth_pass         => 'secret',
    virtual_ipaddress => '10.0.0.1/29',
  }

change and run again:

keepalived::vrrp::instance { 'VI_50':
    interface         => 'eth1',
    state             => 'MASTER',
    virtual_router_id => '50',
    priority          => '101',
    auth_type         => 'PASS',
    auth_pass         => 'secret',
    virtual_ipaddress => '10.0.0.2/29',
  }

File stays the same:
/etc/keepalived/keepalived.conf

# Managed by Puppet
vrrp_instance VI_50 {
  interface                 eth1
  state                     MASTER
  virtual_router_id         50
  priority                  101

  authentication {
    auth_type PASS
    auth_pass secret
  }

  virtual_ipaddress {
    10.0.0.1/29 dev eth1
  }

}

Concat issue

Hi there! Not necessarily an issue so much as a "warning". I had concat 2.1.0 installed, and "randomly" on every puppet run the ordering of my 2 hiera-based VRRP instances was swapping. concat 2.2.0 actually fixes this behavior, so FYI in case it comes up!

Please add support for "native_ipv6" vrrp_instance statement

Please add support for the "native_ipv6" vrrp_instance statement to allow an instance to be forced to use IPv6. An IPv6 vrrp_instance without the "native_ipv6" keyword does not configure the virtual IPv6 address with the "deprecated nodad" options.
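
This is now covered by the IPv6 example in the README above; for reference, a minimal sketch:

keepalived::vrrp::instance { 'VI_50_IPV6':
  interface         => 'eth0',
  state             => 'BACKUP',
  virtual_router_id => 50,
  priority          => 100,
  virtual_ipaddress => ['fe80::50/128', '2001:db8::50/128'],
  native_ipv6       => true,
}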
