
puppet-nfs's Introduction

puppet-nfs

Apache-2.0 License. Donated by Daniel Klockenkaemper.

Table of Contents

  1. Module Description - What the module does and why it is useful
  2. Setup - The basics of getting started with puppet-nfs
  3. Usage - Configuration options and additional functionality
  4. Reference - An under-the-hood peek at what the module is doing and how
  5. Limitations - OS compatibility, etc.
  6. Development - Guide for contributing to the module

Module Description


This module installs, configures and manages everything on NFS clients and servers.

This module is a complete refactor of the module haraldsk/nfs, since Harald Skoglund is sadly no longer actively maintaining it. It has been stripped down to a single class, 'nfs', which is parameterized to act as a server, client or both via the parameters 'server_enabled' and 'client_enabled'. It also depends on newer stdlib functions such as 'difference'.

It supports the OS families Ubuntu, Debian, RedHat, SUSE, Gentoo and Archlinux. It also supports Strict Variables, so if you pass all OS-specific parameters correctly it should work on your preferred OS as well. Feedback, bug reports and feature requests are always welcome; visit https://github.com/voxpupuli/puppet-nfs or send me an email.

If you are using Puppet 3.x, as shipped with Red Hat Satellite 6, please use a 1.x.x release from the Puppet Forge, or the puppet3 branch when cloning directly from Github (note: #49 (comment)). I recommend using Puppet >= 4.6.1; Puppet versions up to and including 4.6.0 had various issues.

If you want to contribute, please fork the repository on Github, create a branch with your features and open a pull request.

Warning: version 2.1.0 introduced new dependencies, which were needed to fix the buggy restarting of the rpcbind socket with systemd:

  • puppetlabs/transition
  • herculesteam/augeasproviders_core
  • herculesteam/augeasproviders_shellvar

Setup

What puppet-nfs affects

This module can be used to configure your NFS client and/or server. It can export NFS mount resources via storeconfigs, or simply mount NFS shares on a client. You can also easily use the create_resources function when you store your exports e.g. in Hiera.

Setup requirements

This module depends on puppetlabs-stdlib >= 4.5.0 and puppetlabs-concat >= 1.1.2; you need to have these modules installed to use puppet-nfs.

Beginning with puppet-nfs

On an NFS server, the following code is sufficient to get all packages installed and services running to use NFS:

  node server {
    class { '::nfs':
      server_enabled => true,
    }
  }

On a client the following code is sufficient:

  node client {
    class { '::nfs':
      client_enabled => true,
    }
  }

Usage

Simple NFSv3 server and client example

This will export /data_folder on the server and automagically mount it on the client.

  node server {
    class { '::nfs':
      server_enabled => true
    }
    nfs::server::export{ '/data_folder':
      ensure  => 'mounted',
      clients => '10.0.0.0/24(rw,insecure,async,no_root_squash) localhost(rw)'
    }
  }

  # By default, mounts are mounted in the same folder on the clients as
  # they were exported from on the server
  node client {
    class { '::nfs':
      client_enabled => true,
    }
    Nfs::Client::Mount <<| |>>
  }

Simple NFSv4 client example

This will mount /data on client in /share/data.

  node client {
    class { '::nfs':
      server_enabled      => false,
      client_enabled      => true,
      nfs_v4_client       => true,
      nfs_v4_idmap_domain => $::domain,
    }

    nfs::client::mount { '/share/data':
      server => '192.168.0.1',
      share  => 'data',
    }
  }

NFSv3 multiple exports, servers and multiple node example

  node server1 {
    class { '::nfs':
      server_enabled => true,
    }
    nfs::server::export { '/data_folder':
      ensure  => 'mounted',
      clients => '10.0.0.0/24(rw,insecure,async,no_root_squash) localhost(rw)',
    }
    nfs::server::export { '/homeexport':
      ensure  => 'mounted',
      clients => '10.0.0.0/24(rw,insecure,async,root_squash)',
      mount   => '/srv/home',
    }
  }

  node server2 {
    class { '::nfs':
      server_enabled => true,
    }
    # 'ensure' is passed to the mount, which will make the client not
    # mount the directory automatically, just add it to fstab
    nfs::server::export { '/media_library':
      ensure  => 'present',
      nfstag  => 'media',
      clients => '10.0.0.0/24(rw,insecure,async,no_root_squash) localhost(rw)',
    }
  }

  node client {
    class { '::nfs':
      client_enabled => true,
    }
    Nfs::Client::Mount <<| |>>
  }

  # Using a storeconfig override, to change ensure option, so we mount
  # all shares
  node greedy_client {
    class { '::nfs':
      client_enabled => true,
    }
    Nfs::Client::Mount <<| |>> {
      ensure => 'mounted',
    }
  }


  # only the mount tagged as media
  # also override mount point
  node media_client {
    class { '::nfs':
      client_enabled => true,
    }
    Nfs::Client::Mount <<| nfstag == 'media' |>> {
      ensure => 'mounted',
      mount  => '/import/media',
    }
  }

  # All @@nfs::server::mount storeconfigs can be filtered by parameters
  # Also all parameters can be overridden (not that it's smart to do
  # so).
  # Check out the doc on exported resources for more info:
  # http://docs.puppetlabs.com/guides/exported_resources.html
  node single_server_client {
    class { '::nfs':
      client_enabled => true,
    }
    Nfs::Client::Mount <<| server == 'server1' |>> {
      ensure => 'absent',
    }
  }

NFSv4 Simple example

  # We use the $::domain fact for the Domain setting in
  # /etc/idmapd.conf.
  # For NFSv4 to work this has to be equal on servers and clients
  # set it manually if unsure.
  #
  # All nfsv4 exports are bind mounted into /export/$mount_name
  # and mounted on /srv/$mount_name on the client.
  # Both values can be overridden through parameters, both globally
  # and on individual nodes.
  node server {
    file { ['/data_folder', '/homeexport']:
      ensure => 'directory',
    }
    class { '::nfs':
      server_enabled             => true,
      nfs_v4                     => true,
      nfs_v4_idmap_domain        => 'example.com',
      nfs_v4_export_root         => '/export',
      nfs_v4_export_root_clients => '*(rw,fsid=0,insecure,no_subtree_check,async,no_root_squash)',
    }
    nfs::server::export { '/data_folder':
      ensure  => 'mounted',
      clients => '*(rw,insecure,async,no_root_squash,no_subtree_check)',
    }
    nfs::server::export { '/homeexport':
      ensure  => 'mounted',
      clients => '*(rw,insecure,async,root_squash,no_subtree_check)',
      mount   => '/srv/home',
    }
  }

  # By default, mounts are mounted in the same folder on the clients as
  # they were exported from on the server

  node client {
    class { '::nfs':
      client_enabled  => true,
      nfs_v4_client   => true,
    }
    Nfs::Client::Mount <<| |>>
  }

  # We can also mount the NFSv4 root directly through nfs::client::mount::nfsv4::root.
  # By default /srv will be used as the mount point, but this can be overridden
  # through the 'mount' option.

  node client2 {
    $server = 'server'
    class { '::nfs':
      client_enabled => true,
      nfs_v4_client  => true,
    }
    Nfs::Client::Mount::Nfs_v4::Root <<| server == $server |>> {
      mount => "/srv/${server}",
    }
  }

NFSv4 insanely overcomplicated reference example

  node server {
    class { '::nfs':
      server_enabled      => true,
      nfs_v4              => true,
      # Below are defaults
      nfs_v4_idmap_domain => $::domain,
      nfs_v4_export_root  => '/export',
      # Default access settings of /export root
      nfs_v4_export_root_clients =>
        "*.${::domain}(ro,fsid=root,insecure,no_subtree_check,async,root_squash)",
    }
    nfs::server::export { '/data_folder':
      # These are the defaults
      ensure      => 'mounted',
      # rbind or bind mounting of folders bindmounted into /export
      # google it
      bind        => 'rbind',
      # everything below here is propagated via storeconfigs
      # to clients
      #
      # Directory where we want the export mounted on the client
      mount       => undef,
      remounts    => false,
      atboot      => false,
      # Don't remove that option, but feel free to add more.
      options_nfs => '_netdev',
      # If set, will mount the share inside /srv (or the overridden mount_root)
      # and then bindmount it to another directory elsewhere in the fs -
      # for fanatics.
      bindmount   => undef,
      # Used to identify a catalog item for filtering by
      # storeconfigs, kick ass.
      nfstag      => 'kick-ass',
      # copied directly into /etc/exports as a string, for simplicity
      clients     => '10.0.0.0/24(rw,insecure,no_subtree_check,async,no_root_squash)',
    }
  }
  }

  node client {
    class { '::nfs':
      client_enabled      => true,
      nfs_v4_client       => true,
      nfs_v4_idmap_domain => $::domain,
      nfs_v4_mount_root   => '/srv',
    }

    # As you by now know, we can override options set on the server
    # on the client node.
    # Be careful: don't override mount points unless you are sure
    # that only one export will match your filter!

    Nfs::Client::Mount <<| nfstag == 'kick-ass' |>> {
      # Directory where we want the export mounted on the client
      mount       => undef,
      remounts    => false,
      atboot      => false,
      # Don't remove that option, but feel free to add more.
      options_nfs => '_netdev',
      # If set, will mount the share inside /srv (or the overridden mount_root)
      # and then bindmount it to another directory elsewhere in the fs -
      # for fanatics.
      bindmount   => undef,
    }
  }

Simple create nfs export resources with hiera example

Hiera Server Role:

  classes:
    - nfs

  nfs::server_enabled: true
  nfs::client_enabled: false
  nfs::nfs_v4: true
  nfs::nfs_v4_idmap_domain: "%{::domain}"
  nfs::nfs_v4_export_root: '/share'
  nfs::nfs_v4_export_root_clients: '192.168.0.0/24(rw,fsid=root,insecure,no_subtree_check,async,no_root_squash)'


  nfs::nfs_exports_global:
    /var/www: {}
    /var/smb: {}

Hiera Client Role:

  classes:
    - nfs

  nfs::client_enabled: true
  nfs::nfs_v4_client: true
  nfs::nfs_v4_idmap_domain: "%{::domain}"
  nfs::nfs_v4_mount_root: '/share'
  nfs::nfs_server: 'nfs-server-fqdn'

Puppet:

  node server {
    hiera_include('classes')
    $nfs_exports_global = hiera_hash('nfs::nfs_exports_global', false)

    $defaults_nfs_exports = {
      ensure  => 'mounted',
      clients => '192.168.0.0/24(rw,insecure,no_subtree_check,async,no_root_squash)',
      nfstag  => $::fqdn,
    }

    if $nfs_exports_global {
      create_resources('::nfs::server::export', $nfs_exports_global, $defaults_nfs_exports)
    }
  }

  node client {
    hiera_include('classes')
    $nfs_server = hiera('nfs::nfs_server', false)

    if $nfs_server {
      Nfs::Client::Mount <<| nfstag == $nfs_server |>>
    }
  }

Reference

Classes

Public Classes

  • nfs: Main class, includes all other classes

Public Defines

Private Classes

  • nfs::client: Includes all relevant classes for configuring as a client.

  • nfs::client::config: Handles the configuration files.

  • nfs::client::package: Handles the packages.

  • nfs::client::service: Handles the services.

  • nfs::server: Includes all relevant classes for configuring as a server.

  • nfs::server::config: Handles the configuration files.

  • nfs::server::package: Handles the packages.

  • nfs::server::service: Handles the services.

  • nfs::params: Includes all OS-specific parameters.

Private Defines

  • nfs::bindmount: Creates the bindmounts of nfs 3 exports.
  • nfs::nfsv4_bindmount: Creates the bindmounts of nfs 4 exports.
  • nfs::create_export: Creates the nfs exports.
  • nfs::mkdir: Creates directories recursively.

Parameters

Class: ::nfs

The following parameters are available in the ::nfs class:

ensure

String. Controls whether the managed resources shall be present or absent. If set to absent:

  • The managed software packages are uninstalled.
  • Any traces of the packages will be purged as well as possible. This may include existing configuration files. The exact behavior is provider dependent.
  • System modifications (if any) will be reverted as well as possible (e.g. removal of created users, services, changed log settings, ...).
  • This is thus destructive and should be used with care.

Defaults to present.

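
For example, a minimal sketch (assuming your other class parameters stay unchanged) that removes everything this module manages:

  # Destructive: uninstalls packages and reverts configuration
  class { '::nfs':
    ensure         => 'absent',
    server_enabled => true,
  }
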
server_enabled

Boolean. If set to true, this module will configure the node to act as an NFS server.

client_enabled

Boolean. If set to true, this module will configure the node to act as an NFS client; you can then use the exported mount resources of configured servers.

storeconfigs_enabled

Boolean. If set to false, this module will not export any resources as storeconfigs. Defaults to true.
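
For illustration, a sketch of a server that does not emit storeconfigs; clients would then have to declare their mounts explicitly instead of collecting exported resources:

  class { '::nfs':
    server_enabled       => true,
    storeconfigs_enabled => false,
  }
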

nfs_v4

Boolean. If set to true, this module will use nfs version 4 for exporting and mounting nfs resources. It defaults to true.

nfs_v4_client

Boolean. If set to true, this module will use nfs version 4 for mounting nfs resources. If set to false it will use nfs version 3 to mount nfs resources. It defaults to true.

exports_file

String. It defines the location of the file with the nfs export resources used by the nfs server.

idmapd_file

String. It defines the location of the file with the idmapd settings.

defaults_file

String. It defines the location of the file with the nfs settings.

manage_packages

Boolean. It defines whether the packages should be managed through this module.

server_packages

Array. It defines the packages needed to be installed for acting as an NFS server.

server_package_ensure

String. It defines the package state - any of present, installed, absent, purged, held or latest.

client_packages

Array. It defines the packages needed to be installed for acting as an NFS client.

client_package_ensure

String. It defines the package state - any of present, installed, absent, purged, held or latest.

manage_server_service

Boolean. Defines if module should manage server_service

manage_server_servicehelper

Boolean. Defines if module should manage server_servicehelper

manage_client_service

Boolean. Defines if module should manage client_service

server_service_name

String. It defines the service name of the NFS server service.

server_service_ensure

Boolean. It defines the service parameter ensure for nfs server services.

server_service_enable

Boolean. It defines the service parameter enable for nfs server service.

server_service_hasrestart

Boolean. It defines the service parameter hasrestart for nfs server service.

server_service_hasstatus

Boolean. It defines the service parameter hasstatus for nfs server service.

server_service_restart_cmd

String. It defines the service parameter restart for nfs server service.

server_nfsv4_servicehelper

Array. It defines the service helper like idmapd for servers configured with nfs version 4.

client_services

Nested Hash. It defines the service names that need to be started when acting as an NFS client.

client_nfsv4_services

Nested Hash. It defines the service names that need to be started when acting as an NFS version 4 client.

client_service_hasrestart

Boolean. It defines the service parameter hasrestart for nfs client services.

client_service_hasstatus

Boolean. It defines the service parameter hasstatus for nfs client services.

client_idmapd_setting

Array. It defines the Augeas parameter added in defaults_file when acting as a nfs version 4 client.

client_nfs_fstype

String. It defines the name of the nfs filesystem, when adding entries to /etc/fstab on a client node.

client_nfs_options

String. It defines the options for the nfs filesystem, when adding entries to /etc/fstab on a client node.

client_nfsv4_fstype

String. It defines the name of the nfs version 4 filesystem, when adding entries to /etc/fstab on a client node.

client_nfsv4_options

String. It defines the options for the nfs version 4 filesystem, when adding entries to /etc/fstab on a client node.

nfs_v4_export_root

String. It defines the location where nfs version 4 exports should be bindmounted to on a server node. Defaults to /export.

nfs_v4_export_root_clients

String. It defines the clients that are allowed to mount nfs version 4 exports and includes the option string. Defaults to *.${::domain}(ro,fsid=root,insecure,no_subtree_check,async,root_squash).

nfs_v4_mount_root

String. It defines the location where nfs version 4 clients find the mount root on a server node. Defaults to /srv.

nfs_v4_idmap_domain

String. It defines the name of the idmapd domain setting in idmapd_file needed to be set to the same value on a server and client node to do correct uid and gid mapping. Defaults to $::domain.

nfsv4_bindmount_enable

Boolean. It defines if the module should create a bindmount for the export. Defaults to true.

client_need_gssd

Boolean. If true, sets NEED_GSSD=yes in /etc/default/nfs-common. Usable on Debian/Ubuntu.

client_gssd_service

Boolean. If true enable rpc-gssd service.

client_gssd_options

String. Options for rpc-gssd service. Defaults to ''

client_d9_gssdopt_workaround

Boolean. If enabled, applies a workaround for passing gssd_options, which is broken on Debian 9. Usable only on Debian 9.

nfs_v4_idmap_localrealms

String or Array. 'Local-Realms' option for idmapd. Defaults to ''

nfs_v4_idmap_cache

Integer. 'Cache-Expiration' option for idmapd. Defaults to 0 - unused.

manage_nfs_v4_idmap_nobody_mapping

Boolean. Enable setting Nobody mapping in idmapd. Defaults to false.

nfs_v4_idmap_nobody_user

String. 'Nobody-User' option for idmapd. Defaults to nobody.

nfs_v4_idmap_nobody_group

String. 'Nobody-Group' option for idmapd. Defaults to nobody or nogroup.
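
A sketch combining the idmapd nobody-mapping parameters above; the user and group names are examples and must exist on your OS:

  class { '::nfs':
    client_enabled                     => true,
    nfs_v4_client                      => true,
    manage_nfs_v4_idmap_nobody_mapping => true,
    nfs_v4_idmap_nobody_user           => 'nobody',  # example value
    nfs_v4_idmap_nobody_group          => 'nogroup', # example value
  }
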

client_rpcbind_config

String. It defines the location of the file with the rpcbind config.

client_rpcbind_optname

String. It defines the name of env variable that holds the rpcbind config. E.g. OPTIONS for Debian

client_rpcbind_opts

String. Options for rpcbind service.

Define: ::nfs::client::mount

The following parameters are available in the ::nfs::client::mount define:

server

String. Sets the IP address or hostname of the server with the nfs export.

share

String. Sets the name of the nfs share on the server

ensure

String. Sets the ensure parameter of the mount.

remounts

String. Sets the remounts parameter of the mount.

atboot

String. Sets the atboot parameter of the mount.

options_nfsv4

String. Sets the mount options for a nfs version 4 mount.

options_nfs

String. Sets the mount options for a nfs mount.

bindmount

String. When not undef, it will create a bindmount on the node for the nfs mount.

nfstag

String. Sets the nfstag parameter of the mount.

nfs_v4

Boolean. When set to true, it uses nfs version 4 to mount a share.

owner

String. Set owner of mount dir

group

String. Set group of mount dir

mode

String. Set mode of mount dir

mount_root

String. Overrides the mount root if it differs from the server config.
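
Putting several of the parameters above together, a sketch of a mount that also manages ownership and permissions of the mount directory (server name and paths are placeholders):

  nfs::client::mount { '/srv/backup':
    server => 'nfs.example.com',  # placeholder hostname
    share  => 'backup',
    owner  => 'backup',
    group  => 'backup',
    mode   => '0750',
  }
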

Define: ::nfs::server::export

The following parameters are available in the ::nfs::server::export define:

clients

String. Sets the allowed clients and options for the export in the exports file. Defaults to localhost(ro)

bind

String. Sets the bind options set in /etc/fstab for the bindmounts created. Defaults to rbind. When you have any submounts in your exported folders, the rbind option will submount them in the bindmount folder. You have to set the crossmnt option in your nfs export to have the submounts from rbind available on your client. Your export should look like this:

  node server {
    nfs::server::export { '/home':
      ensure  => 'mounted',
      clients => '*(rw,insecure,no_subtree_check,async,no_root_squash,crossmnt)',
    }
  }

ensure

String. If enabled, the mount will be created. Defaults to mounted.

remounts

String. Sets the remounts parameter of the mount.

atboot

String. Sets the atboot parameter of the mount.

options_nfsv4

String. Sets the mount options for a nfs version 4 exported resource mount.

options_nfs

String. Sets the mount options for a nfs exported resource mount.

bindmount

String. When not undef, it will create a bindmount on the node for the nfs mount.

nfstag

String. Sets the nfstag parameter of the mount.

mount

String. Sets the mountpoint the client will mount the exported resource on. If undef, it defaults to the same path as on the server.

owner

String. Sets the owner of the exported directory

group

String. Sets the group of the exported directory

mode

String. Sets the permissions of the exported directory.
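
As an illustration of the ownership parameters above (client specification and path are placeholders):

  nfs::server::export { '/srv/www':
    ensure  => 'mounted',
    clients => '10.0.0.0/24(rw,no_subtree_check,async,root_squash)',
    owner   => 'www-data',  # placeholder user/group
    group   => 'www-data',
    mode    => '0775',
  }
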

Requirements

Modules needed:

  • puppetlabs/stdlib >= 4.5.0
  • puppetlabs/concat >= 1.1.2

Software versions needed:

  • facter > 1.6.2
  • puppet > 3.2.0

Ruby Gems needed:

augeas

Limitations

If you want specific package versions installed, manage the needed packages outside of this module (set manage_packages => false). Only 'present', 'installed', 'absent', 'purged', 'held' and 'latest' have been tested as arguments for the parameters server_package_ensure and client_package_ensure.
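
A sketch of that approach, pinning a package version outside the module; the package name and version below are hypothetical and distribution-specific:

  # Hypothetical package name/version - adjust for your distribution
  package { 'nfs-kernel-server':
    ensure => '1:1.3.4-2.5',
  }
  class { '::nfs':
    server_enabled  => true,
    manage_packages => false,
  }
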

Development

Derdanne modules are open projects. So if you want to make this module even better, you can contribute to this module on Github.

Before pushing PRs to Github, I recommend testing your work locally to ensure that all test builds on Travis CI will pass. I have prepared an easy way to test your code locally with the help of Docker.

For the complete static code analysis, it is sufficient to run make test-all.

Default settings

I have set some defaults which you can change by setting the following environment variables.

PUPPET_VERSION

Changes the puppet version which will be used for the tests. Defaults to 6.0.

STRICT_VARIABLES

Sets strict variables on or off. Defaults to yes.

RVM

Sets the ruby version which will be used for the tests. Defaults to 2.4.1.

BEAKER_set

Sets the beaker docker target host. Defaults to ubuntu-20.04.

PUPPET_collection

Sets the puppet version for acceptance tests. Defaults to puppet6.

Running tests

You can run the following commands to setup and run the testsuite on your local machine.

make build

Build a docker image with a Ruby version which is not available on Docker Hub. Check out https://hub.docker.com/r/derdanne/rvm/ to see if I have already prepared an RVM build for the Ruby version you want to test. Take a look at the Dockerfile located in spec/local-testing if you want to customize your builds.

make pull

Pull a prebuilt RVM docker image with the Ruby version defined in the variable RVM.

make install-gems

Install all needed gems locally to vendor/bundle.

make test-metadata-lint

Run linting of metadata.

make test-lint

Run puppet lint tests.

make test-syntax

Run syntax tests.

make test-rspec

Run rspec puppet tests.

make test-rubocop

Run rubocop tests.

make test-all

Run the whole testsuite.

make test-beaker

Run puppetlabs beaker rspec tests.

Disclaimer

This module is based on the module by Harald Skoglund [email protected] from https://github.com/haraldsk/puppet-module-nfs/ but has been fundamentally refactored.

Transfer Notice

This plugin was originally authored by Daniel Klockenkaemper [email protected]. The maintainer preferred that Vox Pupuli take ownership of the module for future improvement and maintenance. Existing pull requests and issues were transferred over; please fork and continue to contribute here instead.

Previously: https://github.com/derdanne/puppet-nfs

puppet-nfs's People

Contributors

achterin, antoine-habran, bastelfreak, bschonec, cmd-ntrf, derdanne, ekohl, faxm0dem, fraenki, hp197, igalic, jadestorm, jehane, jhooyberghs, joris29, jvginkel, kwisatz, martyewings, neomilium, oberon227, qs5779, robinbowes, sandwitch, tampakrap, threepistons, tobixen, towo, tuxmea, uvnikita, veronesip


puppet-nfs's Issues

Basic server example fails if there is no default domain configured in /etc/resolv.conf

To recreate:
latest version of module
latest version of all deps
puppet version 4.4.1

manifest:

class { '::nfs':
  server_enabled => true,
}

invocation:

puppet apply nfs.pp

result:

[root@testhost ciu-puppet]# puppet apply nfs.pp
Warning: Undefined variable 'domain';
   (file & line not available)
Error: Evaluation Error: Error while evaluating a Resource Statement, Class[Nfs]: parameter 'nfs_v4_idmap_domain' expects a String value, got Undef at /home/jeffa/nfs.pp:1:1 on node testhost

rpcbind gets re-enabled each time

Hi

i have this on each puppet run

Notice: /Stage[main]/Nfs::Client::Service/Service[rpcbind]/enable: enable changed 'false' to 'true'

on ubuntu 16.04
Linux w1 4.4.0-81-generic #104-Ubuntu SMP Wed Jun 14 08:17:06 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

root@www1:~# puppet --version
5.3.3

manifest very simple:

class { '::nfs':
  client_enabled => true,
  nfs_v4_client  => true,
}
Nfs::Client::Mount <<| |>>

rpcbind installed and running

root@w1:~# dpkg --get-selections | grep rpcbind
rpcbind      install
root@w1:~# ps auxwww | grep rpc
root 6154 0.0 0.1 49676 3392 ? Ss Dec03 0:01 /sbin/rpcbind -f -w
root 7365 0.0 0.0 0 0 ? S< Dec03 0:00 [rpciod]
statd 12426 0.0 0.1 37416 2852 ? Ss Dec04 0:00 /sbin/rpc.statd --no-notify

any ideas ?
thanks

CentOS7 - NFS mount doesn't survive a reboot

It appears that on CentOS7, the default setting for ::nfs::client_services_enable (false) leads to a scenario where rpcbind.service is running, but disabled - which means that systemd doesn't restart it after a reboot

The practical upshot is that after a reboot the NFS mount doesn't automatically re-establish (until after the next puppet run starts the service, which in my environment might be as much as an hour later)

Additionally, I now get the following on every puppet run:

Notice: /Stage[main]/Nfs::Client::Service/Service[rpcbind.service]/enable: enable changed 'true' to 'false'

This is the inverse of the behaviour described in #55!

I've been able to work around this by overriding the variable and setting it to true in the profile I'm using, and that seems to have no ill effect. Puppet runs are now clean, with no idempotency problems.

I'm running the agent from PE2016-5 (4.8.1) - so I suspect the flip in behaviour for handling 'indirect' systemd services is possibly related to the puppet agent version.

I'm not sure what the best way to approach fixing this in the module is, as it seems that setting client_services_enable to true on CentOS7 causes idempotency problems for some people, but setting it to false appears to cause remounting at boot to fail.

Error when not managing server

I'm only using the client-side functionality of the module (my server is a SmartOS box).

I see this error when I run puppet:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid relationship: Service[rpcbind] { subscribe => Concat[/etc/exports] }, because Concat[/etc/exports] doesn't seem to be in the catalog
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

I can "fix" this by commenting out line 25 (and 34 for nfs v3) of manifests/client/service.pp:

...
 24       hasstatus  => $::nfs::client_services_hasstatus,
 25       #subscribe  => [ Concat[$::nfs::exports_file], Augeas[$::nfs::idmapd_file] ]
 26     }
...

I've not spent any time looking at a more elegant solution yet.

Puppet stuck modifying rpc services on every run with NFS v4 client.

I setup an NFS v4 client using the following basic configuration.

class { '::nfs':
  client_enabled      => true,
  nfs_v4_client       => true,
  nfs_v4_idmap_domain => $facts['domain'],
}

Nfs::Client::Mount <<| nfstag == 'utilsrv1' |>>

Puppet seems to be stuck permanently making the following correct changes on every run.

Service[rpcbind.service]
enable changed 'true' to 'false'

Service[rpcbind.socket]
enable changed 'false' to 'true'

I'm running Puppet Enterprise 2016.5.2 and my client server is running Oracle Linux 7.4.

puppet-nfs fails on Fedora 22

On Fedora 22 nfs service is called nfs-server

from facter:

name => "Fedora",
  release => {
    full => "22",
    major => "22"
  }

[root@master ~]# systemctl is-enabled nfs-server
enabled
[root@master ~]# systemctl is-enabled nfs
disabled

More issues with nfs-idmapd.service on RHEL7

I see there have been a few closed issues on this, but I still have a related problem.

With the configuration:

  class { '::nfs':
    server_enabled => false,
    client_enabled => true,
    nfs_v4_client  => true,
  }

  nfs::client::mount { '/mnt/nfs_storage':
    server => 'some.server.somewhere',
    share  => '/path/to/share',
  }

I get the following:

Notice: /Stage[main]/Nfs::Client::Service/Service[rpcbind.service]/enable: enable changed 'false' to 'true'
Error: Could not enable nfs-idmap.service:
Error: /Stage[main]/Nfs::Client::Service/Service[nfs-idmap.service]/ensure: change from stopped to running failed: Could not enable nfs-idmap.service:

Could not find init script or upstart conf file for 'nfs-lock'

Got this on Ubuntu 14.04 with puppet 4.2.0

Debug: Service[nfs-lock]: Could not find nfs-lock.conf in /etc/init
Debug: Service[nfs-lock]: Could not find nfs-lock.conf in /etc/init.d
Debug: Service[nfs-lock]: Could not find nfs-lock in /etc/init
Debug: Service[nfs-lock]: Could not find nfs-lock in /etc/init.d
Debug: Service[nfs-lock]: Could not find nfs-lock.sh in /etc/init
Debug: Service[nfs-lock]: Could not find nfs-lock.sh in /etc/init.d
Error: /Stage[main]/Nfs::Client::Service/Service[nfs-lock]: Could not evaluate: Could not find init script or upstart conf file for 'nfs-lock'
Debug: Executing: '/sbin/status idmapd'
Debug: Executing: '/sbin/initctl --version'
Debug: Class[Nfs::Client::Service]: Resource is being skipped, unscheduling all events
Notice: /Stage[main]/Nfs::Client/Anchor[nfs::client::end]: Dependency Service[nfs-lock] has failures: true
Warning: /Stage[main]/Nfs::Client/Anchor[nfs::client::end]: Skipping because of failed dependencies
Debug: /Stage[main]/Nfs::Client/Anchor[nfs::client::end]: Resource is being skipped, unscheduling all events
Debug: Class[Nfs::Client]: Resource is being skipped, unscheduling all events

I removed nfs-lock from line 102 in params.pp; I haven't found anything indicating that it is included in Debian.

rpcbind changes on every run 'false' to 'true'

On Ubuntu 16.04 rpcbind changes on every run 'false' to 'true'

Notice: /Stage[main]/Nfs::Client::Service/Service[rpcbind]/enable: enable changed 'false' to 'true'

class { '::nfs':
  server_enabled      => false,
  client_enabled      => true,
  nfs_v4_client       => true,
  nfs_v4_idmap_domain => $::domain,
}
nfs::client::mount { '/volume1/nfs':
  server => 'nfs01',
  share  => '/volume1/nfs',
}

service rpcbind status
● rpcbind.service - RPC bind portmap service
Loaded: loaded (/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
Drop-In: /run/systemd/generator/rpcbind.service.d
└─50-rpcbind-$portmap.conf
Active: active (running) since Thu 2016-06-30 16:34:55 CEST; 47min ago
Main PID: 12678 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─12678 /sbin/rpcbind -f -w

Jun 30 16:34:55 host01 systemd[1]: Starting RPC bind portmap service...
Jun 30 16:34:55 host01 systemd[1]: Started RPC bind portmap service.

Static ports / server threads configuration bits

Hi Daniel,

first of all: thanks for your great work on this module! :-) I would like to ensure my NFS server behaves well in combination with my firewall, so I need to pin down the ports it uses.

Would you think that configuring static ports for nfs / rpcbind / nfs-kernel-server would be within the scope of this module?

What about the server threads? Is this beyond the scope of this module?

Cheers,
Oliver

Reference:

  1. https://www.stephenrlang.com/2015/12/setup-nfsv3-on-ubuntu-or-debian/
  2. https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/s2-nfs-nfs-firewall-config.html

Fails mounting exported resources

Exported resource mounts currently do not work -

server/export.pp

@@nfs::client::mount { "shared ${v4_export_name} by ${::clientcert}":
  ensure        => $ensure,
  remounts      => $remounts,
  atboot        => $atboot,
  options_nfsv4 => $options_nfsv4,
  bindmount     => $bindmount,
  nfstag        => $nfstag,
  share         => $v4_export_name,
  server        => $::clientcert,
  mount         => $mount,
}

client/mount.pp

define nfs::client::mount (
  $server,
  $share         = undef,
  $ensure        = 'mounted',
  $mount         = $title,
  $remounts      = false,
  $atboot        = false,
  $options_nfsv4 = $::nfs::client_nfsv4_options,
  $options_nfs   = $::nfs::client_nfs_options,
  $bindmount     = undef,
  $nfstag        = undef,
  $nfs_v4        = $::nfs::client::nfs_v4,
  $owner         = undef,
  $group         = undef,
  $mode          = undef,
  $mount_root    = undef,
) {
  # ...
  mount { "shared ${sharename} by ${server} on ${mount}":
    ensure   => $ensure,
    device   => "${server}:${sharename}",
    fstype   => $::nfs::client_nfsv4_fstype,
    name     => $mount,
    options  => $options_nfsv4,
    remounts => $remounts,
    atboot   => $atboot,
    require  => Nfs::Functions::Mkdir[$mount],
  }
  # ...
}

Currently the mount resource in client/mount.pp uses name => $mount which then resolves to the $title variable passed from the exported resource. The mount resource then attempts to mount "shared /mount_name by clientcert" based on server/export.pp.

$mount = $title needs to be changed to $mount = $share in client/mount.pp based on the parameters in the exported resource.

PuppetDB output -

"parameters" : {
"share" : "/export",
"atboot" : false,
"ensure" : "mounted",
"server" : "fqdn.fqdn.fqdn",
"remounts" : false,
"options_nfs" : "tcp,nolock,rsize=32768,wsize=32768,intr,noatime,actimeo=3"
},

Replace idmapd service by nfs-common service on Debian Jessie

In the params file, you list both rpcbind and idmapd as services to be started for Debian Jessie:

$client_nfsv4_services = {'rpcbind' => {}, 'idmapd' => {}}

However, on my Jessie nodes there is no idmapd service, but rather an nfs-common service (which – if I understood correctly – starts the rpc.idmapd daemon through the setting in /etc/default/nfs-common).
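A possible interim workaround (a sketch only; it assumes the `client_nfsv4_services` hash from params.pp can be overridden on the nfs class) is to swap the non-existent idmapd unit for nfs-common:

```puppet
# Sketch: replace the missing 'idmapd' unit with 'nfs-common' on
# Debian Jessie. Assumes 'client_nfsv4_services' is overridable.
class { '::nfs':
  client_enabled        => true,
  nfs_v4_client         => true,
  client_nfsv4_services => { 'rpcbind' => {}, 'nfs-common' => {} },
}
```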

Debug output from puppet run:

Debug: Augeas[/etc/default/nfs-common](provider=augeas): sending command 'set' with params ["/files//etc/default/nfs-common/NEED_IDMAPD", "yes"]
Debug: Augeas[/etc/default/nfs-common](provider=augeas): Skipping because no files were changed
Debug: Augeas[/etc/default/nfs-common](provider=augeas): Closed the augeas connection
Debug: Executing '/usr/sbin/service rpcbind status'
Debug: Executing '/bin/systemctl show -pSourcePath rpcbind'
Debug: Executing '/usr/sbin/service idmapd status'
Debug: Executing '/bin/systemctl show -pSourcePath idmapd'
Debug: Executing '/bin/systemctl is-enabled idmapd'
Debug: Executing '/usr/sbin/service idmapd start'
Error: Could not start Service[idmapd]: Execution of '/usr/sbin/service idmapd start' returned 6: Failed to start idmapd.service: Unit idmapd.service failed to load: No such file or directory.
Wrapped exception:
Execution of '/usr/sbin/service idmapd start' returned 6: Failed to start idmapd.service: Unit idmapd.service failed to load: No such file or directory.
Error: /Stage[main]/Nfs::Client::Service/Service[idmapd]/ensure: change from stopped to running failed: Could not start Service[idmapd]: Execution of '/usr/sbin/service idmapd start' returned 6: Failed to start idmapd.service: Unit idmapd.service failed to load: No such file or directory.
Notice: /Stage[main]/Nfs::Client/Anchor[nfs::client::end]: Dependency Service[idmapd] has failures: true
Warning: /Stage[main]/Nfs::Client/Anchor[nfs::client::end]: Skipping because of failed dependencies


SELinux Context flipping

With an nfsv4 export, if the exported folder has an SELinux context set, it gets overridden by the file resource in create_export, e.g.

Notice: /Stage[main]/Profile::Postgres_cluster/File[/var/lib/pgsql/9.6/pg_archive]/seltype: seltype changed 'usr_t' to 'postgresql_db_t'
Notice: /Stage[main]/Profile::Nfs/Nfs::Server::Export[/var/lib/pgsql/9.6/pg_archive]/Nfs::Functions::Create_export[/export/pg_archive]/File[/export/pg_archive]/seltype: seltype changed 'postgresql_db_t' to 'usr_t'

I've temporarily fixed this by setting selinux_ignore_defaults to true in that file resource, but that is probably too simplistic and the setting needs exposing as a parameter all the way up?

Debian - Nobody-Group idmapd issue

By default the Nobody-Group for the idmapd service is the "nobody" group, but on Debian the nobody user belongs to the "nogroup" group, so the service refuses to start as "nobody:nobody".
If I define the options in puppet, for example:

class { '::nfs':
  client_idmapd_setting => [ 'Nobody-User=nobody', 'Nobody-Group=nogroup' ],
}

I get this error:

Error: /Stage[main]/Nfs::Client::Config/Augeas[/etc/default/nfs-common]: Could not evaluate: Unknown command Nobody
Notice: /Stage[main]/Nfs::Client/Anchor[nfs::client::end]: Dependency Augeas[/etc/default/nfs-common] has failures: true

NFSv3 and v4 shares

Hi Daniel,

This is more of a question rather than an issue really.

Is there a way to get your nfs module to set up NFS v3 and v4 shares/exports on a single NFS server? So far the only way to achieve that I found was a workaround like this:

  nfs::server::export { '/home':
    ensure  => present,
    clients => '*(sec=krb5:krb5i:krb5p,rw,blah_blah_blah)',
  }
  file_line { '/home NFS 3':
    path   => '/etc/exports',
    line   => '/home 10.0.0.0/24(rw,blah_blah_blah)',
    notify => Service[$::nfs::server_service_name],
  }

That works, but it is far from ideal: on every puppet agent run your nfs module overwrites /etc/exports, removing all file_line entries, and right afterwards the file_line resources modify it again, re-adding the missing entries.

Any hints much appreciated!

Cheers,
Adam

Multiple `clients` lines?

I have a dozen or more different client subnet entries that need to be added to the clients entry; instead of the documented string, I'd like to be able to use an array of strings:

clients => [
  '-rw',
  '1.2.3.0/24',
  '1.2.4.0/24',
  # etc.
],

Will this work to concatenate them all?
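As documented, `clients` is a plain string, so an array will most likely not concatenate on its own. A hedged workaround until array support exists is to build the string yourself with stdlib's `join()`:

```puppet
# Sketch: build the documented space-separated clients string from an
# array. The subnets and export options here are placeholders.
$client_entries = [
  '1.2.3.0/24(rw)',
  '1.2.4.0/24(rw)',
]

nfs::server::export { '/export/data':
  ensure  => present,
  clients => join($client_entries, ' '),
}
```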

metadata.json dependencies leads to errors on the commandline

This module is returning errors for our puppet module commands, running Puppet 3.8.2, even though the modules are there:

$ puppet module list >/dev/null
Warning: Missing dependency 'puppetlabs-concat':
  'derdanne-nfs' (v0.0.7) requires 'puppetlabs-concat' (>= 1.1.2)
Warning: Missing dependency 'puppetlabs-stdlib':
  'derdanne-nfs' (v0.0.7) requires 'puppetlabs-stdlib' (>= 4.5.0)

$ puppet module list
├── puppetlabs-concat (v1.2.3)
├── puppetlabs-inifile (v1.4.2)
└── puppetlabs-stdlib (v4.10.0)
$

If I modify nfs/metadata.json and replace the hyphens with slashes, the error goes away:

Old:

  "dependencies": [
    {"name":"puppetlabs-stdlib","version_requirement":">= 4.5.0"},
    {"name":"puppetlabs-concat","version_requirement":">= 1.1.2"}
  ]   

New:

  "dependencies": [
    {"name":"puppetlabs/stdlib","version_requirement":">= 4.5.0"},
    {"name":"puppetlabs/concat","version_requirement":">= 1.1.2"}
  ]
$ puppet module list >/dev/null
# Nothing here. All is good.
$

The example at https://docs.puppetlabs.com/puppet/latest/reference/modules_metadata.html#specifying-dependencies uses slash characters and not hyphens. I believe that slash characters are preferred, but I'm not 100% positive.

Make package resources compatible with ensure as a version

The current layout of packages in params.pp uses arrays of package names; if we want to install specific versions with the $server_package_ensure parameter, every package would be pinned to the same version.

We need another solution; maybe we can't define the package names as arrays in params.pp anymore.

RHEL7 NFS Server, required service rpcbind which was not preinstalled and started

I'm testing using this puppet nfs module to manage a nfs server for me.

When I puppet apply the following test case, I get errors due to rpcbind not pre-existing.

class { '::nfs':
  ensure                     => present,
  server_enabled             => true,
  nfs_v4                     => true,
  nfs_v4_export_root_clients => '10.0.0.0/8(rw,fsid=root,insecure,no_subtree_check,async,no_root_squash)',
}

nfs::server::export { '/home':
  ensure  => 'mounted',
  clients => '10.0.0.0/8(rw,insecure,no_subtree_check,async,no_root_squash) localhost(rw)',
}

I see the client class covers rpcbind. If I run the following client test before the server one above, the rpcbind errors are cleared:

class { '::nfs':
  server_enabled      => false,
  client_enabled      => true,
  nfs_v4              => true,
  nfs_v4_idmap_domain => $::domain,
}

Should I be configuring rpcbind myself, or use a separate puppet module for it when setting up a server? Please advise.

Arch Linux rpc.idmapd name changed

On Arch Linux, rpc.idmapd service is now called nfs-idmapd

Additionally, idmapd is no longer needed on the NFS client side (or so it appears to me after light research).

options changed 'rbind' to 'rbind' on every puppet-agent run

Hi Daniel,
this is more a question than a blocking issue:
On an nfsv4 server there are notices that every exported directory gets

options changed 'rbind' to 'rbind'

and is then remounted. Complete listing for one export:

Notice: /Stage[main]/Main/Node[ftp-repo]/Nfs::Server::Export[/ps/storage/mercury/mercury]/Nfs::Functions::Nfsv4_bindmount[/ps/storage/mercury/mercury]/Mount[/export/mercury]/options: options changed 'rbind' to 'rbind'
Info: /Stage[main]/Main/Node[ftp-repo]/Nfs::Server::Export[/ps/storage/mercury/mercury]/Nfs::Functions::Nfsv4_bindmount[/ps/storage/mercury/mercury]/Mount[/export/mercury]: Scheduling refresh of Mount[/export/mercury]
Info: Mount[/export/mercury](provider=parsed): Remounting
Notice: /Stage[main]/Main/Node[ftp-repo]/Nfs::Server::Export[/ps/storage/mercury/mercury]/Nfs::Functions::Nfsv4_bindmount[/ps/storage/mercury/mercury]/Mount[/export/mercury]: Triggered 'refresh' from 1 events
Info: /Stage[main]/Main/Node[ftp-repo]/Nfs::Server::Export[/ps/storage/mercury/mercury]/Nfs::Functions::Nfsv4_bindmount[/ps/storage/mercury/mercury]/Mount[/export/mercury]: Scheduling refresh of Mount[/export/mercury]

Module
derdanne/nfs v 1.0.1

puppetcode:

class { '::nfs':
  server_enabled => true,
  nfs_v4         => true,
}
...
nfs::server::export { '/ps/storage/mercury/mercury':
  ensure         => 'mounted',
  clients        => 'mercury-*(rw,insecure,no_subtree_check,async,no_root_squash)',
  v4_export_name => 'mercury',
}

fstab (filled automatically):

/ps/storage/mercury/mercury     /export/mercury none    rbind   0       0

System:
centos6

Thank you

Add an option to disable exported resources

When using puppet without a puppet master you might not want to configure storeconfigs, but in this case puppet will print warnings : "Warning: You cannot collect exported resources without storeconfigs being set" for export.pp:145. Could it be possible to add an option allowing to avoid creating exported resources ?
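One way such an option could look (a sketch only; the boolean parameter and its wiring are hypothetical, and only two of the exported resource's parameters are shown) is to guard the collector/export in export.pp:

```puppet
# Sketch: guard the exported resource behind a hypothetical boolean
# parameter so masterless setups without storeconfigs can opt out.
if $::nfs::storeconfigs_enabled {
  @@nfs::client::mount { "shared ${v4_export_name} by ${::clientcert}":
    share  => $v4_export_name,
    server => $::clientcert,
  }
}
```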

Customize idmapd.conf and other options

Any chance for support to set Local-Realms and Cache-Expiration in /etc/idmapd.conf?
Currently only Domain is supported.
Also customizing GSSDOPTS in defaults config would be appreciated.

Duplicate declaration: Nfs::Client::Mount[/srv] for mount to /srv

I'm getting the following error when trying to set the mountpoint to /srv when storeconfigs_enabled is true

Evaluation Error:
  Error while evaluating a Resource Statement, Evaluation Error:
    Error while evaluating a Resource Statement, Duplicate declaration:
      Nfs::Client::Mount[/srv] is already declared at
        (file: puppet-nfs/manifests/server/config.pp, line: 75);
      cannot redeclare
        (file: puppet-nfs/manifests/server/export.pp, line: 136)
        (file: puppet-nfs/manifests/server/export.pp, line: 136, column: 7)

you can replicate it with something like following:

nfs::server::export { '/some/path':
  ensure => 'mounted',
  mount  => '/srv',
}

how do i define mounts via hiera for an nfs client?

I'm trying to install and mount some nfsv3 mounts client side but can't get it to run. It just doesn't detect that the nfs class is applied to my node.

classes:
  - nfs

Given the following mount point I'd expect it to mount, but I don't think I have the syntax right.

nfs::client::mount:
  '/opt/software':
    server: '192.168.10.12'
    share: "/mnt/VAULT/Software"
    ensure: mounted
    options: 'rw,nolock,hard,intr'
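Since nfs::client::mount is a defined type, hiera will not apply it automatically via hiera_include; a small wrapper that feeds a hiera hash into create_resources should work (a sketch; the `nfs::client::mounts` lookup key is a made-up name, not a module parameter):

```puppet
# Sketch: defined types are not auto-bound from hiera, so collect a
# hash under a hypothetical 'nfs::client::mounts' key and declare them.
$mounts = hiera_hash('nfs::client::mounts', {})
create_resources('nfs::client::mount', $mounts)
```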

Your name causes UTF8 problems :)

It appears that the umlaut character in your name causes problems. Puppet manifests are supposed to be ASCII-only, as confirmed in #puppet by _rc (Richard Clamp).

The error looks something like this:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: invalid byte sequence in US-ASCII at /path/to/modules/nfs/manifests/params.pp:1 on node cxpxrbx01pmqs01

A work-around is to set HTTPD_LANG to a UTF8 locale in a relevant place (/etc/sysconfig/pe-httpd in my case, as we're using PE), e.g.

HTTPD_LANG=en_GB.UTF8

You might like to consider removing the umlaut from your name in puppet manifest files?

rquotad service support

Would you be interested in a patch to add rpc.rquotad support to support remote quota queries, or is this something you see as not part of the remit of this module?

If so, would you want this as a specific option to enable rquotad or something using a hash of server_services similar to how the client_services hash works?

Happy to provide a patch if that would be useful :)

(Currently I'm just manually adding a service entry to ensure this is running outside of the module)

systemctl needs to have a path

In my agent runs I get:

Failed to apply catalog: Validation of Exec[systemctl daemon-reload] failed: 'systemctl daemon-reload' is not qualified and no path was specified. Please qualify the command or specify a path. at /etc/puppetlabs/code/environments/pidev/modules/nfs/manifests/client/config.pp:56

https://github.com/derdanne/puppet-nfs/blob/4db2627bdeec773bae36576151e399054786302d/manifests/client/config.pp#L56 indeed needs to either say /bin/systemctl daemon-reload or have a path parameter.
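The fix could be as small as giving the exec a path (a sketch of what the resource at config.pp:56 might become; the surrounding resource name and refreshonly behaviour are assumed from the error message):

```puppet
# Sketch: qualify systemctl via a 'path' so resource validation no
# longer fails on the unqualified command.
exec { 'systemctl daemon-reload':
  command     => 'systemctl daemon-reload',
  path        => ['/bin', '/usr/bin'],
  refreshonly => true,
}
```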

RHEL7, nfs-idmap failed to start glitch

When I run the NFSv4 server test install:

class { '::nfs':
  ensure                     => present,
  server_enabled             => true,
  nfs_v4                     => true,
  nfs_v4_export_root_clients => '10.0.0.0/8(rw,fsid=root,insecure,no_subtree_check,async,no_root_squash)',
}

I get the below error that nfs-idmap failed to start.
Error: Could not start Service[nfs-idmap]: Execution of '/bin/systemctl start nfs-idmap' returned 1: Job for nfs-idmapd.service failed. See 'systemctl status nfs-idmapd.service' and 'journalctl -xn' for details.
Wrapped exception:
Execution of '/bin/systemctl start nfs-idmap' returned 1: Job for nfs-idmapd.service failed. See 'systemctl status nfs-idmapd.service' and 'journalctl -xn' for details.
Error: /Stage[main]/Nfs::Server::Service/Service[nfs-idmap]/ensure: change from stopped to running failed: Could not start Service[nfs-idmap]: Execution of '/bin/systemctl start nfs-idmap' returned 1: Job for nfs-idmapd.service failed. See 'systemctl status nfs-idmapd.service' and 'journalctl -xn' for details.
Notice: /Stage[main]/Nfs::Server::Service/Service[nfs-idmap]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Nfs::Server::Service/Service[nfs-server]/ensure: ensure changed 'stopped' to 'running'
Notice: /Stage[main]/Nfs::Server/Anchor[nfs::server::end]: Dependency Service[nfs-idmap] has failures: true
Warning: /Stage[main]/Nfs::Server/Anchor[nfs::server::end]: Skipping because of failed dependencies
Notice: Finished catalog run in 0.49 seconds

On investigation, a check of /bin/systemctl status nfs-idmap shows that the service has been started. A rerun of the puppet apply of tests/server.pp above runs successfully, finishing the server config.

Is there possibly a timing issue here?

nfs-kernel-server.service appears "stopped"

I've noticed recently that the Puppet agent running on our NFS server nodes seems to think that nfs-kernel-server.service is not running, even though it is. We're seeing this output from the agent each time it runs:

Notice: /Stage[main]/Nfs_server_exports/Service[nfs-kernel-server.service]/ensure: ensure changed 'stopped' to 'running' (corrective)
Info: /Stage[main]/Nfs_server_exports/Service[nfs-kernel-server.service]: Unscheduling refresh on Service[nfs-kernel-server.service]

It doesn't look like anything actually happens though (there's no service restart as far as I can see). I should say that this is also happening on our machines with a different, unrelated service. So maybe it's not specific to the puppet-nfs module. We are running Puppet Server 6.3.0, Puppet agent 6.4.0, on Debian 9.9.

This isn't a big problem, just thought it might be worth reporting.

Service[rpcbind.service]/enable: enable changed 'false' to 'true' on every run.

Hello,

On CentOS 7 I get this in every run:
Debug: Executing: '/bin/systemctl is-enabled rpcbind.service'
Notice: /Stage[main]/Nfs::Client::Service/Service[rpcbind.service]/enable: enable changed 'false' to 'true'

If I run is-enabled:

myhost:~# systemctl is-enabled rpcbind.service
indirect

There are a few related issues:
https://tickets.puppetlabs.com/browse/PUP-7163
https://tickets.puppetlabs.com/browse/PUP-6370
https://tickets.puppetlabs.com/browse/PUP-6759

After reading them, I believe enable for rpcbind should be set to false.
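If the module went that route, the service declaration would look roughly like this (a sketch; the flapping occurs because affected Puppet versions do not recognise the systemd 'indirect' state and treat the unit as disabled):

```puppet
# Sketch: keep rpcbind running but stop forcing 'enable', since the
# 'indirect' unit state confuses Puppet's is-enabled check and causes
# the reported 'false' to 'true' change on every run.
service { 'rpcbind.service':
  ensure => running,
  enable => false,
}
```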

can you make nfs4 bindmount optional

Hi

We are using this module for our nfs server, but we are having problems with the bindmount of the to-be-exported filesystem. All our filesystems are on LVM and are managed by the puppetlabs/lvm module, but this module also mounts the filesystem.
As a workaround I have made a copy of export.pp and excluded the following:

nfs::functions::nfsv4_bindmount { $name:
  ensure         => $ensure,
  v4_export_name => $v4_export_name,
  bind           => $bind,
}

But this is far from optimal. Would it be possible to add a boolean that makes mounting this nfs export optional?
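A possible shape for that option (a sketch only; `$manage_bindmount` is a hypothetical parameter that would need to be added to nfs::server::export):

```puppet
# Sketch: wrap the bindmount in a hypothetical boolean so filesystems
# already managed elsewhere (e.g. by puppetlabs/lvm) are left alone.
if $manage_bindmount {
  nfs::functions::nfsv4_bindmount { $name:
    ensure         => $ensure,
    v4_export_name => $v4_export_name,
    bind           => $bind,
  }
}
```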

I hope this is possible.

Thanks

Could not find resource 'Augeas[/etc/idmapd.conf]'

On Puppet 4.10.5 and 5.3.2 with the following parameters I get the message:

Could not find resource 'Augeas[/etc/idmapd.conf]' in parameter 'subscribe'

This happens when I call the nfs class with the following parameters:

class { '::nfs':
  server_enabled         => true,
  client_enabled         => true,
  client_services_enable => true,
  nfs_v4_client          => true,
}

package ensure shouldn't be hardcoded

Right now the packages have ensure => installed. This should change to something configurable, e.g. ensure => $server_package_ensure, which should also trigger a restart of the relevant services.
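A sketch of what that could look like ($server_packages and $server_service_name are assumed names standing in for whatever params.pp actually defines):

```puppet
# Sketch: make the package ensure configurable and restart the server
# service when packages change. Variable names are assumptions.
package { $server_packages:
  ensure => $server_package_ensure,
  notify => Service[$server_service_name],
}
```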

Questions on Client Mount on v4 using Hiera

Hi Daniel,
This is more of a question rather than an issue really.
Thanks for your great work on this module! :-) I would like to make sure this section works as expected with NFSv4. I was trying to use this on a client running Redhat 6.8.

For the below config, hiera is working on the server but not on the client. The hiera call on the server works as expected, but on the client, although the binary gets installed and the configs get updated, the target NFS mount never appears; it looks like an export is missing here. It would be great if you could validate the below configs.

class profiles::nfsserver {
  node 'nodnfs.posc.com' {
    hiera_include('classes')
    $nfs_exports_global = hiera_hash('nfs::nfs_exports_global', false)
    $defaults_nfs_exports = {
      ensure  => 'mounted',
      clients => '192.168.21.42(rw,insecure,no_subtree_check,async,no_root_squash)',
      nfstag  => $::fqdn,
    }
  }
  if $nfs_exports_global {
    create_resources('::nfs::server::export', $nfs_exports_global, $defaults_nfs_exports)
  }

  node 'nodnfsclient.posc.com' {
    hiera_include('classes')
    $nfs_server = hiera('nfs::nfs_server', false)
    if $nfs_server {
      Nfs::Client::Mount <<| nfstag == $nfs_server |>>
    }
  }
}

Client hiera setting:

classes:
  - nfs

nfs::client_enabled: true
nfs::nfs_v4_client: true
nfs::nfs_v4_idmap_domain: '%{::domain}'
nfs::nfs_v4_mount_root: '/application'
nfs::nfs_server: 'nodnfs.posc.com'

Server hiera setting:

classes:
  - nfs

nfs::server_enabled: true
nfs::client_enabled: false
nfs::nfs_v4: true
nfs::nfs_v4_idmap_domain: '%{::domain}'
nfs::nfs_v4_export_root: '/application'
nfs::nfs_v4_export_root_clients: '192.168.21.42(rw,sync,no_root_squash,no_all_squash)'

Client puppet run output:
Notice: Local environment: 'development' doesn't match server specified node environment 'nfs', switching agent to 'nfs'.
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for nodnfsclient.posc.com
Info: Applying configuration version 'a84f0d45f11b9f51c4b52318817c826430384baa'
Notice: /Stage[main]/Nfs::Client::Package/Package[nfs-utils]/ensure: created
Info: /Stage[main]/Nfs::Client::Package/Package[nfs-utils]: Scheduling refresh of Service[rpcbind]
Info: /Stage[main]/Nfs::Client::Package/Package[nfs-utils]: Scheduling refresh of Service[rpcidmapd]
Notice: /Stage[main]/Nfs::Client::Package/Package[nfs4-acl-tools]/ensure: created
Info: /Stage[main]/Nfs::Client::Package/Package[nfs4-acl-tools]: Scheduling refresh of Service[rpcbind]
Info: /Stage[main]/Nfs::Client::Package/Package[nfs4-acl-tools]: Scheduling refresh of Service[rpcidmapd]
Notice: Augeas[/etc/idmapd.conf]:
--- /etc/idmapd.conf	2017-03-22 15:03:08.000000000 -0700
+++ /etc/idmapd.conf.augnew	2017-04-18 14:48:42.536398520 -0700
@@ -38,6 +38,7 @@
 # must be included in the list!
 #Local-Realms =
+Domain=posc.com
 [Mapping]
 Nobody-User = nobody

Notice: /Stage[main]/Nfs::Client::Config/Augeas[/etc/idmapd.conf]/returns: executed successfully
Info: /Stage[main]/Nfs::Client::Config/Augeas[/etc/idmapd.conf]: Scheduling refresh of Service[rpcbind]
Info: /Stage[main]/Nfs::Client::Config/Augeas[/etc/idmapd.conf]: Scheduling refresh of Service[rpcidmapd]
Notice: /Stage[main]/Nfs::Client::Service/Service[rpcbind]: Triggered 'refresh' from 3 events
Notice: /Stage[main]/Nfs::Client::Service/Service[rpcidmapd]: Triggered 'refresh' from 3 events
Notice: Applied catalog in 16.25 seconds

Error on upgrading to module version 2.1.2

Hi, we're running into trouble when we upgrade derdanne-nfs from v2.0.10 to v2.1.2 on our Puppet server. Our machines are running Debian 9 (stretch), and our Puppet server and nodes are running Puppet version 6.2.

We've a simple NFS server config set up on a test server; this server is a Puppet node that declares the "nfs" class and gets its small amount of data via Hiera. Everything works ok under v2.0.10. For the record, the Hiera data is as follows:

nfs::server_enabled: true
nfs::client_enabled : false
nfs::nfs_v4: true
nfs::nfs_v4_idmap_domain: 'scss.tcd.ie'
nfs::nfs_v4_export_root: '/share'
nfs::nfs_v4_export_root_clients: 'testclient.scss.tcd.ie(rw,fsid=root,insecure,no_subtree_check,async,no_root_squash)'

However, after upgrading to 2.1.2, the agent run on the test NFS server throws this error:

Error: /Stage[main]/Nfs::Server::Config/Augeas[/etc/idmapd.conf]: Could not evaluate: undefined method 'strip!' for nil:NilClass

Would greatly appreciate any assistance.

Thanks
Stephen ([email protected])

Puppet Stuck Restarting RPCBind.Socket on Every Run

Hi,

I setup an NFS v3 client using the following configuration on CentOS 7.4.1708:

class { '::nfs':
  client_enabled => true,
}

nfs::client::mount { $target_directory:
  server      => $nfs_server,
  share       => $nfs_share,
  remounts    => $nfs_remounts,
  atboot      => $nfs_atboot,
  options_nfs => $nfs_options,
}

After Puppet installs the nfs-utils and nfs4-acl-tools packages, it attempts to restart the rpcbind.service and rpcbind.socket services. The rpcbind.service service is restarted successfully, but rpcbind.socket refuses with the following error:

Notice: /Stage[main]/Nfs::Client::Package/Package[nfs-utils]/ensure: created
Info: /Stage[main]/Nfs::Client::Package/Package[nfs-utils]: Scheduling refresh of Service[rpcbind.service]
Info: /Stage[main]/Nfs::Client::Package/Package[nfs-utils]: Scheduling refresh of Service[rpcbind.socket]
Notice: /Stage[main]/Nfs::Client::Package/Package[nfs4-acl-tools]/ensure: created
Info: /Stage[main]/Nfs::Client::Package/Package[nfs4-acl-tools]: Scheduling refresh of Service[rpcbind.service]
Info: /Stage[main]/Nfs::Client::Package/Package[nfs4-acl-tools]: Scheduling refresh of Service[rpcbind.socket]
Notice: /Stage[main]/Nfs::Client::Service/Service[rpcbind.service]: Triggered 'refresh' from 2 events
Error: /Stage[main]/Nfs::Client::Service/Service[rpcbind.socket]: Failed to call refresh: Systemd restart for rpcbind.socket failed!
journalctl log for rpcbind.socket:
-- Logs begin at Wed 2018-05-30 13:20:59 CDT, end at Mon 2018-06-04 14:01:38 CDT. --
Jun 04 14:01:38 systemd[1]: Listening on RPCbind Server Activation Socket.
Jun 04 14:01:38 systemd[1]: Starting RPCbind Server Activation Socket.
Jun 04 14:01:38 systemd[1]: Stopping RPCbind Server Activation Socket.
Jun 04 14:01:38 systemd[1]: Socket service rpcbind.service already active, refusing.
Jun 04 14:01:38 systemd[1]: Failed to listen on RPCbind Server Activation Socket.
Error: /Stage[main]/Nfs::Client::Service/Service[rpcbind.socket]: Systemd restart for rpcbind.socket failed!
journalctl log for rpcbind.socket:
-- Logs begin at Wed 2018-05-30 13:20:59 CDT, end at Mon 2018-06-04 14:01:38 CDT. --
Jun 04 14:01:38 systemd[1]: Listening on RPCbind Server Activation Socket.
Jun 04 14:01:38 systemd[1]: Starting RPCbind Server Activation Socket.
Jun 04 14:01:38 systemd[1]: Stopping RPCbind Server Activation Socket.
Jun 04 14:01:38 systemd[1]: Socket service rpcbind.service already active, refusing.
Jun 04 14:01:38 systemd[1]: Failed to listen on RPCbind Server Activation Socket.
Info: Class[Nfs::Client::Service]: Unscheduling all events on Class[Nfs::Client::Service]
Notice: /Stage[main]/Nfs::Client/Anchor[nfs::client::end]: Dependency Service[rpcbind.socket] has failures: false
Warning: /Stage[main]/Nfs::Client/Anchor[nfs::client::end]: Skipping because of failed dependencies

Subsequent Puppet runs result in the same error trying to refresh the rpcbind services. If I manually stop the rpcbind.service service and run puppet, puppet starts the services as expected and the errors cease.

However, manually attempting to restart the rpcbind.socket service with systemctl restart rpcbind.socket yields the same error,

Socket service rpcbind.service already active, refusing.

As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1546700#c2, it appears this behavior is expected.

when "service rpcbind.service (is) already active", the
"rpcbind.socket" restarting will be refused now. If we stop
"rpcbind.service" by hand and then restart "rpcbind.socket", NO hang
occurs.

However, I'm struggling to get a clean puppet run without manual intervention with this being the case. For what it's worth, I'm running Puppet Enterprise 2017.3.2. Any thoughts or assistance would be greatly appreciated.
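One way to automate the stop-then-restart dance described in the Red Hat bug (a workaround sketch only, not a proper fix) would be a refreshonly exec in place of refreshing the socket unit directly:

```puppet
# Workaround sketch: systemd refuses to restart rpcbind.socket while
# rpcbind.service is active, so stop the service first, then bring
# both back up. Runs only when notified.
exec { 'restart rpcbind.socket':
  command     => 'systemctl stop rpcbind.service && systemctl restart rpcbind.socket && systemctl start rpcbind.service',
  path        => ['/bin', '/usr/bin'],
  refreshonly => true,
}
```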

Thanks,

CE

Nfs::Client::Mount documentation

In the README, in order to mount an NFSv4 share using hiera on a client, it's specified:

node client {
    hiera_include('classes')
    $nfs_server = hiera('nfs::nfs_server', false)
    
    if $nfs_server {
      Nfs::Client::Mount <<| nfstag == $nfs_server |>>
    }
  }

I have the feeling it should be Nfs::Client::Mount <<| server == $nfs_server |>>, as nfstag is the name of the mount, not the server location.
However, after changing this, the function nfs::client::mount is not getting called.
We can get around this by calling the mount function manually:

node client {
...
    nfs::client::mount { '/data':
        server => 'our-server',
        share => 'data',
    }
}

Is it intended?

Furthermore, using Nfs::Client::Mount::Nfs_v4::Root does not work, either:

Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating an Exported Query, Resource type nfs::client::mount::nfs_v4::root doesn't exist (file: /etc/puppetlabs/code/environments/development/manifests/site.pp, line: 129, column: 3) on node blade-1.tier2

cc @juztas

Ubuntu 18.04 Support

So I have tried this module out on Ubuntu 18.04.

I know it does not officially support 18.04, but I thought it would be good to help get 18.04 supported.

There exists the same issue as in: #57

i.e. that 18.04 uses the nfs-common package instead of idmapd.

I'm going to try hacking this into the module locally and see what other issues come up.
