voxpupuli / puppet-archive

Compressed archive file download and extraction with native types/providers for Windows and Unix

Home Page: https://forge.puppet.com/puppet/archive

License: Apache License 2.0

Languages: Ruby 84.55%, Puppet 15.45%

Topics: linux-puppet-module, puppet, windows-puppet-module, hacktoberfest, centos-puppet-module, debian-puppet-module, oraclelinux-puppet-module, redhat-puppet-module, scientific-puppet-module, sles-puppet-module

puppet-archive's Introduction

Puppet Archive

Table of Contents

  1. Overview
  2. Module Description
  3. Setup
  4. Usage
  5. Reference
  6. Development

Overview

This module manages download, deployment, and cleanup of archive files.

Module Description

This module uses types and providers to download and manage compressed files, with optional lifecycle functionality such as checksum verification, extraction, and cleanup. The benefits over existing modules such as puppet-staging:

  • Implemented via types and providers instead of exec resources.
  • Follows 302 redirects and propagates download failures.
  • Optional checksum verification of archive files.
  • Automatic dependency on the parent directory.
  • Supports Windows file extraction via 7zip or PowerShell (zip files only).
  • Able to clean up archive files after extraction.

This module is compatible with camptocamp/archive, for which it provides compatibility shims.

Setup

On Windows, 7zip is required to extract all archives except zip files, which will be extracted with PowerShell if 7zip is not available (requires System.IO.Compression.FileSystem, i.e. Windows 2012+). Windows clients can install 7zip via include 'archive'. On POSIX systems, curl is the default provider. The default provider can be overridden by configuring resource defaults in site.pp:

Archive {
  provider => 'ruby',
}

Users of the module are responsible for archive package dependencies, for alternative providers, and for all extraction utilities such as tar, gunzip, and bunzip:

if $facts['osfamily'] != 'windows' {
  package { 'wget':
    ensure => present,
  }

  package { 'bunzip':
    ensure => present,
  }

  Archive {
    provider => 'wget',
    require  => Package['wget', 'bunzip'],
  }
}

Usage

Archive module dependencies are managed by the archive class, which is only required on Windows. By default 7zip is installed via Chocolatey, but the MSI package can be installed instead:

class { 'archive':
  seven_zip_name     => '7-Zip 9.20 (x64 edition)',
  seven_zip_source   => 'C:/Windows/Temp/7z920-x64.msi',
  seven_zip_provider => 'windows',
}

To automatically load archives as part of this class, you can define the archives parameter.

class { 'archive':
  archives => {
    '/tmp/jta-1.1.jar' => {
      'ensure' => 'present',
      'source' => 'http://central.maven.org/maven2/javax/transaction/jta/1.1/jta-1.1.jar',
    },
  },
}

Usage Example

A simple example that downloads from a web server:

archive { '/tmp/vagrant.deb':
  ensure => present,
  source => 'https://releases.hashicorp.com/vagrant/2.2.3/vagrant_2.2.3_x86_64.deb',
  user   => 0,
  group  => 0,
}

A more complex example:

include 'archive' # NOTE: optional for posix platforms

archive { '/tmp/jta-1.1.jar':
  ensure        => present,
  extract       => true,
  extract_path  => '/tmp',
  source        => 'http://central.maven.org/maven2/javax/transaction/jta/1.1/jta-1.1.jar',
  checksum      => '2ca09f0b36ca7d71b762e14ea2ff09d5eac57558',
  checksum_type => sha1,
  creates       => '/tmp/javax',
  cleanup       => true,
}

archive { '/tmp/test100k.db':
  source   => 'ftp://ftp.otenet.gr/test100k.db',
  username => 'speedtest',
  password => 'speedtest',
}

If you want to extract a .tar.gz file:

$install_path        = '/opt/wso2'
$package_name        = 'wso2esb'
$package_ensure      = '4.9.0'
$repository_url      = 'http://company.com/repository/wso2'
$archive_name        = "${package_name}-${package_ensure}.tgz"
$wso2_package_source = "${repository_url}/${archive_name}"

archive { $archive_name:
  path         => "/tmp/${archive_name}",
  source       => $wso2_package_source,
  extract      => true,
  extract_path => $install_path,
  creates      => "${install_path}/${package_name}-${package_ensure}",
  cleanup      => true,
  require      => File['wso2_appdir'],
}

Puppet URL

Since March 2017, the Archive type also supports puppet URLs. Here is an example of how to use this:

archive { '/home/myuser/help':
  source       => 'puppet:///modules/profile/help.tar.gz',
  extract      => true,
  extract_path => $homedir,
  creates      => "${homedir}/help", # directory inside the tgz
}

File permissions

When extracting files as a non-root user, either ensure the target directory exists with the appropriate permissions (see tomcat.pp for a full working example):

$dirname = 'apache-tomcat-9.0.0.M3'
$filename = "${dirname}.zip"
$install_path = "/opt/${dirname}"

file { $install_path:
  ensure => directory,
  owner  => 'tomcat',
  group  => 'tomcat',
  mode   => '0755',
}

archive { $filename:
  path          => "/tmp/${filename}",
  source        => 'http://www-eu.apache.org/dist/tomcat/tomcat-9/v9.0.0.M3/bin/apache-tomcat-9.0.0.M3.zip',
  checksum      => 'f2aaf16f5e421b97513c502c03c117fab6569076',
  checksum_type => sha1,
  extract       => true,
  extract_path  => '/opt',
  creates       => "${install_path}/bin",
  cleanup       => true,
  user          => 'tomcat',
  group         => 'tomcat',
  require       => File[$install_path],
}

or use a subscribing exec to chown the directory afterwards:

$dirname = 'apache-tomcat-9.0.0.M3'
$filename = "${dirname}.zip"
$install_path = "/opt/${dirname}"

file { '/opt/tomcat':
  ensure => 'link',
  target => $install_path
}

archive { $filename:
  path          => "/tmp/${filename}",
  source        => 'http://www-eu.apache.org/dist/tomcat/tomcat-9/v9.0.0.M3/bin/apache-tomcat-9.0.0.M3.zip',
  checksum      => 'f2aaf16f5e421b97513c502c03c117fab6569076',
  checksum_type => sha1,
  extract       => true,
  extract_path  => '/opt',
  creates       => "${install_path}/bin",
  cleanup       => true,
  require       => File[$install_path],
}

exec { 'tomcat permission':
  command   => "chown -R tomcat:tomcat ${install_path}",
  path      => $path,
  subscribe => Archive[$filename],
}

Network files

For large binary files that need to be extracted locally, instead of copying the file from the network file share, simply set the file path to be the same as the source, and archive will use the network file location:

archive { '/nfs/repo/software.zip':
  source        => '/nfs/repo/software.zip',
  extract       => true,
  extract_path  => '/opt',
  checksum_type => none,   # typically unnecessary
  cleanup       => false,  # keep the file on the server
}

Extract Customization

The extract_flags or extract_command parameters can be used to override the default extraction command/flags (defaults are specified in archive.rb).

# tar stripping directories:
archive { '/var/lib/kafka/kafka_2.10-0.8.2.1.tgz':
  ensure          => present,
  extract         => true,
  extract_command => 'tar xfz %s --strip-components=1',
  extract_path    => '/opt/kafka_2.10-0.8.2.1',
  cleanup         => true,
  creates         => '/opt/kafka_2.10-0.8.2.1/config',
}

# zip freshen existing files (zip -of %s instead of zip -o %s):
archive { '/var/lib/example.zip':
  extract       => true,
  extract_path  => '/opt',
  extract_flags => '-of',
  cleanup       => true,
  subscribe     => ...,
}

S3 bucket

S3 support is implemented via the AWS CLI. On non-Windows systems, the archive class will install this dependency when the aws_cli_install parameter is set to true:

class { 'archive':
  aws_cli_install => true,
}

# See AWS cli guide for credential and configuration settings:
# http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
file { '/root/.aws/credentials':
  ensure => file,
  ...
}
file { '/root/.aws/config':
  ensure => file,
  ...
}

archive { '/tmp/gravatar.png':
  ensure => present,
  source => 's3://bodecoio/gravatar.png',
}

NOTE: Alternative S3 provider support can be implemented by overriding the s3_download method.

GS bucket

Google Cloud Storage support is implemented via the gsutil package. On non-Windows systems, the archive class will install this dependency when the gsutil_install parameter is set to true:

class { 'archive':
  gsutil_install => true,
}

# See Google Cloud SDK cli guide for credential and configuration settings:
# https://cloud.google.com/storage/docs/quickstart-gsutil

archive { '/tmp/gravatar.png':
  ensure => present,
  source => 'gs://bodecoio/gravatar.png',
}

Passing headers

Sometimes headers need to be passed to the source. This can be accomplished using the headers parameter:

archive { '/tmp/slack-desktop-4.28.184-amd64.deb':
  ensure        => present,
  extract       => true,
  extract_path  => '/tmp',
  source        => 'https://downloads.slack-edge.com/releases/linux/4.28.184/prod/x64/slack-desktop-4.28.184-amd64.deb',
  checksum      => 'e5d63dc6bd112d40c97f210af4c5f66444d4d5e8',
  checksum_type => sha1,
  headers       => ['Authorization: OAuth ABC123'],
  creates       => '/usr/local/bin/slack',
  cleanup       => true,
}

Download customizations

In some cases you may need custom flags for curl/wget/s3/gsutil, which can be supplied via download_options. Since this parameter is provider specific, beware of the order of defaults:

  • s3:// files accept AWS CLI options

    archive { '/tmp/gravatar.png':
      ensure           => present,
      source           => 's3://bodecoio/gravatar.png',
      download_options => ['--region', 'eu-central-1'],
    }
  • puppet provider override:

    archive { '/tmp/jta-1.1.jar':
      ensure           => present,
      source           => 'http://central.maven.org/maven2/javax/transaction/jta/1.1/jta-1.1.jar',
      provider         => 'wget',
      download_options => '--continue',
    }
  • The default provider on Linux is curl and on Windows is ruby (where download_options has no effect).

This option can also be applied globally to address issues on a specific OS:

if $facts['osfamily'] != 'RedHat' {
  Archive {
    download_options => '--tlsv1',
  }
}

Migrating from puppet-staging

It is recommended to use puppet-archive instead of puppet-staging. Users wishing to migrate may find the following examples useful.

puppet-staging (without extraction)

class { 'staging':
  path  => '/tmp/staging',
}

staging::file { 'master.zip':
  source => 'https://github.com/voxpupuli/puppet-archive/archive/master.zip',
}

puppet-archive (without extraction)

archive { '/tmp/staging/master.zip':
  source => 'https://github.com/voxpupuli/puppet-archive/archive/master.zip',
}

puppet-staging (with zip file extraction)

class { 'staging':
  path  => '/tmp/staging',
}

staging::file { 'master.zip':
  source  => 'https://github.com/voxpupuli/puppet-archive/archive/master.zip',
} ->
staging::extract { 'master.zip':
  target  => '/tmp/staging/master.zip',
  creates => '/tmp/staging/puppet-archive-master',
}

puppet-archive (with zip file extraction)

archive { '/tmp/staging/master.zip':
  source       => 'https://github.com/voxpupuli/puppet-archive/archive/master.zip',
  extract      => true,
  extract_path => '/tmp/staging',
  creates      => '/tmp/staging/puppet-archive-master',
  cleanup      => false,
}

Reference

Classes

  • archive: installs the 7zip package (Windows only) and the AWS CLI or gsutil for s3/gs support. It also permits passing an archives argument to generate archive resources.
  • archive::staging: installs package dependencies and creates a staging directory for backwards compatibility. Use the archive class instead if you do not need the staging directory; a minimal declaration is sketched after this list.
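
A minimal sketch of declaring the compatibility class; the path parameter and its value are assumptions based on puppet-staging conventions, not confirmed by this README:

# Compatibility shim only; prefer the archive class when the staging
# directory is not needed. 'path' is an assumed parameter name.
class { 'archive::staging':
  path => '/opt/staging',
}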

Define Resources

  • archive::artifactory: archive wrapper for JFrog Artifactory files with checksum.
  • archive::go: archive wrapper for GO Continuous Delivery files with checksum.
  • archive::nexus: archive wrapper for Sonatype Nexus files with checksum.
  • archive::download: archive wrapper and compatibility shim for camptocamp/archive. This is considered private API, as it has to change with camptocamp/archive; for this reason it will remain undocumented and will be removed when no longer needed. We suggest not using it directly; instead, please consider migrating to archive itself where possible.

Resources

Archive

  • ensure: whether archive file should be present/absent (default: present)
  • path: namevar, archive file fully qualified file path.
  • filename: archive file name (derived from path).
  • source: archive file source, supports http|https|ftp|file|s3|gs uri.
  • headers: array of headers to pass to the source, such as an authentication token.
  • username: username to download source file.
  • password: password to download source file.
  • allow_insecure: Ignore HTTPS certificate errors (true|false). (default: false)
  • cookie: archive file download cookie.
  • checksum_type: archive file checksum type (none|md5|sha1|sha2|sha256|sha384|sha512). (default: none)
  • checksum: archive file checksum (match checksum_type)
  • checksum_url: archive file checksum source (instead of specifying checksum).
  • checksum_verify: whether checksum will be verified (true|false). (default: true)
  • extract: whether archive will be extracted after download (true|false). (default: false)
  • extract_path: target folder path to extract archive.
  • extract_command: custom extraction command ('tar xvf example.tar.gz'); also supports sprintf format ('tar xvf %s'), which will be processed with the filename: sprintf('tar xvf %s', filename).
  • temp_dir: specify an alternative temporary directory to use for copying files; if unset, the operating system default will be used.
  • extract_flags: custom extraction options; these replace the default flags. A string such as 'xvf' for a tar file would replace the default xf flag. A hash is useful when custom flags are needed for different platforms, e.g. {'tar' => 'xzf', '7z' => 'x -aot'}.
  • user: extract command user (using this option will configure the archive file permission to 0644 so the user can read the file).
  • group: extract command group (using this option will configure the archive file permission to 0644 so the user can read the file).
  • cleanup: whether archive file will be removed after extraction (true|false). (default: true)
  • creates: if file/directory exists, will not download/extract archive. If extract and cleanup are both true, this should be set to prevent Puppet from re-downloading and re-extracting the archive every run.
  • proxy_server: specify a proxy server, with port number if needed. ie: https://example.com:8080.
  • proxy_type: proxy server type (none|http|https|ftp). A combined example using several of these parameters follows this list.
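
A minimal sketch combining several of the parameters above; the URLs, proxy, and paths are placeholders rather than real endpoints:

# Download through a proxy, verify against a published checksum file,
# then extract and remove the downloaded archive.
archive { '/tmp/app.tar.gz':
  ensure        => present,
  source        => 'https://repo.example.com/app.tar.gz',       # placeholder URL
  checksum_url  => 'https://repo.example.com/app.tar.gz.sha1',  # placeholder URL
  checksum_type => 'sha1',
  proxy_server  => 'https://proxy.example.com:8080',            # placeholder proxy
  proxy_type    => 'https',
  extract       => true,
  extract_path  => '/opt',
  creates       => '/opt/app',
  cleanup       => true,
}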

Archive::Artifactory

  • path: fully qualified file path for the downloaded file, or use archive_path and supply only the filename (namevar).
  • ensure: ensure the file is present/absent.
  • url: artifactory download URL for the file. NOTE: replaces the server, port, and url_path parameters.
  • server: artifactory server name (deprecated).
  • port: artifactory server port (deprecated).
  • url_path: artifactory file path http://{server}:{port}/artifactory/{url_path} (deprecated).
  • owner: file owner (see archive params for defaults).
  • group: file group (see archive params for defaults).
  • mode: file mode (see archive params for defaults).
  • archive_path: the parent directory of local filepath.
  • extract: whether to extract the files (true/false).
  • creates: the file created when the archive is extracted.
  • cleanup: remove archive file after file extraction (true/false).
  • headers: array of headers to pass to the source.

Archive::Artifactory Example

  • retrieve gradle without authentication

    $dirname = 'gradle-1.0-milestone-4-20110723151213+0300'
    $filename = "${dirname}-bin.zip"
    
    archive::artifactory { $filename:
      archive_path => '/tmp',
      url          => "http://repo.jfrog.org/artifactory/distributions/org/gradle/${filename}",
      extract      => true,
      extract_path => '/opt',
      creates      => "/opt/${dirname}",
      cleanup      => true,
    }
    
    file { '/opt/gradle':
      ensure => link,
      target => "/opt/${dirname}",
    }
  • retrieve gradle with api token:

    $dirname = 'gradle-1.0-milestone-4-20110723151213+0300'
    $filename = "${dirname}-bin.zip"
    
    archive::artifactory { $filename:
      archive_path => '/tmp',
      url          => "http://repo.jfrog.org/artifactory/distributions/org/gradle/${filename}",
      headers      => ['X-JFrog-Art-Api: ABC123'],
      extract      => true,
      extract_path => '/opt',
      creates      => "/opt/${dirname}",
      cleanup      => true,
    }
    
    file { '/opt/gradle':
      ensure => link,
      target => "/opt/${dirname}",
    }
  • setup resource defaults

    $artifactory_authentication = lookup('jfrog_token')
    
    Archive::Artifactory {
      headers => ["X-JFrog-Art-Api: ${artifactory_authentication}"],
    }

Archive::Nexus

Archive::Nexus Example

archive::nexus { '/tmp/jtstand-ui-0.98.jar':
  url        => 'https://oss.sonatype.org',
  gav        => 'org.codehaus.jtstand:jtstand-ui:0.98',
  repository => 'codehaus-releases',
  packaging  => 'jar',
  extract    => false,
}

Development

We highly welcome new contributions to this module, especially those that include documentation and rspec tests ;) and we will happily guide you through the process. So, yes, please submit that pull request!

Note: if you are writing a dependent module that includes specs, you will need to set the puppetversion fact in your puppet-rspec tests. You can do that by adding it to the default facts in your spec/spec_helper.rb:

RSpec.configure do |c|
  c.default_facts = { :puppetversion => Puppet.version }
end

puppet-archive's People

Contributors

adamcrews, aerostitch, alexcit, alexjfisher, bastelfreak, benningm, dan33l, dhoppe, ekohl, genebean, ghoneycutt, hajee, hunner, igalic, j0sh3rs, jairojunior, juniorsysadmin, jyaworski, kenyon, nanliu, nibalizer, prolixalias, qs5779, rnelson0, robinbowes, root-expert, smortex, sprankle, tragiccode, zilchms

puppet-archive's Issues

0.5.0 Roadmap

#55 identified a memory usage problem in the archive module. Our goal for 0.5.0 is to resolve this problem and address the following issues:

  • The default ruby provider should use Net::HTTP with streaming (see #124).
  • faraday should no longer be a requirement for puppet agents (#60); for now it remains a requirement on the puppet master for the defined types/functions (see #124).
  • faraday can be used as an alternative provider to support migration.
  • Linux systems can use alternative default providers (#61).
  • Migrate to the Puppet community Travis config (#82).

Since this requires several changes, master will be unstable until we resolve them. Please feel free to discuss or comment on this issue if you have any input.

Make faraday package install optional

Or support installation of different package names with other providers (e.g. pe-rubygem-faraday using the default yum provider on EL platforms).

Our puppet masters don't have internet access and so can't install gems. We generally package gems into RPMs using fpm and install them from an internal repo.

I'll try to send in a PR to implement these changes if I get a chance next week.

Follow redirect using curl

I needed curl to follow redirects with the -L parameter, i.e.:

curl http://google.com.br/ -o google.html

gets the wrong file, while:

curl http://google.com.br/ -L -o google.html

gets the actual index file.

Do you agree that the curl provider should work this way, following redirects? I made the changes on my fork, but I want to know what you think about it before making a pull request.

Extraction fails when user/group is set and the user has no permissions on extract_path

When attempting to use the Archive resource to extract a zip file into /usr/local/ with a user/group value set, and extract_path set to a directory where the user/group doesn't have permissions, the extraction fails silently (raised as #107).

This could possibly be worked around by chowning the files after extraction, rather than trying to run the extraction as the user/group, as sketched below.
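
A minimal sketch of that workaround, following the tomcat pattern from the README above; the URL, paths, and user/group names are illustrative:

# Extract as root, then fix ownership afterwards instead of running
# the extraction itself as the unprivileged user.
archive { '/tmp/example.zip':
  source       => 'http://example.com/example.zip',  # placeholder URL
  extract      => true,
  extract_path => '/usr/local',
  creates      => '/usr/local/example',
}

exec { 'example permissions':
  command     => 'chown -R appuser:appgroup /usr/local/example',  # illustrative user/group
  path        => ['/bin', '/usr/bin'],
  refreshonly => true,
  subscribe   => Archive['/tmp/example.zip'],
}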

Related to #22.

7zip is unable to locate archive on Windows Server 2012

When creating the command for 7zip, it's possible for the Windows path to contain spaces, which causes 7zip to parse anything after a space as a separate command-line option. The path needs to be wrapped in double quotes when executing the extract command.

Remove the dependency on Faraday

For our use case it is difficult to get an extra gem installed; at the moment, this means we cannot use the module. I would like to use at least the types (and providers) in this module, but they are hard-wired to use faraday.

I've looked at the code and it seems you are using faraday mostly as an easy way to call Net::HTTP. I could try to make a PR removing faraday from the provider. Is this beneficial? Am I missing something in the reasoning for faraday?

Query string breaks file:// behavior

The query string is currently hardcoded in url_path, but it breaks file:// behavior because '?' (question mark) is a valid character in file names.

I managed to put together a simple fix, but I think each implementation (HTTP, FTP, File) should receive a URI and handle its own internal behavior (whether or not to include a query string). What do you think?

We'll also need tests for this...

puppet_gem provider broken on Fedora

Fedora packages do not install Puppet 4 in /opt but in standard OS paths, and rely on the OS Ruby instead of letting Puppet ship its own Ruby.

As a result, the puppet_gem provider set in params.pp will not work, because the gem executable path set upstream does not exist.

I am opening this issue as a proposal to send a PR which fixes this. If supporting Fedora is not desired, feel free to close the issue.

ensure directory exists before moving archive

When moving the archive after download, we should create the directory structure so that the mv call does not fail.

https://github.com/puppet-community/puppet-archive/blob/0.3.0/lib/puppet/provider/archive/default.rb#L66

This can be fixed by inserting FileUtils.mkdir_p(File.dirname(archive_filepath)) before the line linked above.

I would issue a PR, but since the next version is under a major refactor I was not sure whether we wanted to create a 0.3.1 version.

Invalid package provider 'pe_gem'

I am seeing this error on a freshly installed PE 3.7. Is there supposed to be a pe_gem package provider shipped with PE? I was not able to find one on the system.

Failed to apply catalog: Parameter provider failed on Package[faraday]: Invalid package provider 'pe_gem' at /etc/puppetlabs/puppet/modules/archive/manifests/init.pp:6
Wrapped exception:
Invalid package provider 'pe_gem'
Wrapped exception:
Invalid package provider 'pe_gem'

Extraction failures are suppressed

When attempting to use the Archive resource to download and unzip a zip file, any failures in the extract command [1] are suppressed, making it impossible to diagnose issues.

Modifying that line as follows provides the command output, allowing for debugging:

          Puppet::Util::Execution.execute(cmd, :uid => opts[:uid], :gid => opts[:gid], :failonfail => true, :squelch => false, :combine => true)

I will attempt to get a PR raised and submitted shortly for review/comment...

[1] https://github.com/puppet-community/puppet-archive/blob/master/lib/puppet_x/bodeco/archive.rb#L51

Publish module to Puppet Forge

Currently this module is not searchable via the Puppet Forge API, which causes tools like librarian-puppet and r10k to pull down the module nanliu/archive.

Module not published on forge.

It seems that this module has releases (presumably from before the migration to puppet-community); however, it does not show on the forge. Of particular note, this is impacting the use of modules that depend on it, such as puppet-community's 'rundeck' module.

embed gems and install locally

Not sure if anybody else can appreciate this, but my current situation prevents me from installing anything directly off the internet. So to make things easier I ended up putting the gems inside the module and installing them locally.

If this kind of addition is welcomed, I'll make a PR for it.

But it looks something like:

# For context: $archive::params::path is 'C:/Windows/Temp'
$faraday_source            = "${archive::params::path}/faraday-0.9.1.gem"
$faraday_middleware_source = "${archive::params::path}/faraday_middleware-0.10.0.gem"
$multipart_post_source     = "${archive::params::path}/multipart-post-2.0.0.gem"

file { $faraday_source:
  ensure => present,
  source => 'puppet:///modules/archive/faraday-0.9.1.gem',
  before => Package['faraday'],
}

file { $faraday_middleware_source:
  ensure => present,
  source => 'puppet:///modules/archive/faraday_middleware-0.10.0.gem',
  before => Package['faraday_middleware'],
}

file { $multipart_post_source:
  ensure => present,
  source => 'puppet:///modules/archive/multipart-post-2.0.0.gem',
  before => Package['multipart-post'],
}

package { 'multipart-post':
  ensure          => present,
  provider        => $archive::params::gem_provider,
  install_options => '--local',
  source          => $multipart_post_source,
}

package { 'faraday':
  ensure          => present,
  provider        => $archive::params::gem_provider,
  install_options => '--local',
  source          => $faraday_source,
  require         => Package['multipart-post'],
}

package { 'faraday_middleware':
  ensure          => present,
  provider        => $archive::params::gem_provider,
  install_options => '--local',
  source          => $faraday_middleware_source,
  require         => Package['faraday'],
}

Module fails when puppetlabs-stdlib is also in the modulepath

When this module is installed alongside puppetlabs-stdlib, the package provider is wrongly resolved to pe_gem:

/etc/puppet/modules
├── nanliu-archive (v0.1.6)
└── puppetlabs-stdlib (v4.3.2)
$ sudo puppet apply -e 'include archive'
Notice: Compiled catalog for toni.local in environment production in 0.63 seconds
Error: Parameter provider failed on Package[faraday]: Invalid package provider 'pe_gem' at /etc/puppet/modules/archive/manifests/init.pp:6
Wrapped exception:
Invalid package provider 'pe_gem'
Wrapped exception:
Invalid package provider 'pe_gem'

I tested this on CentOS 6 and 7 with Puppet 3.7.3. It might very well be a Puppet bug, but since I haven't had time to investigate it yet, I'm filing an issue here so that you are aware of the matter.

file urls do not work on windows

Because windows has to be different...

The URI Ruby library does not work as expected on Windows because there is no root drive, so files fail to download because the URLs are interpreted incorrectly.

Example:

irb(main):006:0> URI('file://d:/OSS/one/two/three.zip').path
=> "/OSS/one/two/three.zip"

And adding another slash produces another issue

irb(main):007:0> URI('file:///d:/OSS/one/two/three.zip').path
=> "/d:/OSS/one/two/three.zip"

This should be d:/OSS/one/two/three.zip

extract_flags will not work with tar

With the tar command, the filename must come immediately after the f flag. Setting the extract_flags parameter puts flags between the f and the filename in the command, causing it to fail. A workaround using extract_command is sketched below.
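
A sketch of that workaround with illustrative paths; extract_command is documented above to accept a sprintf-style %s placeholder for the filename:

# Supplying the whole command keeps the filename right after the f flag,
# so extra options can safely come afterwards.
archive { '/tmp/example.tar.gz':
  ensure          => present,
  source          => 'http://example.com/example.tar.gz',  # placeholder URL
  extract         => true,
  extract_command => 'tar xzf %s --strip-components=1',
  extract_path    => '/opt/example',
  creates         => '/opt/example/bin',
}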

Windows fails to load type

I'm seeing the following error using version 0.5.0 on a Windows machine.

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not autoload puppet/type/archive: Could not autoload puppet/provider/archive/curl: Could not find parent provider ruby of curl on node

Trying a simple example like this:

include '::archive'
archive { 'consul_0.6.4_windows_amd64.zip':
  source => 'https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_windows_amd64.zip',
}

archive::params class has an invalid expression under puppet 4

Under puppet 4.2.1, this expression
https://github.com/puppet-community/puppet-archive/blob/609206790bc5cbd35f1affb41d552c97301ce4be/manifests/params.pp#L22 generates an error:

Evaluation Error: Left match operand must result in a String value. Got an Undef Value. at .../archive/manifests/params.pp:22:6 on node leo

I'm not a PE user, so I'm unsure of the correct Puppet 4 way to detect PE. Also, I believe that under Puppet 4, pe_gem has been deprecated in favor of the puppet_gem provider.

Bug: duplicate resource due to aliasing/namevar having filepath stripped off

Hi,

I'm using archive to copy the same file into multiple different locations; however, I'm receiving the following error:

==> : Error: Cannot alias Archive[/tmp/result2/result] to ["result"] at /tmp/vagrant-puppet/manifests-846018e2aa141a5eb79a64b4015fc5f3/manifest.pp:25; resource ["Archive", "result"] already declared at /tmp/vagrant-puppet/manifests-846018e2aa141a5eb79a64b4015fc5f3/manifest.pp:18

Here's the code to reproduce it:

$source_file = '/tmp/source'

file { $source_file:
  ensure  => file,
  content => 'this is a test'
}

file { ['/tmp/result1', '/tmp/result2']: ensure => directory }

archive { "/tmp/result1/result":
  ensure  => present,
  name    => "/tmp/result1/result",
  source  => "file://${source_file}",
  extract => false,
  require => [File[$source_file], File[['/tmp/result1', '/tmp/result2']]]
}

archive { "/tmp/result2/result":
  ensure  => present,
  name    => "/tmp/result2/result",
  source  => "file://${source_file}",
  extract => false,
  require => [File[$source_file], File[['/tmp/result1', '/tmp/result2']]]
}

I've got a working solution which I hope will be acceptable. I'll fork and submit a pull request.

Thanks,
Josh

Support for puppet://

Hi
I could not find any information about supported protocols, and from the code it seems that just ftp and http are supported right now. Are you planning to add support for the puppet:// "protocol"?

Thanks a lot
br

Method HTTP.follow_redirect works only with Ruby 1.9+

In file lib/puppet_x/bodeco/util.rb, method HTTP.follow_redirect contains this code:

Net::HTTP.start(uri.host, uri.port, :use_ssl => (uri.scheme == 'https')) do |http|
# ...

But this syntax (keyword arguments) is supported only by Ruby 2.x, right? Also, I think this option (use_ssl) doesn't exist in previous versions of Net::HTTP.start (as you can see here: http://ruby-doc.org/stdlib-1.8.7/libdoc/net/http/rdoc/Net/HTTP.html).

I tested without the :use_ssl parameter and it worked well for me (I tested with archive::nexus using http, and archive using https), but I don't know what impact this change could have.

latest version has syntax error

One of the latest commits to default.rb (between 1.8 and 2.0) introduced a syntax error somewhere.

System: AWS CentOS 6.4 image using Packer 0.7.5 within a TeamCity build.
Reverting to archive 1.8 works fine.

Error: Could not autoload puppet/provider/archive/default: /etc/puppet/modules/archive/lib/puppet/provider/archive/default.rb:108: syntax error, unexpected ')'
/etc/puppet/modules/archive/lib/puppet/provider/archive/default.rb:115: syntax error, unexpected $end, expecting kEND
Error: Could not autoload puppet/type/archive: Could not autoload puppet/provider/archive/default: /etc/puppet/modules/archive/lib/puppet/provider/archive/default.rb:108: syntax error, unexpected ')'
/etc/puppet/modules/archive/lib/puppet/provider/archive/default.rb:115: syntax error, unexpected $end, expecting kEND on node ip-10-203-10-11..com

Could not find parent provider ruby of curl

I'm using this module in another module and testing with Travis CI. Travis CI throws the following error when testing on Puppet 4.x. I don't know what this error means or why it only appears when testing on 4.x.

   Failure/Error:
       Puppet::Type.type(:archive).provide(:curl, :parent => :ruby) do
         commands :curl => 'curl'
         defaultfor :feature => :posix

         def download(archive_filepath)
           tempfile = Tempfile.new(tempfile_name)
           temppath = tempfile.path
           tempfile.close!

           @curl_params = [

     Puppet::PreformattedError:
       Evaluation Error: Error while evaluating a '=>' expression, Could not autoload puppet/type/archive: Could not autoload puppet/provider/archive/curl: Could not find parent provider ruby of curl 

Support owner / group file permissions

In nanliu-staging, it was possible to set the owner and group when deploying an archive. I couldn't find similar support in nanliu-archive. Could this support be added?
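
For reference, the archive type documented in the README above does expose user and group parameters for the extraction step; a minimal sketch of using them, with illustrative names and paths:

# Run the extract command as the given user/group; per the reference
# section, this also sets the downloaded file's mode to 0644 so that
# user can read it.
archive { '/tmp/example.tar.gz':
  source       => 'http://example.com/example.tar.gz',  # placeholder URL
  extract      => true,
  extract_path => '/opt',
  user         => 'appuser',   # illustrative user
  group        => 'appgroup',  # illustrative group
  creates      => '/opt/example',
}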

Do not overwrite destination while downloading

Currently the destination file becomes 0 bytes in size while the download is in progress. Please consider using a temp file for downloading and only overwriting the destination when the download was successful and the checksum matches.

This would suit our scenario where we download a war and overwrite a previous version in the webapps directory.

Sonatype Nexus support (archive::nexus)

  • You can query the Nexus HTTP API to download an artifact, e.g.:

https://oss.sonatype.org/service/local/artifact/maven/content?g=io.hawt&a=hawtio-web&v=1.4.36&p=war&r=releases

  • And with a little trick you can also download the md5 or sha1:

https://oss.sonatype.org/service/local/artifact/maven/content?g=io.hawt&a=hawtio-web&v=1.4.36&p=war.md5&r=releases (p=war.md5 or war.sha1)

I was playing around trying to create a simple implementation for this using archive: https://github.com/jairojunior/puppet-archive/blob/master/manifests/nexus.pp

But PuppetX::Bodeco::Util.download doesn't support query strings. Any chance of adding support for query strings to this module? Or is that not intended (could it break something, or is there another reason I can't see)?

I'm really interested in adding this support to archive, so any help would be appreciated.

Out of memory while downloading large files

While trying to download a large .zip file (1.3 GB) in a test VM with 512 MB of RAM, the only output I got from the puppet run was Killed. I increased the VM RAM incrementally and got Error: Could not run: failed to allocate memory until I had increased the RAM to 2 GB, i.e. enough for the file to be stored in RAM as a whole until flushed to disk.

Is this a known unfixable issue? Or an issue with the provider implementation? Or is there some workaround that can be applied?

checksum_url not working for me

I get an error when I use the checksum_url parameter.

My code:

  archive { "<file_name>":
    ensure        => present,
    source        => "$fileSourceURL",
    checksum_type => 'md5',
    checksum_url  => "$fileSourceChecksumURL",
    cleanup       => false,
  }

Error message:

Error: /Stage[main]/[module_name]/Archive[[file_path]]: Could not evaluate: undefined method `content' for #<PuppetX::Bodeco::FTP:0x000000025843f8>
/var/opt/lib/pe-puppet/lib/puppet_x/bodeco/util.rb:15:in `content'
/<module_path>/archive/lib/puppet/provider/archive/ruby.rb:71:in `remote_checksum'

/var/opt/lib/pe-puppet/lib/puppet_x/bodeco/util.rb line 15
@connection.content(uri)

/<module_path>/archive/lib/puppet/provider/archive/ruby.rb  line 71
@remote_checksum ||= PuppetX::Bodeco::Util.content(resource[:checksum_url], :username => resource[:username], :password => resource[:password], :cookie => resource[:cookie])

My checksum_url = ftp:////filename.txt

Checksum file content:

<checksum>  <file_name>

Configuration:
Puppet 3.8.1 on RHEL 6
puppetlabs-pe_gem (v0.1.1)
puppetlabs-stdlib (v4.6.0)

  "dependencies": [
    {
      "name": "puppetlabs/stdlib",
      "version_requirement": ">= 2.2.1"
    },
    {
      "name": "puppetlabs/pe_gem",
      "version_requirement": ">= 0.0.1"
    }
  ]

Am I missing a dependency that I am not aware of?

autoload puppet/provider/archive/curl

Hello,

I get this error randomly on my clients, and if I re-run puppet it sometimes goes away and works without any problems. Any idea why it occurs randomly?

Puppet: 3.8.2
Module: 0.4.4

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Resource Statement, Could not autoload puppet/type/archive: Could not autoload puppet/provider/archive/curl: Could not find parent provider ruby of curl at /etc/puppetlabs/code/environments/develop/modules/archive/manifests/nexus.pp:53:3 at /etc/puppetlabs/code/environments/develop/modules/ui_broker/manifests/init.pp:33 on node rnode01.test.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

Cannot download using wget

When wget is used, my download fails. It seems that the server is not receiving a proper cookie header. I checked my puppet agent log and saw this:

Debug: Executing '/usr/bin/wget http://download.oracle.com/otn-pub/java/jdk/7u80-b15/jdk-7u80-linux-x64.tar.gz -O /tmp/jdk-7u80-linux-x64.tar.gz_20160316-9551-fy9sql --max-redirect=5 --header="Cookie: "oraclelicense=accept-securebackup-cookie"'

If I run this manually in my shell, I get a wrong download. As you can see, the double quotes around the cookie header are wrong.

I tried to change wget.rb from this (three double quotes around Cookie):
params += optional_switch(resource[:cookie], ['--header="Cookie: "%s"'])

To this (two double quotes around Cookie):
params += optional_switch(resource[:cookie], ['--header="Cookie: %s"'])

The web server still answers as if it is receiving a wrong cookie, but in this case, if I run the command printed by the log manually in my shell, it works.

Unable to obtain file via archive::nexus with Puppet 4

Just to note: this works correctly on Puppet 3.x.

This seems to have something to do with faraday_middleware and the checksum that archive::nexus passes to the archive type.

If you comment out the checksum_url and checksum_type parameters, you're able to successfully obtain the file from a Nexus store.

I've checked, and the two URL variables generate valid URLs; I can manually download both the file and the checksum (md5) file.

$artifact_url = assemble_nexus_url($url, delete_undef_values($query_params))
$checksum_url = regsubst($artifact_url, "p=${packaging}", "p=${packaging}.${checksum_type}")

On a Vagrant instance with Puppet 4, I have the following gems installed:

# gem list

*** LOCAL GEMS ***

faraday (0.9.1)
faraday_middleware (0.10.0)
json (1.8.3)
multipart-post (2.0.0)

On a host with Puppet 3 installed, the only difference seems to be that the version of the json gem is 1.5.5, but I've even downgraded that gem and still get the same issue.

puppet source

Does this module support a puppet:/// source? If not, is it planned?
This source works in nanliu-staging.

I tried

    archive { "/usr/java/jdk-${jdk_version}-linux-x64.tar.gz":
        ensure          => present,
        extract         => true,
        extract_path    => '/usr/java',
        source          => "puppet:///files/java/jdk-${jdk_version}-linux-x64.tar.gz",
        creates         => "${jdk_dir}",
        cleanup         => true,
    }

Got this error:

Error: Failed to apply catalog: Parameter source failed on Archive[/usr/java/jdk-7u55-linux-x64.tar.gz]: invalid source url: puppet:///files/java/jdk-7u55-linux-x64.tar.gz at /etc/puppetlabs/puppet/manifests/boomi-jdk2.pp:67
Wrapped exception:
invalid source url: puppet:///files/java/jdk-7u55-linux-x64.tar.gz
Wrapped exception:
invalid source url: puppet:///files/java/jdk-7u55-linux-x64.tar.gz

Why force puppetversion fact everywhere in the archive spec?

Hi,

I was wondering why you force the puppetversion fact in spec/classes/archive_spec.rb, even for the open source version of puppet.

It looks a bit weird to test against different versions but force the version fact in the tests, no?

:puppetversion => '3.7.3'

I discovered this when writing a module that depends on puppet-archive, having written rspec tests without forcing this fact. In this case, the tests fail with the following error:

error during compilation: Undefined variable "::puppetversion"; Undefined variable "puppetversion" at /home/travis/build/tubemogul/puppet-aerospike/spec/fixtures/modules/archive/manifests/params.pp:22 on node testing-worker-linux-docker-212279d8-3372-linux-8.prod.travis-ci.org

Thanks for your help,
Joseph

allow source to accept absolute file paths

I have noticed that I constantly have to convert /tmp/some/file.txt to file:///tmp/some/file.txt, and I would rather have the type code do this by munging the value.

Any objection to adding support for the source parameter to accept absolute file paths? A sketch of the difference follows.
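
With an illustrative path; the proposed munging itself is not implemented here:

# Works today: the file:// scheme is spelled out explicitly.
archive { '/opt/file.txt':
  source => 'file:///tmp/some/file.txt',
}

# Proposed: a bare absolute path that the type would munge into a
# file:// URI itself.
# archive { '/opt/file.txt':
#   source => '/tmp/some/file.txt',
# }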

Unknown variable aio_agent_version breaks multi-version puppet environments

Master - 2.2.1-1
Agent - 3.8.2

When using a newer master and an older agent, agent runs fail because the puppetserver expects the agent to have the aio_agent_version fact, which the older clients do not have.

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Unknown variable: '::aio_agent_version'. at /etc/puppetlabs/code/environments/develop/modules/archive/manifests/params.pp:26:11 at /etc/puppetlabs/code/environments/develop/modules/ui_broker/manifests/init.pp:33 on node test.node.com.
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
