
Archive Puppet Module

THIS MODULE IS DEPRECATED

Use puppet-archive instead.

Overview

Puppet Module to download and extract tar and zip archives based on camptocamp/puppet-archive.

Supported archive types are:

  • tar.gz, tgz
  • tar.bz2, tbz2
  • tar.xz, txz
  • zip

Features:

  • Ability to follow redirects
  • Supports checksum matching

Usage

Example:

archive { 'apache-tomcat-6.0.26':
  ensure => present,
  url    => 'http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.26/bin/apache-tomcat-6.0.26.tar.gz',
  target => '/opt',
}

You can have archive follow redirects by setting:

follow_redirects => true

The default archive format is tar.gz. To use another supported format you must specify the extension:

extension => "zip"

By default, archive will try to find a matching checksum file to verify the download. To disable this behavior, set the checksum option to false:

checksum => false

You can specify a digest_url, digest_string and digest_type to verify archive integrity.
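For example, a sketch verifying the tomcat download above against an md5 digest (the digest value here is a placeholder, not the real checksum):

archive { 'apache-tomcat-6.0.26':
  ensure        => present,
  url           => 'http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.26/bin/apache-tomcat-6.0.26.tar.gz',
  target        => '/opt',
  digest_type   => 'md5',
  digest_string => '0123456789abcdef0123456789abcdef', # placeholder value
}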

For .tar.gz and .tar.bz2 archives, the extract step's --strip-components=n tar flag is exposed. This can be used to strip leading path components (for example, the archive's top-level directory) during extraction:

strip_components => 1

By default the target directory is left intact. To rm -rf the target directory prior to extraction, set:

purge_target => true
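A sketch combining the two options, reusing the tomcat example and a hypothetical /opt/tomcat target so that purging does not touch the rest of /opt:

archive { 'apache-tomcat-6.0.26':
  ensure           => present,
  url              => 'http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.26/bin/apache-tomcat-6.0.26.tar.gz',
  target           => '/opt/tomcat',
  strip_components => 1,
  purge_target     => true,
}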

This full example will download the packer tool to /usr/local/bin:

archive { '0.5.1_linux_amd64':
  ensure           => present,
  url              => 'https://dl.bintray.com/mitchellh/packer/0.5.1_linux_amd64.zip',
  target           => '/usr/local/bin',
  follow_redirects => true,
  extension        => 'zip',
  checksum         => false,
  src_target       => '/tmp',
}

You can also specify a global user to be used for the whole download and extract operation. Note that the module does not manage the specified user's permissions on the src_target directory.


archive { '0.5.1_linux_amd64':
  ensure           => present,
  url              => 'https://dl.bintray.com/mitchellh/packer/0.5.1_linux_amd64.zip',
  target           => '/usr/local/bin',
  follow_redirects => true,
  extension        => 'zip',
  checksum         => false,
  user             => 'camptocamp',
  src_target       => '/home/camptocamp',
}

License

Copyright (c) 2012 Camptocamp SA

This script is licensed under the Apache License, Version 2.0.

See http://www.apache.org/licenses/LICENSE-2.0.html for the full license text.

Support

Please log tickets and issues at our project site.

puppet-archive's People

Contributors

achinthagunasekara, ckaenzig, cornelf, dabelenda, dfarrell07, gcmalloc, hco, hdoedens, igalic, jasperla, lampapetrol, mcanevet, mhamrah, pennycoders, raphink, saimonn, scottsuch, spredzy, wleese

puppet-archive's Issues

Archive::Extract does not respect `$extract_dir`

In the file extract.pp

      $extract_zip    = "unzip -o ${src_target}/${name}.${extension} -d ${target}"
      $extract_targz  = "tar --no-same-owner --no-same-permissions --strip-components=${strip_components} -xzf ${src_target}/${name}.${extension} -C ${target}"
      $extract_tarbz2 = "tar --no-same-owner --no-same-permissions --strip-components=${strip_components} -xjf ${src_target}/${name}.${extension} -C ${target}"

These ${target} should be ${extract_dir}
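With that substitution, the corrected lines would presumably read:

      $extract_zip    = "unzip -o ${src_target}/${name}.${extension} -d ${extract_dir}"
      $extract_targz  = "tar --no-same-owner --no-same-permissions --strip-components=${strip_components} -xzf ${src_target}/${name}.${extension} -C ${extract_dir}"
      $extract_tarbz2 = "tar --no-same-owner --no-same-permissions --strip-components=${strip_components} -xjf ${src_target}/${name}.${extension} -C ${extract_dir}"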

That said: I'm more concerned with how this module isn't consistent with how the parameters should be used. I much prefer having a single $target to extract to because that's conceptually easier to follow.

Doesn't seem to work when the file name doesn't match uri

I have two examples, one which works and another which doesn't. The only difference I can see is that in one case the name of the downloaded archive matches the URL.

Does work

archive { 'kibana-3.1.2':
    checksum => false,
    ensure => present,
    url    => 'https://download.elasticsearch.org/kibana/kibana/kibana-3.1.2.tar.gz',
    target => '/opt',
  }

Doesn't work (the downloaded file is pguri-master.tar.gz)

  archive { 'pguri-master':
    checksum => false,
    ensure => present,
    url    => 'http://github.com/petere/pguri/archive/master.tar.gz',
    target => '/opt',
  }

Using the file extension in the resource name duplicates the extension.

Ran into an issue with the following code

$packer_basename = inline_template(
  "<%= \"#{@prefix}#{@version}_#{scope['::kernel'].downcase}_#{@arch}.zip\" %>"
)

$packer_url = "${packer::params::base_url}${packer_basename}"

# Download the Packer zip archive to the cache.
archive { $packer_basename :
  ensure           => present,
  url              => "${packer_url}",
  target           => $bin_dir,
  follow_redirects => true,
  extension        => 'zip',
  checksum         => false,
  src_target       => $cache_dir,
}

It produced this output, where .zip was appended to the file name:

Notice: /Stage[main]/Packer/Archive[packer_0.7.5_linux_amd64.zip]/Archive::Download[packer_0.7.5_linux_amd64.zip.zip]/Exec[download archive packer_0.7.5_linux_amd64.zip.zip and check sum]/returns: executed successfully
Notice: /Stage[main]/Packer/Archive[packer_0.7.5_linux_amd64.zip]/Archive::Extract[packer_0.7.5_linux_amd64.zip]/Exec[packer_0.7.5_linux_amd64.zip unpack]/returns: executed successfully
Notice: /Stage[main]/Main/Node[xagent.vagrant.vm]/Packer::Plugin[post-processor-vagrant-vmware-ovf]/Archive[packer-post-processor-vagrant-vmware-ovf.linux-amd64]/Archive::Extract[packer-post-processor-vagrant-vmware-ovf.linux-amd64]/Exec[packer-post-processor-vagrant-vmware-ovf.linux-amd64 unpack]/returns: executed successfully

vagrant]# ls /tmp/
packer_0.7.5_linux_amd64.zip.zip

It didn't error but the output was unexpected. Am I missing something?

path fact?

I get the following failure when I compile via rspec-puppet:

error during compilation: Validation of Exec[download archive kibana-4.3.1-linux-x64.tar.gz and check sum] failed: 'curl  -s -S   -o /usr/src/kibana-4.3.1-linux-x64.tar.gz 'http://myplace/kibana/4.3.1/kibana-4.3.1-x64.tar.gz'' is not qualified and no path was specified. Please qualify the command or specify a path. at spec/fixtures/modules/archive/manifests/download.pp:171

Meanwhile in a627bc3 archive::download now defaults to a fact or global scope variable $::path?

define archive::download (
...
  $path=$::path,
) {

Anyone know what the thinking is here? ping @raphink

tomcat::source not usable

The default example does not work with puppet 3.2.

1- Puppet archive defines Exec commands without a qualified path, so the archive resource type will not work unless this is fixed first.
2- The dependencies are not evaluated in order. The example

$tomcat_mirror = "http://archive.apache.org/dist/tomcat/"
$tomcat_version = "6.0.37"

include tomcat::source

fails with

Error: Scope(Class[Tomcat::Juli]): undefined mandatory attribute: $tomcat_home
Error: Scope(Class[Tomcat::Logging]): undefined mandatory attribute: $tomcat_home
Error: Could not find dependency File[undef] for File[/extras/]

because the includes of Tomcat::Juli and Tomcat::Logging are performed before $tomcat_home has been declared in Tomcat::Source.

curl [...] is not qualified and no path was specified

When I try to use the archive module in Puppet, I get this log:

Error: Validation of Exec[download archive gitlist.tar.gz and check sum] failed: 'curl -s -S  -L -o /tmp/gitlist.tar.gz https://s3.amazonaws.com/gitlist/gitlist-0.5.0.tar.gz' is not qualified and no path was specified. Please qualify the command or specify a path. at /home/sebastien/backup/modules/archive/manifests/download.pp:146
Wrapped exception:
'curl -s -S  -L -o /tmp/gitlist.tar.gz https://s3.amazonaws.com/gitlist/gitlist-0.5.0.tar.gz' is not qualified and no path was specified. Please qualify the command or specify a path.

The URL is good and curl is installed. I don't see why Puppet can't download the archive.

I will look deeper into the sources later in the week to see if I can find more information.

Provider break on OSX

This provider doesn't work on OSX since it requires Package['curl'] as a dependency, even though the curl program is distributed as part of the base OS install and no special package exists.
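One possible fix, sketched here rather than taken from the module: guard the package resource (and the corresponding require) by OS family, since curl ships with the base system on OSX:

if $::osfamily != 'Darwin' {
  package { 'curl':
    ensure => present,
  }
}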

Support for puppet://

Hi
I didn't find any information about this, so my question: are you planning to support puppet:/// as a URL?

Archive won't be re-downloaded when md5sum and archive on server is updated

We have an archive resource like this:

   archive { 'oracle_patch':
      ensure        => present,
      url           => 'http://some.server/path/oracle_patch.tgz',
      target        => '/some/path/local',
      digest_type   => 'md5',
      digest_string => $::oracle_patch_md5sum,
      extension     => 'tgz',
      src_target    => '/some/path/local',
      checksum      => true,
      user          => 'oracle',
      timeout       => 600,
    }

$::oracle_patch_md5sum is defined in hiera.

First run everything is fine, archive is downloaded and extracted.
So days later we updated the archive on the webserver and the md5sum in hiera.

The md5sum is correctly updated to the new one, but the file isn't downloaded again, because the exec resource only fires if no downloaded file is present (see 'creates' below).

      exec {"download archive ${name} and check sum":
        command     => "curl ${proxy_option} -s -S ${insecure_arg} ${redirect_arg} -o ${src_target}/${name} '${url}'",
        creates     => "${src_target}/${name}",
        logoutput   => true,
        timeout     => $timeout,
        path        => $path,
        require     => Package['curl'],
        notify      => $_notify,
        user        => $user,
        refreshonly => $refreshonly,
      }

So the checksum check fails and everything is deleted.
On the second puppet run everything works fine, since nothing is present on the local disk.
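One possible workaround, sketched here rather than taken from the module: replace creates with an unless that compares the stored digest, so a changed md5sum triggers a re-download (this assumes digest_type is md5 and $digest_string is set):

exec { "download archive ${name} and check sum":
  command => "curl -s -S -o ${src_target}/${name} '${url}'",
  unless  => "test \"$(md5sum ${src_target}/${name} | cut -d' ' -f1)\" = '${digest_string}'",
  path    => $path,
  timeout => $timeout,
}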

Support for specifying owner part 2

Modify the mkdir command execution to support specifying the executing user and owner.

exec {"$name unpack":
        command => $extension ? {
          'zip'     => "mkdir -p ${target} && ${extract_zip}",
          'tar.gz'  => "mkdir -p ${target} && ${extract_targz}",
          'tgz'     => "mkdir -p ${target} && ${extract_targz}",
          'tar.bz2' => "mkdir -p ${target} && ${extract_tarbz2}",
          'tgz2'    => "mkdir -p ${target} && ${extract_tarbz2}",
          default   => fail ( "Unknown extension value '${extension}'" ),
        },
        creates => $extract_dir,
        timeout => $timeout
      }

Allow authenticated downloads

There doesn't currently appear to be a means of specifying basic auth credentials for downloads; is this something that would be a good addition to the module?
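As a sketch of what such an interface might look like (username and password are hypothetical parameters, not part of the current module), the values could be passed through to curl's -u flag:

archive { 'protected-app':
  ensure   => present,
  url      => 'https://example.com/protected/app.tar.gz',
  target   => '/opt',
  username => 'deploy',    # hypothetical parameter
  password => 'secret',    # hypothetical parameter
}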

Command doesn't fail on 4xx errors

Trying to download a file from a URL that doesn't exist fails silently, and running puppet agent -t appears as if everything worked correctly. According to the cURL man page, adding -f to the exec command in download.pp makes curl return exit code 22 on most 4xx errors. Are there implications I'm not seeing that would prevent something like this from being implemented?
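As a sketch, the change would add -f to the curl invocation in download.pp, leaving the other arguments as they are in the module today:

command => "curl -f ${proxy_option} -s -S ${insecure_arg} ${redirect_arg} -o ${src_target}/${name} '${url}'",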

re-download source file

I ran into an issue where the source file was downloaded incorrectly. I specified the wrong URL but curl was able to download the server's response. Subsequent puppet agent runs would not re-download the file.

Is there an option to re-download a file? or delete a file after it has been downloaded?

support purging of target dir

If we "install" a new version of an archive, an unpredictable number of files may have changed.
In such a case, it may be necessary to purge the target directory prior to extraction.

No qualified path for curl

Getting the error: 'curl is not qualified and no path was specified. Please qualify the command or specify a path.' I would love to specify the path, though. Maybe a misconfiguration of my Puppet env?

Empty command

After upgrading from 0.5.x to 0.6.1 we have
Archive::Extract[liquibase-3.3.2]/Exec[liquibase-3.3.2 unpack]/returns (err): change from notrun to 0 failed: Could not find command ''

File permissions overridden on every puppet run

I need to set particular permissions on an extracted directory, but on every puppet run archive extracts it again and overrides the permissions that I try to maintain...

archive { 'gaussian':
  ensure    => present,
  url       => 'http://myfileserver/sources/gaussian_09.tgz',
  target    => '/opt',
  extension => 'tgz',
  checksum  => false,
}

file { '/opt/gaussian':
  ensure  => directory,
  owner   => 'root',
  group   => 'users',
  recurse => true,
  mode    => '0750',
  require => Archive['gaussian'],
}

Replace direct usage of ${name}

Replace direct usage of ${name}; use some other field to pass the name of the archive. Currently, the scenario where one archive needs to be extracted to multiple destinations is not possible. Maybe something like:

define archive::extract (
  $archive_name = $title,
  $target,
  $ensure=present,
  $src_target='/usr/src',
  $root_dir='',
  $extension='tar.gz',
  $timeout=120) { ...
