
puppetlabs / bolt


Bolt is an open source orchestration tool that automates the manual work it takes to maintain your infrastructure on an as-needed basis or as part of a greater orchestration workflow. It can be installed on your local workstation and connects directly to remote nodes with SSH or WinRM, so you are not required to install any agent software.

Home Page: https://puppet.com/docs/bolt/latest/bolt.html

License: Apache License 2.0

Ruby 96.95% Shell 1.20% Puppet 0.56% Dockerfile 0.06% PowerShell 0.84% HTML 0.38%
ssh winrm puppet bolt devops orchestration

bolt's Introduction


bolt logo

Bolt is an open source orchestration tool that automates the manual work it takes to maintain your infrastructure. Use Bolt to automate tasks that you perform on an as-needed basis or as part of a greater orchestration workflow. For example, you can use Bolt to patch and update systems, troubleshoot servers, deploy applications, or stop and restart services. Bolt can be installed on your local workstation and connects directly to remote targets with SSH or WinRM, so you are not required to install any agent software.

Bring order to the chaos with orchestration

Run simple plans to rid yourself of the headaches of orchestrating complex workflows. Create and share Bolt plans to easily expand across your application stack.

Use what you have to automate simple tasks or complex workflows

Get going with your existing scripts and plans, including YAML, PowerShell, Bash, Python or Ruby, or reuse content from the Puppet Forge.

Get up and running with Bolt even faster

Ramp up quickly with a step-by-step introduction to basic Bolt functionality in our getting started guide and self-paced training.

More information and documentation are available on the Bolt website.

Supported platforms

Bolt can be installed on Linux, Windows, and macOS. For complete installation details, see the installation docs.

For alternate installation methods and running from source code, see our contributing guidelines.

Getting help

Join #bolt on the Puppet Community slack to chat with Bolt developers and the community.

Contributing

We welcome error reports and pull requests to Bolt. See our contributing guidelines for how to help.

Kudos

Thank you to Marcin Bunsch for allowing Puppet to use the bolt gem name.

License

Bolt is available as open source under the terms of the Apache 2.0 license.

bolt's Issues

bolt gem doesn't ship facts module

When installing Bolt as a gem (gem install bolt) it doesn't contain the facts module.
The same version installed through rpm (puppet-bolt.x86_64 0:0.21.7-1.el7) does contain the module.

If this is intentional, it would be good to update the docs to make the end user aware that this module can be found on the Forge.

The current documentation implies the facts module is shipped by default (Writing plans - collects facts from the targets) and this might cause some confusion when trying out the examples.

list nodes and groups

To allow convenient inspection of group/node definitions, bolt should support "list" and "show" options for groups and nodes.

$ bolt group list
mygroup-01
mygroup-02
mygroup-03
$ bolt group show mygroup-02
foo-001,foo-002,foo-003
$ bolt group show mygroup-02 --with-spaces
foo-001 foo-002 foo-003
$ bolt group show mygroup-02 --with-newlines
foo-001
foo-002
foo-003

$ bolt group show all --with-newlines
foo-001
foo-002
foo-003
...
foo-400

This is a precondition for adding host and group completion to shell completion scripts like "./resources/bolt_bash_completion.sh".
See also: #495

puppetfile install fails to remove last module

If you remove all modules from the Puppetfile and run bolt puppetfile install, you are informed that the modules are being sync'd; however, no modules are purged from ~/.puppetlabs/bolt/modules/

Steps to reproduce:

Set ~/.puppetlabs/bolt/Puppetfile to:

mod 'puppetlabs-stdlib'

Run bolt puppetfile install

Observe the module is now installed by running puppet module list --modulepath /home/ubuntu/.puppetlabs/bolt/modules:

/home/ubuntu/.puppetlabs/bolt/modules
└── puppetlabs-stdlib (v5.1.0)

Remove the stdlib line from Puppetfile

Run bolt puppetfile install

Observe the module is still installed by running puppet module list --modulepath /home/ubuntu/.puppetlabs/bolt/modules:

/home/ubuntu/.puppetlabs/bolt/modules
└── puppetlabs-stdlib (v5.1.0)

Expected behaviour

When the Puppetfile lists no modules bolt puppetfile install should remove all modules.

Actual behaviour

No modules are removed when the Puppetfile is empty.

Bolt should allow setting default connection type (ssh, winrm)

Bolt should allow setting a default connection type (ssh, winrm, etc.) through a "config" file and/or a command line option. It should not be necessary to prefix all Windows hosts with winrm:// when you are only connecting to Windows hosts. It should still allow a user to "override" the default connection type by specifying it on the node list.
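For reference, a hedged sketch of what this could look like in bolt.yaml, assuming the default transport key mirrors the existing --transport CLI option (the winrm settings shown are purely illustrative):

transport: winrm
winrm:
  ssl: true
  connect-timeout: 30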

Sudo Credentials leaked if alerting TargetSpec object

When alerting in a plan, passing just the TargetSpec object leaks credentials to the console.

plan pe::wait_for_service(
  TargetSpec $nodes,
  Integer $port,
  Integer $timeout,
  )
{
  $nodes.each |$n| {
     alert("connection restored to ${n}")
  }
}

connection restored to Target('puppet-compile-master-cloud-va2-2.pm.company.com', {"connect-timeout"=>10, "tty"=>false, "host-key-check"=>false, "run-as"=>"root", "sudo-password"=>"Woops"}) moving on
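A hedged workaround sketch (the plan name is made up, and it assumes the Target object's name function is available in this Bolt version): interpolate the target's name rather than the whole object, so the connection options, including sudo-password, never reach the console.

plan pe::wait_for_service_safe(
  TargetSpec $nodes,
) {
  get_targets($nodes).each |$n| {
    alert("connection restored to ${n.name}")
  }
}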

cannot load such file -- addressable

I was setting up bolt and I got this error:

C:/Ruby23-x64/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:127:in `require': cannot load such file -- addressable (LoadError)
        from C:/Ruby23-x64/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:127:in `rescue in require'
        from C:/Ruby23-x64/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:40:in `require'
        from C:/Ruby23-x64/lib/ruby/gems/2.3.0/gems/bolt-0.9.0/lib/bolt/node_uri.rb:1:in `<top (required)>'
        from C:/Ruby23-x64/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
        from C:/Ruby23-x64/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
        from C:/Ruby23-x64/lib/ruby/gems/2.3.0/gems/bolt-0.9.0/lib/bolt/node.rb:2:in `<top (required)>'
        from C:/Ruby23-x64/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
        from C:/Ruby23-x64/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
        from C:/Ruby23-x64/lib/ruby/gems/2.3.0/gems/bolt-0.9.0/lib/bolt.rb:3:in `<module:Bolt>'
        from C:/Ruby23-x64/lib/ruby/gems/2.3.0/gems/bolt-0.9.0/lib/bolt.rb:1:in `<top (required)>'
        from C:/Ruby23-x64/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:127:in `require'
        from C:/Ruby23-x64/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:127:in `rescue in require'
        from C:/Ruby23-x64/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:40:in `require'
        from C:/Ruby23-x64/lib/ruby/gems/2.3.0/gems/bolt-0.9.0/exe/bolt:3:in `<top (required)>'
        from C:/Ruby23-x64/bin/bolt:22:in `load'
        from C:/Ruby23-x64/bin/bolt:22:in `<main>'

Looking up the documentation for the addressable gem, I found this: http://www.rubydoc.info/gems/addressable

I changed the top line of node_uri.rb from require 'addressable' to require 'addressable/uri' and then everything worked.

I don't know if anyone else had this issue, but maybe someone with more access than me can update that if necessary. :)
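For reference, the reported one-line fix (the file path is taken from the backtrace above):

# lib/bolt/node_uri.rb
require 'addressable/uri'  # was: require 'addressable'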

feature request: bolt interactive mode

Currently it is not possible to use bolt for interactive ssh remote sessions.

Example:

# interactively edit the puppet config and re-run the agent on a number of remote hosts
bolt command run "vim /etc/puppetlabs/puppet/puppet.conf; puppet agent --test" -n host1,host2,host3

# perform distribution updates
bolt command run "do-release-upgrade && reboot" -n host1,host2,host3

Comparable to https://github.com/scoopex/hostctl (-l, -p), it would be nice to have three additional feature options:

  • --login: just invoke a login to the remote machine in single-threaded mode with direct stdin/stdout/stderr
  • --interactive: execute the specified script/command in single-threaded mode with direct stdin/stdout/stderr
  • --prompt: can be used in combination with --interactive; invoke the command and ask for (c)ontinue to next host, (r)etry command, (s)hell invocation, or (q)uit after the execution

Does inventoryfile support subgroups?

Can I use sub-groups in the inventory file? The documentation isn't clear on whether only one level is supported or whether groups can be nested.

Let's say I currently have:

groups:
  - name: sandbox
    nodes:
      - sandbox1.example.com
      - sandbox2.example.com
      - sandbox3.example.com
      - sandbox4.example.com
      - sandbox5.example.com
      - sandbox6.example.com
  - name: production
    nodes:
      - prod1.example.com

Could I do something like this:

groups:
  - name: sandbox
    group:
      - name: app
        nodes:
           - sandbox1.example.com
           - sandbox2.example.com
      - name: web_server
        nodes:
           - sandbox3.example.com
           - sandbox4.example.com
      - name: db
        nodes:
           - sandbox5.example.com
           - sandbox6.example.com
  - name: production
    nodes: 
      - prod1.example.com

I know the obvious answer is "rename your hostnames", but let's assume that's not an option.
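For what it's worth, nesting does appear elsewhere in this issue list: the inventory example in the later "--sudo cli option" issue hangs subgroups off a plural groups key rather than group. A hedged sketch of the layout above rewritten that way:

groups:
  - name: sandbox
    groups:
      - name: app
        nodes:
          - sandbox1.example.com
          - sandbox2.example.com
      - name: db
        nodes:
          - sandbox5.example.com
          - sandbox6.example.com
  - name: production
    nodes:
      - prod1.example.com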

Puppet Integration

This is more of a general question. It seems like there should be more integration with Puppet in Bolt.

It seems like it would be useful to use bolt to augment puppet modules. It would be great to see support for using the agent TLS authentication for running arbitrary tasks if you already have a traditional puppet architecture. It would also be good to see the integration of Bolt tasks as a Puppet function. I would see this as similar to custom functions or facts.

Something like a <module name>/bolt/bolt_task that you could execute during your puppet run to exec on the client and return the output as a variable.

I see the lack of this type of behavior in Puppet as a big gap and something that pushes people to something like Ansible.

Apologies for asking this here if it's not the right forum. I'm curious whether these types of features are on a future roadmap.

feature request: prompt after execution

Inspired by https://github.com/scoopex/hostctl, it might be good to have an interactive way to check and rework an execution.

This might look like this:

$ bolt command run "ls -l " --nodes foobar.net
Switching teporarily to concurrency of 1
Started on foobar.net.
Finished on foobar.net:
  STDERR:
ls: cannot access '/tmp/lalalal': No such file or directory
Ran on 1 node in 0.77 seconds

Failed on foobar.net:
  The command failed with exit code 1
(c)ontinue, (a)ll, (r)etry, (s)hell, (q)uit
  • continue : execute next host
  • retry: re-execute current host (provide same interactive options after that)
  • shell: open a regular shell on the host to examine the situation (provide same interactive options after closing the shell)
  • quit: stop execution loop
  • all: execute all further hosts without interaction in the defined concurrency

This might help to perform semi-automated operations like distribution upgrades or other invasive tasks.

Enhancement: Run Context to support Executable Inventory

I would like to use a dynamic inventory that I can generate based on the state of my current instances, whether Vagrant, Google Cloud, Azure, or AWS. In these scenarios, I would want to auto-generate a configuration and use it.

This could be supported by stat-ing the inventory file: if it is executable, execute it and use its output; if it is not, assume the normal format.

bolt command run $command --inventoryfile ec2.rb  --nodes stage-web-01
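A minimal sketch of what such an executable inventory script (the hypothetical ec2.rb above) might contain, assuming the proposed behavior of running the file and reading ordinary inventory YAML from its stdout:

#!/usr/bin/env ruby
require 'yaml'

# Placeholder for a real cloud API lookup; hard-coded here for illustration.
nodes = ['stage-web-01.example.com', 'stage-web-02.example.com']

# Emit a normal Bolt inventory document on stdout for Bolt to consume.
puts({ 'groups' => [{ 'name' => 'stage-web', 'nodes' => nodes }] }.to_yaml)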

Bolt should use a configuration inheritance model similar to the one used by 'git'

When running bolt I expected the configuration options to be layered in a manner similar to git's, since some options are repository specific and some are user specific and should not be in a shared repository.

I did not expect the local configuration to override my global configuration completely (though it is documented at https://puppet.com/docs/bolt/0.x/configuring_bolt.html and I simply missed it).

Config inheritance order (from lowest to highest)
On non-Windows systems, the default config paths searched are:

  1. /etc/puppetlabs/bolt/bolt.yaml
  2. ~/.puppetlabs/etc/bolt/bolt.yaml
  3. <boltdir>/bolt.yaml

On Windows systems, the default config paths searched are:

  1. C:\Program Files\Puppet Labs\Bolt\bolt.yaml
  2. %USERPROFILE%/.puppetlabs/etc/bolt/bolt.yaml
  3. <boltdir>/bolt.yaml

TODO:

  • Verify where Puppet looks for equivalent files on Windows
    Global: C:\ProgramData\PuppetLabs\bolt\etc
    Local: C:\Users\<user>\.puppetlabs\etc\bolt
    
  • Spike on how to merge configs. Any config option that's a hash should be evaluated for how it should be merged, and an RFC should be proposed to the Bolt team (one possible merge strategy is sketched after this list).
    https://gist.github.com/beechtom/f7b758893796d61a23c8ab868cf8b062
  • This should include a debug log message about which configs are being loaded and their override precedence.
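A hedged Ruby sketch of one possible merge strategy, assuming "more specific wins" for scalar values and a recursive merge for hash values:

def deep_merge(base, override)
  base.merge(override) do |_key, old_val, new_val|
    if old_val.is_a?(Hash) && new_val.is_a?(Hash)
      deep_merge(old_val, new_val)   # merge nested sections such as ssh/winrm
    else
      new_val                        # the more specific config wins for scalars
    end
  end
end

global  = { 'ssh' => { 'user' => 'deploy', 'run-as' => 'root' } }
project = { 'ssh' => { 'user' => 'ci' }, 'format' => 'json' }
p deep_merge(global, project)
# => {"ssh"=>{"user"=>"ci", "run-as"=>"root"}, "format"=>"json"}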

feature request: possibility to use hostgroups

Comparable to Ansible inventory files, it would be great to support external collections of hosts.
(In addition to "--nodes", a "--group" option.)

Ideally bolt should support loading group definitions from:

  • file
  • url
  • puppetdb

Groups might be based on PuppetDB facts or on manual definitions (a simple file/data stream which defines hostnames in groups).

run_script plan function fails with 'wrong number of arguments (2 for 3)'

➜  bolt --version
0.7.0
➜  cat ./modules/tasks_demo/plans/puppet.pp 
plan tasks_demo::puppet(
  String $nodes,
  String $service_action = 'status',
  String $service_name = 'puppet',
) {

  $nodes_array = split($nodes, ',')

  run_script('tasks_demo/install_puppet_5_agent.sh', $nodes_array)
  #run_command('/opt/puppetlabs/bin/puppet --version', $nodes_array)
  #run_task('service', $nodes_array, action => $service_action, name => $service_name)

}
➜  bolt plan run tasks_demo::puppet nodes=127.0.0.1:32768,127.0.0.1:32769,127.0.0.1:32770 --modulepath ./modules -u root -p root
Error: Evaluation Error: Error while evaluating a Function Call, wrong number of arguments (2 for 3) at /Users/ktreese/workspace/puppet/tasks/modules/tasks_demo/plans/puppet.pp:9:3 on node macbook
Exiting because of an error in Puppet code
➜   
➜  ls -l ./modules/tasks_demo/                    
total 0
drwxr-xr-x  4 reesek  1575811233  136 Nov  9 16:08 files
drwxr-xr-x  6 reesek  1575811233  204 Nov  9 16:08 plans
drwxr-xr-x  6 reesek  1575811233  204 Nov  8 11:14 tasks
➜  ls -l ./modules/tasks_demo/files 
total 40
-rw-r--r--  1 reesek  1575811233   2889 Nov  9 16:08 bashcheck
-rw-r--r--  1 reesek  1575811233  14233 Nov  9 15:40 install_puppet_5_agent.sh

Documentation indicates two positional arguments.

SSH disconnect hangs when using `ProxyCommand`

Environment

  • Ubuntu 16.04
  • Ruby: 2.5.1p57 (from ubuntu package)
  • Bolt: 0.24.0 (from ubuntu package)

Observation

When trying to run a task on Server A, the output stops after printing Started on $HOSTNAME...; no further output is printed and bolt does not return to the shell.
The same command works on Server B: it prints Finished on $HOSTNAME, the stdout of the task, and the final report.
The only difference between Server A and B is that Server A uses the following SSH config:

ProxyCommand ssh $PROXY netcat -w 120 %h %p
User root

The same behaviour as Server A can be reproduced on other servers using a ProxyCommand.

Executing bolt with --debug shows that the command is actually executed:

# [..]
Started on $HOSTNAME
Running task $TASK
Opened session
Executing: mkdir -m 700 /tmp/9268c9c1-4087-4e89-a56a-859e0c280884
Command returned successfully
Executing: chmod u\+x /tmp/9268c9c1-4087-4e89-a56a-859e0c280884/init.rb
Command returned successfully
Executing: /tmp/9268c9c1-4087-4e89-a56a-859e0c280884/init.rb
stdout: {"status":"running","enabled":"true"}
Command returned successfully
Executing: rm -rf /tmp/9268c9c1-4087-4e89-a56a-859e0c280884
Command returned successfully
# never finishes

Browsing the source a little bit, it seems that the closing of the underlying SSH connection/session is the culprit.

Expected behaviour

Server A works like Server B

Using multiple modules directories

I keep modules in two different subdirectories: site for my own code and modules for modules downloaded from the Forge. Puppet can use both directories to look for modules. With Bolt, trying to pass them modulepath-style (as --modules ./site:./modules) or as two parameters (--modules ./site --modules ./modules) ends in either my plan not being found or dependencies not being met.

Using Bolt tasks with PCP fails

Command:

bundle exec bolt task run profile::healthcheck target='ubuntu1404a.pdx.puppet.vm' port='80' -n pcp://ubuntu1404a.pdx.puappet.vm --modules=/etc/puppetlabs/code/environments/production/site

Creates the following error:

2017-10-05T14:59:34.830262  ERROR ubuntu1404a.pdx.puappet.vm: "/etc/puppetlabs/code/environments/production/site/profile/tasks/healthcheck.rb" is not a valid task name. (OrchestratorClient::ApiError::ValidationError)
/root/git/bolt/.bundle/ruby/2.4.0/gems/orchestrator_client-0.2.1/lib/orchestrator_client.rb:59:in `post'
/root/git/bolt/.bundle/ruby/2.4.0/gems/orchestrator_client-0.2.1/lib/orchestrator_client/command.rb:9:in `task'
/root/git/bolt/.bundle/ruby/2.4.0/gems/orchestrator_client-0.2.1/lib/orchestrator_client/job.rb:28:in `start'
/root/git/bolt/.bundle/ruby/2.4.0/gems/orchestrator_client-0.2.1/lib/orchestrator_client.rb:79:in `run_task'
/root/git/bolt/lib/bolt/node/orch.rb:28:in `_run_task'
/root/git/bolt/lib/bolt/node.rb:72:in `run_task'
/root/git/bolt/lib/bolt/executor.rb:52:in `block in run_task'
/root/git/bolt/lib/bolt/executor.rb:23:in `block (2 levels) in on_each'
/root/git/bolt/.bundle/ruby/2.4.0/gems/concurrent-ruby-1.0.5/lib/concurrent/executor/ruby_thread_pool_executor.rb:348:in `run_task'
/root/git/bolt/.bundle/ruby/2.4.0/gems/concurrent-ruby-1.0.5/lib/concurrent/executor/ruby_thread_pool_executor.rb:337:in `block (3 levels) in create_worker'
/root/git/bolt/.bundle/ruby/2.4.0/gems/concurrent-ruby-1.0.5/lib/concurrent/executor/ruby_thread_pool_executor.rb:320:in `loop'
/root/git/bolt/.bundle/ruby/2.4.0/gems/concurrent-ruby-1.0.5/lib/concurrent/executor/ruby_thread_pool_executor.rb:320:in `block (2 levels) in create_worker'
/root/git/bolt/.bundle/ruby/2.4.0/gems/concurrent-ruby-1.0.5/lib/concurrent/executor/ruby_thread_pool_executor.rb:319:in `catch'
/root/git/bolt/.bundle/ruby/2.4.0/gems/concurrent-ruby-1.0.5/lib/concurrent/executor/ruby_thread_pool_executor.rb:319:in `block in create_worker'
/root/git/bolt/.bundle/ruby/2.4.0/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in `block in create_with_logging_context'
ubuntu1404a.pdx.puappet.vm:

"/etc/puppetlabs/code/environments/production/site/profile/tasks/healthcheck.rb" is not a valid task name.

Ran on 1 node in 0.26 seconds

profile::healthcheck is a task that does work via the PE GUI

Need Passwordless Sudo Option

Currently there is no option to use sudo without a password, at least according to the bolt --help usage. This is a common pattern on Linux for organizations that authenticate with private keys for automation.

Recommend having both --sudo and --sudo-password [PASSWORD].
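For comparison, a hedged example of the requested workflow using only the existing flags, assuming the remote account has NOPASSWD sudo configured so that --run-as alone is enough (host name, user, and key path are placeholders):

bolt command run 'systemctl restart nginx' --nodes web01.example.com \
  --user deploy --private-key ~/.ssh/automation_key --run-as root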

Garbled output when running task with --tty

I have a task atask that looks like this and that I run against a CentOS 7 system

#! /bin/bash

cat <<EOF
{ "name": "cat",
EOF
sleep 1  # Do real, slow work here
cat <<EOF
  "legs": 4 }
EOF

When I run this task with bolt task run atask without passing --tty, everything is fine: the output returned is proper JSON. When I run the task with the --tty flag, a spurious {"_task": "atask"} is inserted into the output as if sleep had produced that output, making the result not legal JSON. This behavior seems timing dependent: removing the sleep, or reducing it, to sleep 0.05 makes things work again, even with the --tty flag.

I've tried this with bolt version 0.22.0.

Streaming stdout and stderr

When running a script or command remotely, Bolt should stream the output (stdout and stderr) back in near real time. For a script that takes a long time to complete, it is frustrating to wait for the script output.

allow custom loggers

I would like to use the Bolt gem in Choria, but I do not want Bolt to create its own logger. I'd like to be able to pass in a logger object with the same interface as the Ruby Logger for Bolt to log through; this way I can capture Bolt output completely.

Bolt should support asynchronous communications plugins

To work at scale, Bolt needs to be able to process asynchronous commands.

As a test/example API, I would recommend setting up a file backend that uses an input and output FIFO pipe to get the kinks worked out of the send/receive system.

I feel that the following should be done:

  1. Support independent backend configuration files in, say, /etc/puppetlabs/bolt/backend/<backend_name>.yaml
  • I chose YAML since I can add comments and it's built into native Ruby
  2. There should be a consistent API for setting up any backend service.
  • Since the backend services are independent, only the basic capabilities should be made generic. Perhaps even just Backend.new and Backend.run
  3. Lack of a response to the Bolt CLI should not be considered a failure; for large systems it is quite possible that the system will output logs and results to other systems altogether

Allow user/password entry for WinRM in bolt.yaml

It'd be a nice addition to support specifying the username/password for the WinRM transport in the bolt.yaml config file, instead of passing --user "${user}" --password "${password}" every time WinRM commands/tasks are executed.

Perhaps allow these settings within the winrm block in the bolt.yaml file:

winrm:
  ssl: false
  user: <username>
  password: <password>
  connect-timeout: 30

The parameters in bolt.yaml would serve as defaults, unless overridden by params passed manually on the CLI.

bolt-inventory-pdb: GemNotFoundException

Running on macOS Mojave 10.14 (18A391), installed via brew cask install puppetlabs/puppet/puppet-bolt

vdc@ZL-10546 ~ $ bolt
Usage: bolt <subcommand> <action> [options]

Available subcommands:
  bolt command run <command>       Run a command remotely
  bolt file upload <src> <dest>    Upload a local file
  bolt script run <script>         Upload a local script and run it remotely
  bolt task show                   Show list of available tasks
  bolt task show <task>            Show documentation for task
  bolt task run <task> [params]    Run a Puppet task
  bolt plan show                   Show list of available plans
  bolt plan show <plan>            Show details for plan
  bolt plan run <plan> [params]    Run a Puppet task plan
  bolt puppetfile install          Install modules from a Puppetfile into a Boltdir

Run `bolt <subcommand> --help` to view specific examples.

where [options] are:
    -n, --nodes NODES                Identifies the nodes to target.
                                     Enter a comma-separated list of node URIs or group names.
                                     Or read a node list from an input file '@<file>' or stdin '-'.
                                     Example: --nodes localhost,node_group,ssh://nix.com:23,winrm://windows.puppet.com
                                     URI format is [protocol://]host[:port]
                                     SSH is the default protocol; may be ssh, winrm, pcp, local
                                     For Windows nodes, specify the winrm:// protocol if it has not be configured
                                     For SSH, port defaults to `22`
                                     For WinRM, port defaults to `5985` or `5986` based on the --[no-]ssl setting
    -q, --query QUERY                Query PuppetDB to determine the targets
        --noop                       Execute a task that supports it in noop mode
        --description DESCRIPTION    Description to use for the job
        --params PARAMETERS          Parameters to a task or plan as json, a json file '@<file>', or on stdin '-'
Authentication:
    -u, --user USER                  User to authenticate as
    -p, --password [PASSWORD]        Password to authenticate with. Omit the value to prompt for the password.
        --private-key KEY            Private ssh key to authenticate with
        --[no-]host-key-check        Check host keys with SSH
        --[no-]ssl                   Use SSL with WinRM
        --[no-]ssl-verify            Verify remote host SSL certificate with WinRM
Escalation:
        --run-as USER                User to run as using privilege escalation
        --sudo-password [PASSWORD]   Password for privilege escalation. Omit the value to prompt for the password.
Run context:
    -c, --concurrency CONCURRENCY    Maximum number of simultaneous connections (default: 100)
        --compile-concurrency CONCURRENCY
                                     Maximum number of simultaneous manifest block compiles (default: number of cores)
        --modulepath MODULES         List of directories containing modules, separated by ':'
        --boltdir FILEPATH           Specify what Boltdir to load config from (default: autodiscovered from current working dir)
        --configfile FILEPATH        Specify where to load config from (default: ~/.puppetlabs/bolt/bolt.yaml)
        --inventoryfile FILEPATH     Specify where to load inventory from (default: ~/.puppetlabs/bolt/inventory.yaml)
Transports:
        --transport TRANSPORT        Specify a default transport: ssh, winrm, pcp, local
        --connect-timeout TIMEOUT    Connection timeout (defaults vary)
        --[no-]tty                   Request a pseudo TTY on nodes that support it
        --tmpdir DIR                 The directory to upload and execute temporary files on the target
Display:
        --format FORMAT              Output format to use: human or json
        --[no-]color                 Whether to show output in color
    -h, --help                       Display help
        --verbose                    Display verbose logging
        --debug                      Display debug logging
        --trace                      Display error stack traces
        --version                    Display the version
vdc@ZL-10546 ~ $ bolt --version
1.1.0
vdc@ZL-10546 ~ $ bolt-inventory-pdb
Traceback (most recent call last):
	2: from /opt/puppetlabs/bolt/bin/bolt-inventory-pdb:23:in `<main>'
	1: from /opt/puppetlabs/bolt/lib/ruby/2.5.0/rubygems.rb:308:in `activate_bin_path'
/opt/puppetlabs/bolt/lib/ruby/2.5.0/rubygems.rb:289:in `find_spec_for_exe': can't find gem bolt (>= 0.a) with executable bolt-inventory-pdb (Gem::GemNotFoundException)

WinRM over HTTPS

I cannot seem to connect to my remote machines using WinRM with HTTPS. I assume this is not currently implemented. Below are the commands I have tried.

bolt command run 'get-childitem' --nodes winrm://ZZZZZ.XXXX.com:5986
bolt command run 'get-childitem' --nodes winrm://ZZZZZ.XXXX.com
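For reference, the CLI help reproduced in another issue above lists a --[no-]ssl flag for WinRM, so something along these lines may be worth trying (certificate trust on the workstation is assumed; the host name and credentials are placeholders):

bolt command run 'get-childitem' --nodes winrm://ZZZZZ.XXXX.com:5986 --ssl -u administrator -p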

Feature-request: Preview / read-only run mode for Tasks and Tasks Plans

As a user of bolt, I would really like the ability to "preview" a run without actually executing the specified script. This process would validate that my node list is valid, that all of the specified hostnames / IPs resolve as expected, that connections (PXP or SSH) to these nodes are successful, that script file(s) can be transferred successfully, etc. It would also output the specific hostname and IP resolved to help me troubleshoot issues with DNS / etc/hosts discrepancies.

The output would return information about each stage of the process (connections successful, script file(s) transferred, etc.), but instead of running the actual script, it would just run a dummy command (e.g. echo to STDOUT) in its place. There have been several occasions where I've wanted to test a very specific set of command parameters to bolt against specific servers and pre-validate everything before running, but the actual script I'm trying to run is not idempotent, so I can only run it once. (I realize I could just create a dummy script to accomplish this same goal, but I think having a flag where you can turn this on and off would be a nice feature.)

Additionally, having something similar for Task plans where it provides a "noop" / "readonly" ability would greatly improve the usability of Bolt for me.

Thanks!

Joel

Bolt silently fails on all commands

Bolt is silently failing on all commands, even with verbose and debug logging enabled.
The command and output given are:

bolt command run 'echo $HOME' --nodes <hostname> --debug --trace --verbose
Loaded inventory from /home/adam/.puppetlabs/bolt/inventory.yaml
Analytics opt-out is set, analytics will be disabled
Skipping submission of 'command_run' screenview because analytics is disabled
Started with 100 max thread(s)
Starting: command 'echo $HOME' on <hostname>
Authentication method 'gssapi-with-mic' is not available. Please install the kerberos gem with `gem install net-ssh-krb`
Skipping submission of 'Transport initialize' event because analytics is disabled
Running command 'echo $HOME' on ["<hostname>"]
Running command 'echo $HOME' on <hostname>
Started on <hostname>...
Disabling use_agent in net-ssh: ssh-agent is not available
Finished: command 'echo $HOME' with 1 failure in 0.17 sec
Failed on 1 node: <hostname>
Ran on 1 node in 0.17 seconds

The bolt.yaml config file contains:

color: true
format: human
ssh:
  private-key: ~/.ssh/puppet-bolt
  user: puppet-bolt

/var/log/auth.log shows bolt closing the connection, and nothing else:
sshd[5761]: Connection closed by 192.168.X.X port 53270 [preauth]

SSHing with that user and private key manually works fine, and can run the command as expected.

Both bolt and the target VM are running Debian Stretch, and I am running bolt 1.0.0

Thanks.

Bolt SSH Key Verification Failure

I am unable to use bolt command run with an SSH configuration (~/.ssh/config) that uses Hostname, User, Port, IdentityFile, etc. Meanwhile, ssh $machine $command works fine.

I deleted my ~/.ssh/known_hosts just in case this was a part of the problem.

STEPS TO REPRODUCE:

I ran this:

vagrant up
vagrant ssh-config >> ~/.ssh/config
ssh tools.dev hostname ### <========================== SUCCESS
bolt command run 'hostname' --nodes tools.dev  ### <== FAILS

My Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# default constants
TIME = Time.now.strftime('%Y%m%dT%H%M%S')
VAGRANT_BOX = 'bento/ubuntu-14.04'
CONFIGFILE_HOSTS='./config/hosts'

# build hosts hash
hosts = {}
File.readlines(CONFIGFILE_HOSTS).map(&:chomp).each do |line|
  ipaddr, hostname = line.split(/\s+/)             # only grab first two columns
  hosts[hostname] = ipaddr                         # store in hash
  PRIMARY_SYSTEM = hostname if (line =~ /primary/) # match primary
end

Vagrant.configure('2') do |config|
  hosts.each do |hostname, ipaddr|
    default = if hostname == PRIMARY_SYSTEM then true else false end
    config.vm.define hostname, primary: default do |node|
      node.vm.box = VAGRANT_BOX
      node.vm.hostname = hostname
      node.vm.network 'private_network', ip: ipaddr
      node.vm.provider('virtualbox') do |vbox|
        vbox.name = "#{hostname}_#{TIME}"
        vbox.memory = 1024 if hostname =~ /es-[0-9]{2}/
        vbox.cpus = 2 if hostname =~ /es-[0-9]{2}/
      end
    end
  end
end

And sourced configuration in config/hosts:

172.16.0.30	tools.dev	primary
172.16.0.31	es-01.dev
172.16.0.32	es-02.dev
172.16.0.33	es-03.dev

EXPECTED RESULTS

Since it works with ssh, I expected no problem with bolt.

ACTUAL RESULTS

Using the --debug option:

2018-04-26T16:27:46.565471 DEBUG  Bolt::Inventory: Did not find config for tools.dev in inventory
2018-04-26T16:27:46.566197 DEBUG  Bolt::Executor: Started with 100 max thread(s)
2018-04-26T16:27:46.566307 INFO   Bolt::Executor: Starting: command 'hostname' on ["tools.dev"]
2018-04-26T16:27:46.623426 DEBUG  Bolt::Transport::SSH: Authentication method 'gssapi-with-mic' is not available
2018-04-26T16:27:46.623810 INFO   Bolt::Executor: Running command 'hostname' on ["tools.dev"]
2018-04-26T16:27:46.623969 DEBUG  Bolt::Transport::SSH: Running command 'hostname' on tools.dev
Started on tools.dev...
2018-04-26T16:27:46.649479 INFO   Bolt::Executor: {"node":"tools.dev","status":"failure","result":{"_error":{"kind":"puppetlabs.tasks/connect-error","msg":"Host key verification failed for tools.dev: fingerprint 77:87:4f:9c:cc:ec:e6:99:26:3e:f2:2f:5b:87:98:ac is unknown for \"[127.0.0.1]:2222\"","details":{},"issue_code":"HOST_KEY_ERROR"}}}
Failed on tools.dev:2018-04-26T16:27:46.649879 INFO   Bolt::Executor: Finished: command 'hostname' on 1 node with 1 failure

  Host key verification failed for tools.dev: fingerprint 77:87:4f:9c:cc:ec:e6:99:26:3e:f2:2f:5b:87:98:ac is unknown for "[127.0.0.1]:2222"
Failed on 1 node: tools.dev
Ran on 1 node in 0.08 seconds

The vagrant ssh-config will automatically generate a configuration like this:

Host tools.dev
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /path/to/vagrant-project/.vagrant/machines/tools.dev/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL
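As a hedged workaround given the flags in the CLI help quoted elsewhere in this list (--[no-]host-key-check), disabling host key checking on the bolt side should mirror the StrictHostKeyChecking no line that vagrant ssh-config generates:

bolt command run 'hostname' --nodes tools.dev --no-host-key-check

or, per node or group, in the config/inventory file:

ssh:
  host-key-check: false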

ssh configs in inventory file being ignored when using --sudo cli option

I've been trying to find a way to avoid typing a ton of flags when I run bolt. Currently, all my bolt commands require sudo, which requires tty. This means I have to pass all three of the following on the command line for each run of bolt: --sudo --tty --run-as root. This is in addition to having a unique inventory file for each of my role/profile type modules, which means I also have to use --inventoryfile inventory.yaml.

After reviewing the docs, it appears I could at least set some of these options in the group configs inside the inventory. So I did this:

groups:
  - name: sbx_haproxy
    config:
      ssh:
        run-as: root
        tty: true
    nodes:
      - proxy1.test.example.org
      - proxy2.test.example.org
  - name: sbx_ha
    config:
      ssh:
        run-as: root
        tty: true
    groups:
      - name: sbx_application
        nodes:
          - app1.test.example.org
          - app2.test.example.org
          - app3.test.example.org
          - app4.test.example.org

Then I ran this:

bolt script run "tasks/destroy.sh" --sudo --inventoryfile inventory.yaml --nodes sbx_ha

But this results in the following output (repeated for each node in the group):

Failed to clean up tempdir '/bolt/4812b322-71eb-4ef3-abc7-edc68da9f840': sudo: sorry, you must have a tty to run sudo

Failed on proxy1.test.example.org:
  Could not change owner of '/bolt/4812b322-71eb-4ef3-abc7-edc68da9f840' to root: sudo: sorry, you must have a tty to run sudo
Failed to clean up tempdir '/bolt/67e3f85c-cf65-41ae-ad75-ed3bd809c0b9': sudo: sorry, you must have a tty to run sudo

It appears that when --sudo is used, the other configs listed for the group aren't loaded.

Sudo password prompt

Hi there,

I just tried out the latest version of bolt, and it's really great to see the sudo support! Currently it only works if you supply the sudo password through the command line interface. I would prefer to also have a password prompt for the sudo password (the same as there is now for the SSH password). I guess it would only require a couple of extra lines of code.

Current lines of code for the password option:

        opts.on('-p', '--password [PASSWORD]',
                'Password to authenticate with (Optional).',
                'Omit the value to prompt for the password.') do |password|
          if password.nil?
            STDOUT.print "Please enter your password: "
            results[:password] = STDIN.noecho(&:gets).chomp
            STDOUT.puts
          else
            results[:password] = password
          end
        end

But this is not done for the sudo_password option:

        opts.on('--sudo-password [PASSWORD]',
                'Password for privilege escalation') do |password|
          options[:sudo_password] = password
        end

It would really be great if this could also be implemented :)
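A hedged sketch of what the requested change might look like, simply mirroring the existing -p/--password handler quoted above (this is a sketch against the snippets in this issue, not the actual patch):

        opts.on('--sudo-password [PASSWORD]',
                'Password for privilege escalation.',
                'Omit the value to prompt for the password.') do |password|
          if password.nil?
            # Prompt without echoing, exactly as the -p/--password handler does.
            STDOUT.print "Please enter your privilege escalation password: "
            options[:sudo_password] = STDIN.noecho(&:gets).chomp
            STDOUT.puts
          else
            options[:sudo_password] = password
          end
        end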

feature request: node lists separated by spaces

Bolt currently only accepts comma-separated node lists.

This is often a bit awkward if you work in Unix environments, because things like this are not possible:

bolt command run "df -h" -n ceph-cluster-{001..111} # using shell expansion
awk '/Host ceph-mon/{print $2}' ~/.ssh/config|xargs echo bolt command run "df -h" -n

Bolt should accept multiple arguments for "--nodes".
If an argument contains a comma, the argument is split into multiple host names.

Bolt should support a gem-based plugin architecture

I was speaking to a few colleagues at the conference and we realized that the best option for Bolt plugins may be to follow in Beaker's footsteps and have a gem-based plugin architecture.

This would keep Bolt itself a minimal, easily tested architecture and allow the addition of user plugins in the easiest manner possible.

In theory, adding a Bolt::Backend object that does the following should suffice:

  1. Discover the plugin configuration file
  • Each configuration file could, in theory, just default to /etc/puppetlabs/bolt/backend/<backend_name>.yaml
  2. Read the configuration file and merge the hash with the global CLI options
  • debug
  • verbose
  • config file path
  • ???
  3. Discover the available plugins installed on the system and allow them to be processed
  4. Call Bolt::Backend::<backend_name>.new
  5. Configure the object
  6. Execute the backend

Asynchronous backends will probably need a send/receive mode that can be independently executed

Improve Readme documentation to include more examples about how to use Bolt with different PQL queries

In the past, MCO made it fairly easy to run arbitrary scripts against a specific set of machines based on the results of different fact values. However, since starting to use Bolt, I've been struggling to find good examples of how to do this. Can the Readme file be improved to include some more complex examples of using PQL queries with Bolt? For example:

bolt task run package action='status' name='openssl' -q 'inventory {facts.os.name = "windows" and facts.fqdn ~ "puppetdemos.net"}' --transport winrm -u 'administrator' -p 'puppetlabs'

I would also suggest including a link with more information on how to leverage PQL queries in general, if such documentation already exists.
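A couple of hedged examples of the kind of queries that could be documented (the fact paths and host patterns here are illustrative, not taken from existing documentation):

# all RedHat-family nodes
bolt command run 'uptime' -q 'inventory { facts.os.family = "RedHat" }'

# nodes whose certname matches a pattern
bolt task run package action='status' name='openssl' -q 'nodes[certname] { certname ~ "^web" }'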

Allow SUDO password prompt

In our ecosystem we require any sudo task to prompt for a password; as a result, we need Bolt to be able to request the password from the user in a secure manner, similar to playbooks.

feature request: a summary of the executions

It would be nice to have a summary of all executions performed in a bolt run.

This would be useful for managing problems during execution and for gathering information from hosts. The summary should be reusable as a list of hosts.

Example:
If you gather information about swap space usage of your systems with the following execution, the results are hard to work with if you use bolt on many hundreds of hosts:

bolt command run "swapon -s|grep partition" -n `hget all`

A result like this would be nice:

$ bolt command run "swapon -s|grep partition" -n `hget all`
....
Ran on 216 nodes in 13.60 seconds
Successful nodes (exitcode 0): foo-ca-001,foo-cdh01,foo-cdh02,foo-cdh03,foo-ci01,foo-computing-001,foo-drop02,foo-git01-001, ....
Failed nodes (exitcode not 0): foo-vm190,foo-vm191,foo-vm192,foo-vm194,foo-vm195,foo-vm196,foo-vm197,foo-vm198...

$ bolt command run "swapoff -a" -n oo-vm190,foo-vm191,foo-vm192,foo-vm194,foo-vm195,foo-vm196,foo-vm197,foo-vm198...

feature request: ability to resolve IPs from PuppetDB

Hi!

I'm sorry if this is already implemented, but unfortunately I was not able to find any references. I'm new to Bolt.

Part of our infrastructure is hosted in an AWS environment and is dynamic - new nodes are created and destroyed automatically every now and then. Some of them do not have DNS names set.

As I can see, Bolt is capable of performing queries against PuppetDB. And PuppetDB has different facts gathered from connected nodes, including the nodes' IP addresses.

It would be great if Bolt could use these facts to connect to nodes via ssh.

For example now, if I execute

# bolt command run "hostname -f" --query 'inventory { certname ~ "certname_mask" }'

I get

Failed to connect to certname_mask_1: getaddrinfo: Name or service not known
Failed to connect to certname_mask_2: getaddrinfo: Name or service not known

Thanks!

Regards,
Sergey

Kill in-progress command/task/script on target

When hitting Ctrl-C, I'd like to be able to kill in-progress execution on target nodes to prevent them from completing a command that may cause problems or take a long time.

One possible UX for this is to Ctrl-C once to prevent starting any new execution (which would then wait for all in-progress execution to complete), then Ctrl-C again to kill any in-progress execution.

FEATURE: Add nodes file

Please add the ability to use a JSON or YAML file with a list of nodes to execute on.

For example:

bolt command run 'rpm -qa | grep openssl' --nodes nodes.yaml

nodes:
  - node1
  - node2
  - node3

bolt would then iterate through the list of nodes
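Note that the CLI help reproduced in an earlier issue already supports reading a node list from a file with the '@<file>' syntax; a hedged example, assuming a plain text file with one node per line:

bolt command run 'rpm -qa | grep openssl' --nodes @nodes.txt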

Allow ~ in path to list of nodes

Right now, support for ~ is not uniform across bolt's parameters. It can be used in the path to a script but cannot be used in the path to a list of nodes.

For example, this works fine:

╔ ☕️  gene:~
╚ᐅ bolt script run ~/Downloads/esxi-hyperthreading-mitigation.sh \
> -u root -p $esxipw --no-host-key-check -n some-node.example.com

But this fails:

╔ ☕️  gene:~
╚ᐅ file ~/Downloads/hosts.csv
/Users/gene/Downloads/ci-hosts.csv: ASCII text

╔ ☕️  gene:~
╚ᐅ bolt script run ~/Downloads/esxi-hyperthreading-mitigation.sh \
> -u root -p $esxipw --no-host-key-check -n @~/Downloads/hosts.csv
Error attempting to read ~/Downloads/hosts.csv: No such file or directory @ rb_sysopen - ~/Downloads/hosts.csv

╔ ☕️  gene:~   [exit code 1]
╚ᐅ

I can work around this by using ${HOME} instead of ~ but would much prefer to be able to use the shorter version. An example of the workaround would be

╔ ☕️  gene:~
╚ᐅ bolt script run ~/Downloads/esxi-hyperthreading-mitigation.sh \
> -u root -p $esxipw --no-host-key-check -n @${HOME}/Downloads/hosts.csv

bolt command line: add to nodes when --nodes is passed multiple times

If I run this:

bolt command run --nodes=web5 --nodes=web6 'rpm -q puppet'

it only runs on the last host (or hosts given): "web6" in this case.

Motivation is that I would use it in combination with this shell pattern:

bolt command run --nodes=web{5,6,7,8}.mydomain.edu 'echo hi, this is bolt'

or, even more likely:

bolt command run --nodes={web{5,6,7,8},elasticsearch{1,2,3}.subdomain}.mydomain.edu 'echo hi, this is bolt'

which would expand to:

bolt command run --nodes=web5.mydomain.edu --nodes=web6.mydomain.edu --nodes=web7.mydomain.edu --nodes=web8.mydomain.edu --nodes=elasticsearch1.subdomain.mydomain.edu --nodes=elasticsearch2.subdomain.mydomain.edu --nodes=elasticsearch3.subdomain.mydomain.edu 'echo hi, this is bolt'
