
oh-my-vagrant's Introduction


Oh My Vagrant!


Status:

This project was always meant to be a useful tool for its author. Others found it helpful, and I was happy to share and help improve it. From my perspective, it's now mostly feature-complete, and since I've been focusing most of my time on mgmt, there aren't any major planned changes coming. As a result, please use, share and enjoy it, but development is going to be limited to what the community provides. We have other maintainers, so I'm not a bottleneck on any patch reviews or merges! Happy hacking! -- purpleidea

Documentation:

Please see: DOCUMENTATION.md or PDF.

Questions:

Come join us in #vagrant on Freenode!

Installation:

Please read the INSTALL file for instructions on getting this installed.

Examples:

Please look in the examples/ folder for usage. If none exist, please contribute one!

Module-specific notes:

Dependencies:

Note: If you are using VirtualBox as your hypervisor, there is no need to depend on vagrant-libvirt.

Patches:

We'd love to have your patch! Please send it by email, or as a pull request.

Happy hacking!

oh-my-vagrant's People

Contributors

aweiteka, br0ziliy, clasohm, flavio-fernandes, goern, johbro, josephfrazier, mairin, ncoghlan, purpleidea, rtnpro, rtweed, scollier, zeten30


oh-my-vagrant's Issues

virtualbox support should be added

While I personally don't need/want this, I'm happy to have it included if someone volunteers to patch it and test it. It should be a fairly small patch. Other providers such as openstack or aws could be added too.

library or app-with-plugins

The current design of OMV seems to be heading towards the "app with plugins" model. The UX here is that a user git clones OMV, then can further choose plugins like examples/kubernetes-ansible.yaml.

Without having played with this very much (so take it with a grain of salt), I feel it'd be better if OMV were a shared library, specifically one designed to be used as a git submodule.

The thing is - what I'd like to have for Project Atomic is a git repository that, when cloned from master, is known to work for starting up an Atomic cluster. With the current model, as OMV expands with more functionality, it seems quite possible that this will get harder to maintain. There are so many factors in the mix here: base box type (CentOS/Fedora), base box upgrade mechanism (traditional, atomic), target functionality (k8s, puppet, other?).

Thoughts?

Vagrantfile not parseable in 0.0.31

I've just installed oh-my-vagrant-0.0.31-1.noarch from COPR on F21. Running omv init in a clean directory gives me:

There is a syntax error in the following Vagrantfile. The syntax error
message is reproduced below for convenience:

/usr/share/oh-my-vagrant/vagrant/Vagrantfile:1520: syntax error, unexpected end-of-input, expecting keyword_end

Downgrading to oh-my-vagrant-0.0.30-1.noarch helped.

Need to put some documentation on why we need a patched version of the vagrant-hostmanager plugin

subscription-manager can't get password

Hey Mr Subin, please have a look; it seems that subscription-manager is not able to ask me for my password.

[goern@rh-t540p vagrant (master)]$ vagrant up
Bringing machine 'template1' up with 'libvirt' provider...
==> template1: Creating image (snapshot of base box volume).
==> template1: Creating domain with the following settings...
==> template1: -- Name: template_1410426463_8f875aa729a319697b31
==> template1: -- Domain type: kvm
==> template1: -- Cpus: 1
==> template1: -- Memory: 512M
==> template1: -- Base box: rhel-7.0-purpleidea
==> template1: -- Storage pool: default
==> template1: -- Image: /var/lib/libvirt/images/template_1410426463_8f875aa729a319697b31.img
==> template1: -- Volume Cache: default
==> template1: -- Kernel:
==> template1: -- Initrd:
==> template1: -- Command line :
==> template1: Starting domain.
==> template1: Waiting for domain to get an IP address...
==> template1: Waiting for SSH to become available...
==> template1: Starting domain.
==> template1: Waiting for domain to get an IP address...
==> template1: Waiting for SSH to become available...
==> template1: Creating shared folders metadata...
==> template1: Setting hostname...
==> template1: Rsyncing folder: /home/goern/Source/oh-my-vagrant/vagrant/ => /vagrant
==> template1: Configuring and enabling network interfaces...
==> template1: Running provisioner: shell...
template1: Running: inline script
==> template1: Notice: /Host[localhost.localdomain]/ensure: created
==> template1: host { 'localhost.localdomain':
==> template1: ensure => 'present',
==> template1: host_aliases => ['localhost'],
==> template1: ip => '127.0.0.1',
==> template1: target => '/etc/hosts',
==> template1: }
==> template1: Running provisioner: shell...
template1: Running: inline script
==> template1: Notice: /Host[template1]/ensure: removed
==> template1: host { 'template1':
==> template1: ensure => 'absent',
==> template1: }
==> template1: Running provisioner: shell...
template1: Running: inline script
==> template1: Notice: /Host[template.example.com]/ensure: created
==> template1: host { 'template.example.com':
==> template1: ensure => 'present',
==> template1: host_aliases => ['template'],
==> template1: ip => '192.168.128.3',
==> template1: target => '/etc/hosts',
==> template1: }
==> template1: Running provisioner: shell...
template1: Running: inline script
==> template1: Notice: /Host[puppet.example.com]/ensure: created
==> template1: host { 'puppet.example.com':
==> template1: ensure => 'present',
==> template1: host_aliases => ['puppet'],
==> template1: ip => '192.168.128.2',
==> template1: target => '/etc/hosts',
==> template1: }
==> template1: Running provisioner: shell...
template1: Running: inline script
==> template1: Notice: /Host[template1.example.com]/ensure: created
==> template1: host { 'template1.example.com':
==> template1: ensure => 'present',
==> template1: host_aliases => ['template1'],
==> template1: ip => '192.168.128.100',
==> template1: target => '/etc/hosts',
==> template1: }
==> template1: Running provisioner: shell...
template1: Running: inline script
==> template1: /usr/lib64/python2.7/getpass.py:83: GetPassWarning: Can not control echo on the terminal.
==> template1: passwd = fallback_getpass(prompt, stream)
==> template1: Warning: Password input may be echoed.
==> template1: Password:
==> template1: Invalid username or password. To create a login, please visit https://www.redhat.com/wapps/ugc/register.html
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

chmod +x /tmp/vagrant-shell && /tmp/vagrant-shell

Stdout from the command:

Stderr from the command:

/usr/lib64/python2.7/getpass.py:83: GetPassWarning: Can not control echo on the terminal.
passwd = fallback_getpass(prompt, stream)
Warning: Password input may be echoed.
Password:
Invalid username or password. To create a login, please visit https://www.redhat.com/wapps/ugc/register.html

[goern@rh-t540p vagrant (master)]$ cat vagrant.yaml

:domain: example.com
:network: 192.168.128.0/24
:image: rhel-7.0-purpleidea
:sync: rsync
:puppet: false
:docker: true
:cachier: false
:vms: []
:namespace: template
:count: 1
:username: [email protected]
:password:
:poolid: []
:repos: []
[goern@rh-t540p vagrant (master)]$ cd ..
[goern@rh-t540p oh-my-vagrant (master)]$ git pull
Already up-to-date.
[goern@rh-t540p oh-my-vagrant (master)]$

Inline shell commands should be more human readable

If hashicorp/vagrant#5607 is merged, this patch becomes a lot simpler.
If not, I suppose we could add a bunch of echo blah blah blah to the top of each inline script, but that would be an ugly, ugly hack.
This is an easy patch, assuming the above is merged.
Unfortunately it would force a new version of vagrant on users.
So let's do this, but only once the above patch is in the current Fedora.
If we can make it detect the vagrant version and react accordingly, that's okay for now :)

How to define a shell script in `omv.yaml`

I have in omv.yaml:

:shell:
- :script: echo hi > /home/vagrant/hello

But it doesn't get executed, and there's no info in vagrant.log. Am I doing something wrong? I'm using the oh-my-vagrant 1.0.1 rpm.
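For comparison, the sample omv.yaml in the "[bug] shell script seems broken" issue further down this page spells the key as script: (no leading colon) and adds a once: flag; whether either difference matters for the behaviour described above is an assumption, not a confirmed diagnosis:

:shell:
# Sketch only: key spelled script: and the once: flag, both taken from the
# sample omv.yaml quoted later on this page.
- script: echo hi > /home/vagrant/hello
  once: true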

[RFE] Ability to configure various libvirt provisioner parameters

This has annoyed me for a long time; time to do something about it :)
I use quite a few storage pools in my libvirt setup, and I always have to change https://github.com/purpleidea/oh-my-vagrant/blob/master/vagrant/Vagrantfile#L1526 manually to use a different storage pool.
Now, I'm thinking of having something like this in omv.yaml:

:provisioner_options:
  :libvirt:
    - driver: kvm
      graphics_type: spice
      video_type: qxl
      connect_via_ssh: false
      username: root
      storage_pool_name: default
  :virtualbox:
    - ...

This is quite a big change, and before diving deep I'd like a second opinion: is it worth doing it like this, or should I just introduce another variable, libvirt_storage_pool, and be done with it?
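For contrast, a minimal sketch of the simpler alternative mentioned above, assuming a single new key in omv.yaml (the name libvirt_storage_pool comes from this issue; its exact spelling and placement are assumptions):

# Hypothetical single-key alternative in omv.yaml:
:libvirt_storage_pool: 'default'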

When using atomic host as guest, second iface won't come up automatically

I'm on Fedora 21:

$ rpm -q vagrant vagrant-libvirt oh-my-vagrant
vagrant-1.7.2-9.fc21.1.noarch
vagrant-libvirt-0.0.24-5.fc21.noarch
oh-my-vagrant-1.0.0-1.noarch

$ vagrant plugin list
vagrant-hostmanager (1.5.0)
  - Version Constraint: 1.5.0
vagrant-libvirt (0.0.30, system)

$ vagrant box list
atomic-rhel-7.1 (libvirt, 0)

In guest:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:59:93:22 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.183/24 brd 192.168.121.255 scope global dynamic eth0
       valid_lft 2937sec preferred_lft 2937sec
    inet6 fe80::5054:ff:fe59:9322/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 52:54:00:05:24:1c brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 scope global docker0
       valid_lft forever preferred_lft forever

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
PERSISTENT_DHCLIENT="yes"

$ cat /etc/sysconfig/network-scripts/ifcfg-eth1
#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.127.100
NETMASK=255.255.255.0
DEVICE=eth1
PEERDNS=no
#VAGRANT-END

$ systemctl status NetworkManager
NetworkManager.service - Network Manager
   Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled)
   Active: active (running) since Mon 2015-08-24 12:02:41 UTC; 15min ago
 Main PID: 606 (NetworkManager)
   CGroup: /system.slice/NetworkManager.service
           ├─606 /usr/sbin/NetworkManager --no-daemon
           └─677 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0.pid ...

$ systemctl status network
network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network)
   Active: inactive (dead)

After reboot it still stays down. Bringing it up manually works:

# ifup eth1

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:59:93:22 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.183/24 brd 192.168.121.255 scope global dynamic eth0
       valid_lft 2562sec preferred_lft 2562sec
    inet6 fe80::5054:ff:fe59:9322/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:05:24:1c brd ff:ff:ff:ff:ff:ff
    inet 192.168.127.100/24 brd 192.168.127.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe05:241c/64 scope link 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 scope global docker0
       valid_lft forever preferred_lft forever

Any tips for more info, how to debug, or how to fix?

/etc/hosts entry for omv

In my omv.yaml file, I disabled creation of any omv VMs:

:namespace: omv
:count: 0

However, when I take a look at the /etc/hosts file on a VM, I see:

$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

## vagrant-hostmanager-start
192.168.123.100 master1.example.com master1
192.168.123.101 master2.example.com master2
192.168.123.102 master3.example.com master3
192.168.123.103 node1.example.com   node1
192.168.123.3   omv.example.com omv

This includes an entry for omv.example.com. Is there any way to disable that?

Fedora naming scheme leads to unidentified Atomic Host

OMV uses is_atomic = vm.vm.box.start_with? 'atomic-' to identify if a host is an Atomic Host, but Fedora names its boxes something like fedora/23-atomic-host or fedora/23-cloud-base, so OMV will not identify the Atomic Host.

[bug] shell script seems broken

[goern@rh-t540p-goern-example-com ose3-vagrant (master)]$ cdtmp
[goern@rh-t540p-goern-example-com tmp.RSt1Xs81ba]$ omv init
Current machine states:

omv1                      not created (libvirt)

The Libvirt domain is not created. Run `vagrant up` to create it.

[goern@rh-t540p-goern-example-com tmp.RSt1Xs81ba]$ time omv up
Bringing machine 'omv1' up with 'libvirt' provider...
==> omv1: Creating image (snapshot of base box volume).
==> omv1: Creating domain with the following settings...
==> omv1:  -- Name:              omv_omv1
==> omv1:  -- Domain type:       kvm
==> omv1:  -- Cpus:              1
==> omv1:  -- Memory:            512M
==> omv1:  -- Base box:          centos7
==> omv1:  -- Storage pool:      default
==> omv1:  -- Image:             /var/lib/libvirt/images/omv_omv1.img
==> omv1:  -- Volume Cache:      default
==> omv1:  -- Kernel:            
==> omv1:  -- Initrd:            
==> omv1:  -- Graphics Type:     vnc
==> omv1:  -- Graphics Port:     5900
==> omv1:  -- Graphics IP:       127.0.0.1
==> omv1:  -- Graphics Password: Not defined
==> omv1:  -- Video Type:        cirrus
==> omv1:  -- Video VRAM:        9216
==> omv1:  -- Keymap:            en-us
==> omv1:  -- Command line : 
==> omv1: Starting domain.
==> omv1: Waiting for domain to get an IP address...
==> omv1: Waiting for SSH to become available...
==> omv1: Starting domain.
==> omv1: Waiting for domain to get an IP address...
==> omv1: Waiting for SSH to become available...
==> omv1: Creating shared folders metadata...
==> omv1: Setting hostname...
==> omv1: Rsyncing folder: /usr/share/oh-my-vagrant/vagrant/ => /vagrant
==> omv1: Configuring and enabling network interfaces...
==> omv1: Updating /etc/hosts file on active guest machines...

real    0m39.757s
user    0m1.712s
sys 0m0.319s

[goern@rh-t540p-goern-example-com tmp.RSt1Xs81ba]$ vssh
[vagrant@omv1 ~]$ file /tmp/world.txt
/tmp/world.txt: cannot open (No such file or directory)
[vagrant@omv1 ~]$ logout
Connection to 192.168.121.100 closed.

[goern@rh-t540p-goern-example-com tmp.RSt1Xs81ba]$ cat omv.yaml 

---
:domain: example.com
:network: 192.168.123.0/24
:image: centos7
:cpus: ''
:memory: ''
:boxurlprefix: ''
:sync: rsync
:folder: ''
:extern: []
:puppet: false
:classes: []
:shell:
- script: echo "hello" >/tmp/world.txt
  once: true
:docker: false
:kubernetes: false
:ansible: []
:playbook: []
:ansible_extras: {}
:cachier: false
:vms: []
:namespace: omv
:count: 1
:username: ''
:password: ''
:poolid: true
:repos: []
:update: false
:nested: false
:comment: ''
:reallyrm: false

[goern@rh-t540p-goern-example-com tmp.RSt1Xs81ba]$ vdestroy 
Unlocking shell provisioning for: omv1...
==> omv1: Removing domain...
==> omv1: Updating /etc/hosts file on active guest machines...

atomic host upgrade leaves box unusable

Bringing up an (RHEL) Atomic Host and performing atomic host upgrade leaves the host unusable. The network interface configuration seems to be screwed up.

Unknown configuration section 'hostmanager'.

When I try to use OMV on my F21/libvirt system with a fairly vanilla omv.yaml file, I get an error:

$ vagrant status
There are errors in the configuration of this machine. Please fix
the following errors and try again:

Vagrant:
* Unknown configuration section 'hostmanager'.

I've noticed that if I comment out this section in the OMV Vagrantfile, the issue seems to go away:

    #
    #   hostmanager
    #
    # TODO: does this plugin still mess up the real vagrant-libvirt hostname ?
    config.hostmanager.enabled = true 
    config.hostmanager.manage_host = false  # don't manage local /etc/hosts
    config.hostmanager.ignore_private_ip = false
    config.hostmanager.include_offline = true   # manage all the hosts!
    config.hostmanager.fqdn_friendly = true     # fqdns need to work...
    config.hostmanager.domain_name = domain     # use this domain name!
    config.hostmanager.extra_hosts = [
        {
            :host => "#{vip_hostname}.#{domain}",
            :ip => "#{vip_ip}",
            :aliases => ["#{vip_hostname}"],
        }
    ]

There's a TODO comment in there; perhaps it's about the issue I am experiencing?

DNS management should be done with a modified vagrant-hostmanager instead of Puppet

Currently /etc/hosts is managed with puppet and the shell provisioner, because it was an easy hack to set up when this tool was first written.

Since omv has become more generally useful, we should patch it so that puppet isn't required in the base image. This is needed to support images without puppet, such as atomic.

Some patches were needed for vagrant-hostmanager:

https://github.com/purpleidea/vagrant-hostmanager

They need to be tested and cleaned up (available in feature branches).

OMV then needs to add this feature; that work is also available in a feature branch.

If you want to work on this, please ping me, and I'll make sure my latest testing version is online.

Cheers!

Problematic /etc/hosts file entry in provisioned VM

When I provision a VM with oh-my-vagrant, I see this /etc/hosts file in the provisioned VM:

[vagrant@omv1 ~]$ cat /etc/hosts
127.0.0.1   omv1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

## vagrant-hostmanager-start
192.168.123.100 omv1

## vagrant-hostmanager-end

The problematic line is:

127.0.0.1   omv1 localhost localhost.localdomain localhost4 localhost4.localdomain4

The hostname shouldn't resolve to the 127.0.0.1 IP address. This can cause problems for various daemons that resolve the hostname and take 127.0.0.1 as their IP, when in this case it should be 192.168.123.100. The correct /etc/hosts file in this case should be:

[vagrant@omv1 ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

## vagrant-hostmanager-start
192.168.123.100 omv1

## vagrant-hostmanager-end

I'm willing to create a patch for this issue, but I wasn't able to identify what is responsible for this setting. Could you help me with this?

[RFE] implement update feature for atomic hosts

update: true will result in a yum update inside the box; this is pretty nice if you run RHEL/CentOS/Fedora Server, but it is useless on Atomic Hosts.

Implement update so that an Atomic Host is identified and atomic host upgrade is called instead.

Do not try to unsubscribe uninitialized machines

When running a vdestroy, omv tries to unsubscribe machines that have never been initialized. Perhaps omv should check the status of the machine before trying to unsubscribe it. To reproduce the issue:

Example:

$ vs
Current machine states:

master1                   running (libvirt)
master2                   running (libvirt)
master3                   running (libvirt)
node1                     not created (libvirt)


$ vdestroy node1
Running 'subscription-manager unregister' on: node1...
==> node1: Domain is not created. Please run `vagrant up` first.
==> node1: Remove stale volume...
==> node1: Domain is not created. Please run `vagrant up` first.

[RFE] Ability to configure various libvirt VM parameters

I miss the following functionality in OMV:

  • ability to adjust number of CPUs/set RAM size for a VM
  • ability to assign additional HDD(s) to a VM

I saw some code blocks related to the HDDs, but I don't see any issues about adding RAM/CPU parameters to omv.yaml and having the Vagrantfile modified accordingly.
Is it worth looking at, or are there strong reasons why it's still not implemented?
If the only reason is lack of time, I can try to come up with a patch.

RFE: have the shell provisioner be extendable

I would like to have an extendable shell provisioning step on vagrant up|provision, so that, besides the shell provisioning steps inside the Vagrantfile, additional (user-provided) shell scripts get executed. So it may be a good idea to have a provision.d/ within the vagrant/ directory.

Thanks James, Christoph

We should provide an option to update the machine

Some users might not have a completely up-to-date base box.
They might also want to ensure they have the latest packages if they are hacking on something that changes very often (eg: kubernetes).
Since we live in the dark ages of good internet, it might be nice to offer to update the vagrant box on first up, instead of requiring the user to do this manually, since it might be faster than re-downloading or building a new base image.

This should be a config option named 'update' or similar. It would be true/false, with a default of false.
It should only run on first up, and so should be in the "run only once" part of the Vagrantfile, e.g. here:
https://github.com/purpleidea/oh-my-vagrant/blob/master/vagrant/Vagrantfile#L764 (I think).

It should run the yum update command with the shell provisioner...
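A minimal sketch of how this could look in omv.yaml, assuming it reuses the :update: boolean that already appears in the sample config shown earlier on this page:

# omv.yaml sketch: opt in to a yum update on the first `vagrant up`.
# The run-once and default-false semantics described above are assumptions.
:update: true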

Cheers,
James

PS: Bonus points for a future version of the patch:
Once we can detect atomic (not merged yet), then do an atomic update instead...

kubernetes should be easily integrated into omv

Flags to set up kubernetes out of the box should be included. This would make it easy to prototype docker clusters. In parallel, it might make sense to do this and/or integrate with atomic run commands.

If kubernetes integration is added, it's not clear whether the scripting to do this should be added natively, or via a puppet/ansible add-on or similar. It should probably be built into omv natively.
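For reference, the sample omv.yaml shown earlier on this page already carries a :kubernetes: boolean, so presumably the integration discussed here would be toggled like this (exactly what the flag sets up is what this issue asks to define):

# omv.yaml sketch: enable the proposed kubernetes integration.
:kubernetes: true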

add port forwarding to omv.yaml

Add a section to omv.yaml so that port forwarding can be configured by Oh-My-Vagrant. This would complement vfwd. Something like

forward:
  8443:master:8433
  80:master:8080

would be nice.
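One way the proposal above might be written as valid YAML, keeping the entries exactly as proposed (the key name and the port:host:port ordering are assumptions, not an implemented format):

# omv.yaml sketch: hypothetical port-forwarding section, complementing vfwd.
:forward:
- '8443:master:8433'
- '80:master:8080'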

Ansible runs N number of times (where N=vms.size)

Hi James,

The Ansible provisioner in Vagrant is designed to run once per host by default.
We worked around this by setting ansible.limit='all' in 9607d41.
This introduced another issue: the provisioner will run as many times as there are VMs defined in the :vms: [] array in omv.yaml.
A quick and dirty patch is attached (just to demonstrate the idea), but it introduces another issue: with this it's impossible to define multiple playbooks per host in omv.yaml (one global playbook in the :playbook: [] array, and one "per-vm" playbook in the :vms: array).

Now, I don't really see the use case of having multiple playbooks per host (as you do in examples/ansible.yml) - if you agree, I will submit a merge request with this patch.
If you disagree - please share your thoughts with me so I can think of another solution.
ansible_provisioner_number.txt

Thanks.
vk

puppet repos by default

In my omv.yaml file, I have the following, which is the default:

:puppet: false

omv is still installing these repos:

puppetlabs-deps                                                                                                                                       | 2.5 kB  00:00:00     
puppetlabs-products                                                                                                                                   | 2.5 kB  00:00:00     

I'd prefer that no puppet repos were added by default when the flag is set to false.

/etc/hosts is not updated on vdestroy

When running vdestroy on a node, omv states that it is updating the /etc/hosts file on active guest machines.

$ vdestroy node1
Unlocking shell provisioning for: node1...
Running 'subscription-manager unregister' on: node1...
Connection to 192.168.121.72 closed.
System has been unregistered.
==> node1: Removing domain...
==> node1: Updating /etc/hosts file on active guest machines...

However, when I log into an active machine and check /etc/hosts the entry is still there:

$ vssh master2

[vagrant@master2 ~]$ sudo -i

[root@master2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

## vagrant-hostmanager-start
192.168.123.100 master1.example.com master1
192.168.123.101 master2.example.com master2
192.168.123.102 master3.example.com master3
192.168.123.103 node1.example.com   node1
192.168.123.3   omv.example.com omv

Should omv / the hostmanager plugin actually be updating it? My version of vagrant on the host:

$ rpm -qa vagrant
vagrant-1.7.2-7.fc21.1.noarch
