
dodai-deploy's People

Contributors

guanxiaohua2k6, kazu-tanaka, wangyjsh

dodai-deploy's Issues

create-volume-group.sh doesn't persist loopback devices

I use dodai-deploy's create-volume-group.sh script to create a loopback device for a volume group. After rebooting, however, the volume group disappears, which can be seen by issuing

sudo vgdisplay

I found that I can persist the loopback device by adding an upstart job on Ubuntu Server (as explained here). A pointer to this in the documentation would be nice, or the script could be improved to do something similar automatically.
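
For reference, a minimal sketch of such an upstart job (the job name, loop device, and image path are assumptions; match them to whatever create-volume-group.sh actually created on your system):

sudo tee /etc/init/volume-group-loop.conf <<'EOF'
# Hypothetical upstart job: re-attach the loopback device backing the VG at boot.
description "re-create loopback device for the volume group"
start on filesystem
task
exec /sbin/losetup /dev/loop0 /var/lib/nova/volume-group.img
EOF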

nova-api should be installed last in the nova proposal

After running into some problems with nova-api's meta-data server and UEC guest VMs being unable to import public keys from it, it seems one has to restart the nova-api service for the meta-data server to configure correctly.

I was doing a 1-node, all-default dodai-deploy install of OpenStack, so after consulting some of the folks at #openstack, we suspect the issue might be that nova-api is installed before nova-compute; nova-api should be installed last. To quote the kind person who helped me fix my problem:

zynzel 15:07:34
kermit666: my idea is that you install nova-compute after nova-api
so nova-api was starting before compute so it dont start metadata-api
but this is my guess

I posted a more detailed problem and solution description at AskUbuntu.
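
For anyone hitting the same symptom, the restart mentioned above is simply (assuming Ubuntu's upstart, which dodai-deploy targets):

sudo restart nova-api    # or: sudo service nova-api restart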

dodai-deploy on 2 nodes

Hi,

As you suggested, I'm contacting you by email about a problem I'm facing with my dodai-deploy installation.
http://www.guanxiaohua2k6.com/2012/04/install-openstack-nova-essexmultiple.html?showComment=1342077925664#c6214097199447870768

After I checked that both servers were running, I was able to install keystone on the localhost, but when I tried to install glance, the proposal showed as installed even though glance is nonexistent on the server.
The job_server.log is:

{:operation=>"install", :params=>{:proposal_id=>"2"}}
Start install[proposal - 2]
install components
Determining the amount of hosts matching filter for 5 seconds .... 1

1 / 1

[#<MCollective::RPC::Result:0x7f94d0cb51f8 @results={:data=>{:output=>"\e[0;32minfo: Creating a new SSL key for server2.gemalto.com\e[0m\n\e[0;32minfo: Caching certificate for ca\e[0m\n\e[0;32minfo: Creating a new SSL certificate request for server2.gemalto.com\e[0m\n\e[0;32minfo: Certificate Request fingerprint (md5): 7C:8A:3B:7C:B2:4E:4A:62:03:FF:F9:95:49:83:03:CB\e[0m\n\e[0;32minfo: Caching certificate for server2.gemalto.com\e[0m\n\e[0;32minfo: Caching certificate_revocation_list for ca\e[0m\n\e[0;32minfo: Caching catalog for server2.gemalto.com\e[0m\n\e[0;32minfo: Applying configuration version '1342079262'\e[0m\n\e[0;32minfo: Creating state file /var/lib/puppet/state/state.yaml\e[0m\n\e[0;36mnotice: Finished catalog run in 0.03 seconds\e[0m\n"}, :statuscode=>0, :statusmsg=>"OK", :sender=>"Server2.gemalto.com"}, @action="runonce", @agent="puppetd">]
--- !ruby/object:MCollective::RPC::Result
action: runonce
agent: puppetd
results:
:data:
:output: "\e[0;32minfo: Creating a new SSL key for server2.gemalto.com\e[0m\n
\e[0;32minfo: Caching certificate for ca\e[0m\n
\e[0;32minfo: Creating a new SSL certificate request for server2.gemalto.com\e[0m\n
\e[0;32minfo: Certificate Request fingerprint (md5): 7C:8A:3B:7C:B2:4E:4A:62:03:FF:F9:95:49:83:03:CB\e[0m\n
\e[0;32minfo: Caching certificate for server2.gemalto.com\e[0m\n
\e[0;32minfo: Caching certificate_revocation_list for ca\e[0m\n
\e[0;32minfo: Caching catalog for server2.gemalto.com\e[0m\n
\e[0;32minfo: Applying configuration version '1342079262'\e[0m\n
\e[0;32minfo: Creating state file /var/lib/puppet/state/state.yaml\e[0m\n
\e[0;36mnotice: Finished catalog run in 0.03 seconds\e[0m\n"
:statuscode: 0
:statusmsg: OK
:sender: Server2.gemalto.com
install[proposal - 2] finished
{:operation=>"test", :params=>{:proposal_id=>"2"}}
Start test[proposal - 2]
Determining the amount of hosts matching filter for 5 seconds .... 1

1 / 1

[#<MCollective::RPC::Result:0x7f94d0c0e3f8 @results={:data=>{:output=>"\e[0;32minfo: Caching catalog for server2.gemalto.com\e[0m\n\e[0;32minfo: Applying configuration version '1342079262'\e[0m\n\e[0;36mnotice: Finished catalog run in 0.03 seconds\e[0m\n"}, :statuscode=>0, :statusmsg=>"OK", :sender=>"Server2.gemalto.com"}, @action="runonce", @agent="puppetd">]
--- !ruby/object:MCollective::RPC::Result
action: runonce
agent: puppetd
results:
:data:
:output: "\e[0;32minfo: Caching catalog for server2.gemalto.com\e[0m\n
\e[0;32minfo: Applying configuration version '1342079262'\e[0m\n
\e[0;36mnotice: Finished catalog run in 0.03 seconds\e[0m\n"
:statuscode: 0
:statusmsg: OK
:sender: Server2.gemalto.com
test[proposal - 2] finished

and the YAML file on the second node:


"File[/var/lib/puppet/facts]":
!ruby/sym checked: 2012-07-12 09:51:45.268143 +02:00
!ruby/sym synced: 2012-07-12 09:51:45.137156 +02:00
"Class[Main]":
!ruby/sym checked: 2012-07-12 10:07:00.142167 +02:00
"File[/var/lib/puppet/ssl/private_keys]":
!ruby/sym checked: 2012-07-12 09:51:45.271698 +02:00
!ruby/sym synced: 2012-07-12 09:51:45.142976 +02:00
"File[/etc/puppet/puppet.conf]":
!ruby/sym checked: 2012-07-12 09:51:46.763554 +02:00
!ruby/sym configuration: {}
"File[/var/lib/puppet/ssl/certificate_requests]":
!ruby/sym checked: 2012-07-12 09:51:45.279479 +02:00
!ruby/sym synced: 2012-07-12 09:51:45.159266 +02:00
"Stage[main]":
!ruby/sym checked: 2012-07-12 10:07:00.143555 +02:00
"Class[Glance_e]":
!ruby/sym checked: 2012-07-12 10:07:00.140399 +02:00
"File[/var/lib/puppet/client_data]":
!ruby/sym checked: 2012-07-12 09:51:46.764492 +02:00
!ruby/sym synced: 2012-07-12 09:51:45.149191 +02:00
"File[/var/lib/puppet/ssl/private]":
!ruby/sym checked: 2012-07-12 09:51:45.277418 +02:00
!ruby/sym synced: 2012-07-12 09:51:45.154989 +02:00
"File[/var/lib/puppet/clientbucket]":
!ruby/sym checked: 2012-07-12 09:51:46.767397 +02:00
!ruby/sym synced: 2012-07-12 09:51:45.157621 +02:00
"File[/var/lib/puppet/client_yaml]":
!ruby/sym checked: 2012-07-12 09:51:46.766446 +02:00
!ruby/sym synced: 2012-07-12 09:51:45.153167 +02:00
"File[/var/lib/puppet/lib]":
!ruby/sym checked: 2012-07-12 09:51:45.278580 +02:00
!ruby/sym synced: 2012-07-12 09:51:45.156239 +02:00
"File[/var/log/puppet]":
!ruby/sym checked: 2012-07-12 09:51:45.269008 +02:00
"File[/var/lib/puppet]":
!ruby/sym checked: 2012-07-12 09:51:45.267087 +02:00
"File[/var/lib/puppet/state]":
!ruby/sym checked: 2012-07-12 09:51:45.276414 +02:00
!ruby/sym synced: 2012-07-12 09:51:45.150469 +02:00
"File[/etc/puppet]":
!ruby/sym checked: 2012-07-12 09:51:45.266200 +02:00
"File[/var/lib/puppet/ssl/public_keys]":
!ruby/sym checked: 2012-07-12 09:51:45.272900 +02:00
!ruby/sym synced: 2012-07-12 09:51:45.144669 +02:00
"Class[Settings]":
!ruby/sym checked: 2012-07-12 10:07:00.135979 +02:00
"File[/var/run/puppet]":
!ruby/sym checked: 2012-07-12 09:51:45.275386 +02:00
!ruby/sym synced: 2012-07-12 09:51:45.147741 +02:00
"File[/var/lib/puppet/ssl]":
!ruby/sym checked: 2012-07-12 09:51:45.270091 +02:00
!ruby/sym synced: 2012-07-12 09:51:45.140926 +02:00
"File[/var/lib/puppet/state/graphs]":
!ruby/sym checked: 2012-07-12 09:51:46.765384 +02:00
!ruby/sym synced: 2012-07-12 09:51:45.151799 +02:00
"Filebucket[puppet]":
!ruby/sym checked: 2012-07-12 10:07:00.137387 +02:00
"File[/var/lib/puppet/ssl/certs]":
!ruby/sym checked: 2012-07-12 09:51:45.274179 +02:00
!ruby/sym synced: 2012-07-12 09:51:45.146368 +02:00

Is there another step I need to follow to completely install glance, like running a "puppet apply", or is it something else?

Thanks a lot for your time,
Narjisse
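
One quick way to narrow this down (a sketch, not an official dodai-deploy step) is to trigger a puppet run by hand on the second node and see whether any Glance resources show up in the applied catalog, and whether the packages ever arrived:

sudo puppetd --test --verbose    # Puppet 2.x agent; prints the resources it applies
dpkg -l | grep glance            # check whether the glance packages were installed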

Nova's proposal Test failed

After a successful installation of the "openstack nova diablo" software, the test failed.

// Log's output:

info: Caching catalog for ostack-node2.gfi.su
info: Applying configuration version '1322480186'
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: rm: cannot remove `nova.zip': No such file or directory
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Archive: nova.zip
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: extracting: env/novarc
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: extracting: env/pk.pem
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: extracting: env/cert.pem
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: extracting: env/cacert.pem
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: default None None icmp -1 -1 0.0.0.0/0
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: ApiError: {'to_port': -1, 'cidr': u'0.0.0.0/0', 'from_port': -1, 'protocol': 'icmp', 'parent_group_id': 1L} - This rule already exists in group
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: default None None tcp 22 22 0.0.0.0/0
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: ApiError: {'to_port': 22, 'cidr': u'0.0.0.0/0', 'from_port': 22, 'protocol': 'tcp', 'parent_group_id': 1L} - This rule already exists in group
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Reading package lists...
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Building dependency tree...
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Reading state information...
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: cloud-utils is already the newest version.
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: 0 upgraded, 0 newly installed, 0 to remove and 60 not upgraded.
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: /tmp/nova/image_kvm.tgz
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: ttylinux-uec-amd64-12.1_2.6.35-22_1-floppy
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: ttylinux-uec-amd64-12.1_2.6.35-22_1.img
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: ttylinux-uec-amd64-12.1_2.6.35-22_1-loader
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Checking image
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Encrypting image
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Splitting image...
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Part: kernel.part.00
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Generating manifest /tmp/kernel.manifest.xml
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Checking bucket: mybucket
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Uploading manifest file
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Uploading part: kernel.part.00
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Uploaded image as mybucket/kernel.manifest.xml
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Checking image
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Encrypting image
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Splitting image...
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Part: ramdisk.part.00
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Generating manifest /tmp/ramdisk.manifest.xml
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Checking bucket: mybucket
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Uploading manifest file
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Uploading part: ramdisk.part.00
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Uploaded image as mybucket/ramdisk.manifest.xml
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Checking image
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Encrypting image
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Splitting image...
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Part: image.part.00
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Part: image.part.01
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Generating manifest /tmp/image.manifest.xml
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Checking bucket: mybucket
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Uploading manifest file
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Uploading part: image.part.00
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Uploading part: image.part.01
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Uploaded image as mybucket/image.manifest.xml
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: image: [<?xml]
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: euca-run-instances <?xml -k mykey -t m1.tiny
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Warning: failed to parse error message from AWS: :2:66: unclosed token
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Traceback (most recent call last):
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: File "/usr/bin/euca-run-instances", line 246, in <module>
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: main()
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: File "/usr/bin/euca-run-instances", line 237, in main
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: euca.display_error_and_exit('%s' % ex)
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: File "/usr/lib/python2.7/dist-packages/euca2ools/__init__.py", line 1435, in display_error_and_exit
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: dom = minidom.parseString(msg)
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: File "/usr/lib/python2.7/xml/dom/minidom.py", line 1924, in parseString
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: return expatbuilder.parseString(string)
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 940, in parseString
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: return builder.parseString(string)
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 223, in parseString
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: parser.Parse(string, True)
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: xml.parsers.expat.ExpatError: unclosed token: line 1, column 87
notice: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: Executing command euca-run-instances failed.
err: /Stage[main]/Nova_api::Test/Exec[/tmp/nova/test.sh image_kvm.tgz nova admin 2>&1]/returns: change from notrun to 0 failed: /tmp/nova/test.sh image_kvm.tgz nova admin 2>&1 returned 1 instead of one of [0] at /etc/puppet/manifests/packages/nova.pp:143
notice: Finished catalog run in 66.16 seconds

//------------------------------------

// OS Ubuntu 11.04

/* Config items */

network_ip_range 10.0.0.0/24
libvirt_type kvm
glance_host 169.254.0.1
user admin
password admin
email [email protected]
project nova
dashboard_port 8000

/* Node configs */

// ostack-node2.gfi.su:

dashboard_r46
mysql
nova_api
nova_compute
nova_network
nova_objectstore
nova_scheduler
nova_volume
rabbitmq

/* Component configs */

// dashboard_r46
// local_settings.py:

import os

DEBUG = True
TEMPLATE_DEBUG = DEBUG
PROD = False
USE_SSL = False

LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(LOCAL_PATH, 'dashboard_openstack.sqlite3'),
    },
}

CACHE_BACKEND = 'dummy://'

# Configure these for your outgoing email host
EMAIL_HOST = 'smtp.my-company.com'
EMAIL_PORT = 25
EMAIL_HOST_USER = 'djangomail'
EMAIL_HOST_PASSWORD = 'top-secret!'

NOVA_DEFAULT_ENDPOINT = 'http://<%= nova_api %>:8773/services/Cloud'
NOVA_DEFAULT_REGION = 'nova'
NOVA_ACCESS_KEY = os.environ.get("EC2_ACCESS_KEY", "")
NOVA_SECRET_KEY = os.environ.get("EC2_SECRET_KEY", "")
NOVA_ADMIN_USER = '<%= user %>'
NOVA_PROJECT = '<%= project %>'

// nova_compute

/etc/nova/nova-compute.conf --libvirt_type=<%= libvirt_type %>

/* Software configs */
// /etc/nova/nova.conf:

--sql_connection=mysql://root:nova@<%= mysql %>/nova

--s3_host=<%= nova_objectstore %>

--rabbit_host=<%= rabbitmq %>
--cc_host=<%= nova_api %>
--ec2_url=http://<%= nova_api %>:8773/services/Cloud

--daemonize=1

--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge

--FAKE_subdomain=ec2

--ca_path=/var/lib/nova/CA
--keys_path=/var/lib/nova/keys
--networks_path=/var/lib/nova/networks
--instances_path=/var/lib/nova/instances
--images_path=/var/lib/nova/images
--buckets_path=/var/lib/nova/buckets

--vlan_interface=eth1
--flat_interface=eth1
--volume_group=<%= nova_volume %>

--network_manager=nova.network.manager.VlanManager

--libvirt_type=<%= libvirt_type %>

--glance_api_servers=<%= glance_host %>:9292

--image_service=nova.image.glance.GlanceImageService

--lock_path=/var/lib/nova/tmp
--logdir=/var/log/nova
--verbose
--use_deprecated_auth=true

// EOF

Could you help me correct this issue?
Maybe I did something wrong...

Unable to delete demo tenant/project and example users

Greetings,

After successfully deploying with dodai-deploy, I am running into the issue of not being able to remove the demo tenant/project or the example users. It always breaks authentication for the admin user. Can you verify this?

The nova-network wasn't started.

After I installed nova, I checked the status of the nova-network service with the following command and found that it wasn't started.

$ status nova-network
nova-network stop/waiting

BTW, there are error messages in /etc/nova/nova-network:

2011-11-07 10:44:37,499 CRITICAL nova [-] Unexpected error while running command.
Command: sudo iptables-save -t filter
Exit code: 1
Stdout: ''
Stderr: 'sudo: iptables-save: command not found\n'
(nova): TRACE: Traceback (most recent call last):
(nova): TRACE: File "/usr/bin/nova-network", line 49, in <module>
(nova): TRACE: service.wait()
(nova): TRACE: File "/usr/lib/python2.7/dist-packages/nova/service.py", line 357, in wait
(nova): TRACE: _launcher.wait()
(nova): TRACE: File "/usr/lib/python2.7/dist-packages/nova/service.py", line 107, in wait
(nova): TRACE: service.wait()
(nova): TRACE: File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 166, in wait
(nova): TRACE: return self._exit_event.wait()
(nova): TRACE: File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
(nova): TRACE: return hubs.get_hub().switch()
(nova): TRACE: File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 177, in switch
(nova): TRACE: return self.greenlet.switch()
(nova): TRACE: File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 192, in main
(nova): TRACE: result = function(*args, **kwargs)
(nova): TRACE: File "/usr/lib/python2.7/dist-packages/nova/service.py", line 77, in run_server
(nova): TRACE: server.start()
(nova): TRACE: File "/usr/lib/python2.7/dist-packages/nova/service.py", line 137, in start
(nova): TRACE: self.manager.init_host()
(nova): TRACE: File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 1002, in init_host
(nova): TRACE: self.driver.init_host()
(nova): TRACE: File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 404, in init_host
(nova): TRACE: iptables_manager.apply()
(nova): TRACE: File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 685, in inner
(nova): TRACE: retval = f(*args, **kwargs)
(nova): TRACE: File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 312, in apply
(nova): TRACE: attempts=5)
(nova): TRACE: File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 735, in _execute
(nova): TRACE: return utils.execute(*cmd, **kwargs)
(nova): TRACE: File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 188, in execute
(nova): TRACE: cmd=' '.join(cmd))
(nova): TRACE: ProcessExecutionError: Unexpected error while running command.
(nova): TRACE: Command: sudo iptables-save -t filter
(nova): TRACE: Exit code: 1
(nova): TRACE: Stdout: ''
(nova): TRACE: Stderr: 'sudo: iptables-save: command not found\n'
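
The Stderr line suggests sudo simply cannot find iptables-save on its PATH rather than anything nova-specific. A quick check (a sketch, assuming Ubuntu defaults where iptables-save lives in /sbin):

sudo -i which iptables-save          # expect /sbin/iptables-save
sudo grep secure_path /etc/sudoers   # /sbin should appear in sudo's secure_path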

Swift storage cannot be installed on a different machine from the swift proxy.

According to the log, the following error occurred.

notice: /Stage[main]/Swift_storage::Install/Exec[/tmp/swift/storage-init.sh sdb1 3 2>&1]/returns: File "/usr/lib/python2.7/gzip.py", line 89, in __init__

notice: /Stage[main]/Swift_storage::Install/Exec[/tmp/swift/storage-init.sh sdb1 3 2>&1]/returns: fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb')

notice: /Stage[main]/Swift_storage::Install/Exec[/tmp/swift/storage-init.sh sdb1 3 2>&1]/returns: IOError: [Errno 2] No such file or directory: '/etc/swift/account.ring.gz'
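
The missing /etc/swift/account.ring.gz suggests the ring files built on the proxy node never reached the storage node. One workaround (a sketch; the hostname is a placeholder) is to copy them over by hand after the proxy install finishes:

scp /etc/swift/*.ring.gz <storage-node>:/etc/swift/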

The connection between mcollective and activemq was reset.

Here is the related error message from file activemq.log.

2012-12-05 08:38:08,595 | INFO  | Transport failed: java.net.SocketException: Connection reset | org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ Transport: tcp:///54.248.155.182:42958

Cannot log into openstack web interface.

I am at the step where I now log into the OpenStack web interface. However, nothing is responding on port 80 when I go to either ubuntu1 or its IP address. What services should I check?

Anonymous
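
A couple of things worth checking (a sketch; note that the dashboard_port shown in the configs elsewhere in these issues is 8000, so port 80 may simply be the wrong port):

sudo netstat -tlnp | grep -E ':(80|8000)'   # is anything listening at all?
sudo service apache2 status                 # the dashboard is typically served by Apache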

Nova compute installation failed on multi-node environment

I ran into a strange problem. When I tried to install the nova-compute component on more than 4 nodes, intermittently some of the nodes reported a puppet error: "Error 400 on SERVER: Could not find class ...". Here is a log segment from /var/log/syslog for your reference:

Oct 24 09:33:11 puppetcilent2 puppet-agent[41721]: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class nova_e::nova_compute::install for puppetcilent2.dodai.com on node puppetcilent2.dodai.com
Oct 24 09:33:11 puppetcilent2 puppet-agent[41721]: Not using cache on failed catalog
Oct 24 09:33:11 puppetcilent2 puppet-agent[41721]: Could not retrieve catalog; skipping run
Oct 24 09:35:05 puppetcilent2 puppet-agent[42131]: Caching catalog for puppetcilent2.dodai.com
Oct 24 09:35:05 puppetcilent2 puppet-agent[42131]: Applying configuration version '1351042243'
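
Since the failure is intermittent, one thing worth ruling out (a sketch, assuming Puppet's standard module autoloading) is whether the class file is actually in place on the puppet master when the agents ask for it:

ls /etc/puppet/modules/nova_e/manifests/nova_compute/install.pp   # should define nova_e::nova_compute::install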

dodai-deploy and centos

Hello,

I was curious to know how hard it would be to get dodai-deploy working on CentOS?

I'm trying to find an easy way to automate VM creation on my openstack, and your tool looks like it might be what I want.

But we are CentOS/RedHat based, and your software says it supports Ubuntu.

Outside of the setup.sh script, are there a lot of other Ubuntu-specific files?

Thanks
Chris

Error Installing Folsom Swift on Ubuntu 12.04

I get the following error when trying to install Folsom Swift:

info: Loading facts in /etc/puppet/modules/apt/lib/facter/apt_version.rb
info: Caching catalog for cn01040804.ecloud.nii.ac.jp
info: Applying configuration version '1353906220'
notice: /Stage[main]/Swift_f::Common/File[/etc/swift/cert.crt]/ensure: defined content as '{md5}80e307e35b94689794f6bdfdf2edcba9'
notice: /Stage[main]/Swift_f::Common/File[/etc/swift/cert.key]/ensure: defined content as '{md5}2647a92f16f9b6ec3b75dc4cb1dabe9a'
notice: /Stage[main]/Swift_f::Common/File[/etc/swift/swift.conf]/ensure: defined content as '{md5}92a37059ce0675c373c4292054d50b81'
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Package[swift]/ensure: ensure changed 'purged' to 'present'
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Package[swift-proxy]/ensure: ensure changed 'purged' to 'present'
notice: /Stage[main]/Swift_f::Swift_proxy::Install/File[/etc/swift/proxy-server.conf]/ensure: defined content as '{md5}bbe923b2e959d1e9907393e05c660276'
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Package[swauth]/ensure: ensure changed 'purged' to 'present'
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: No proxy-server running
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: Device z1-136.187.33.85:6002/sdb1_"" with 100.0 weight got id 0
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: Device z1-136.187.33.85:6001/sdb1_"" with 100.0 weight got id 0
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: Device z1-136.187.33.85:6000/sdb1_"" with 100.0 weight got id 0
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: Reassigned 262144 (100.00%) partitions. Balance is now 0.00.
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: Reassigned 262144 (100.00%) partitions. Balance is now 0.00.
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: Reassigned 262144 (100.00%) partitions. Balance is now 0.00.
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: Starting proxy-server...(/etc/swift/proxy-server.conf)
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: Traceback (most recent call last):
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: File "/usr/bin/swift-proxy-server", line 22, in <module>
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: run_wsgi(conf_file, 'proxy-server', default_port=8080, **options)
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: File "/usr/lib/python2.7/dist-packages/swift/common/wsgi.py", line 126, in run_wsgi
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: log_to_console=kwargs.pop('verbose', False), log_route='wsgi')
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: File "/usr/lib/python2.7/dist-packages/swift/common/utils.py", line 608, in get_logger
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: raise e
notice: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: socket.error: [Errno 111] Connection refused
err: /Stage[main]/Swift_f::Swift_proxy::Install/Exec[/tmp/swift/proxy-init.sh sdb1 1 2>&1]/returns: change from notrun to 0 failed: /tmp/swift/proxy-init.sh sdb1 1 2>&1 returned 1 instead of one of [0] at /etc/puppet/modules/swift_f/manifests/swift_proxy/install.pp:29
notice: Finished catalog run in 26.57 seconds

Any support would be greatly appreciated.

Glance installation failed due to "ImportError: No module named httplib2"

When I installed glance, the following error occurred.

Traceback (most recent call last):
File "/usr/bin/glance-manage", line 47, in <module>
import glance.registry.db
File "/usr/lib/python2.7/dist-packages/glance/registry/__init__.py", line 24, in <module>
from glance.registry import client
File "/usr/lib/python2.7/dist-packages/glance/registry/client.py", line 26, in <module>
from glance.common.client import BaseClient
File "/usr/lib/python2.7/dist-packages/glance/common/client.py", line 13, in <module>
from glance.common import auth
File "/usr/lib/python2.7/dist-packages/glance/common/auth.py", line 33, in <module>
import httplib2
ImportError: No module named httplib2
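
The fix that usually follows from this traceback (a sketch, assuming the stock Ubuntu package name) is simply installing the missing Python module:

sudo apt-get install python-httplib2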

issue with nova essex install

The error logs indicate that the packages were not installed because -y was used without the --force-yes option.
I was wondering if you might know what went wrong. Thank you in advance for taking the time to respond to my inquiry.

-Sean-

Swift installation failed due to "/etc/swift cannot be created."

Due to the following error, swift installation failed.

notice: /Stage[main]/Swift_common/File[/tmp/swift]/ensure: created
err: /Stage[main]/Swift_common/File[/etc/swift]/ensure: change from absent to directory failed: Could not set 'directory on ensure: Could not find user swift at /etc/puppet/manifests/packages/swift.pp:51
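
The "Could not find user swift" part suggests the File[/etc/swift] resource ran before the swift packages (which normally create that user) were installed. As a stopgap (a sketch; ordinarily the package, not the admin, should create this account):

sudo adduser --system --group swift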

Glance installation failed due to "Could not find template"

err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find template '2/etc/glance/glance-registry.conf.erb' at /etc/puppet/manifests/packages/glance.pp:36 on node ip-10-3-4-16
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run

Problem when testing nova.

notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: image: [ami-00000003]
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: KEYPAIR mykey
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: euca-run-instances ami-00000003 -k mykey -t m1.tiny
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: instance: [i-00000001]
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: pending
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: pending
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: pending
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: pending
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: pending
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: pending
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: pending
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: pending
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: pending
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: pending
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: pending
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: pending
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Instance status: error
notice: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: Running instance failed because the instance status wasn't "launching" after 120 seconds.
err: /Stage[main]/Nova_e::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 2>&1]/returns: change from notrun to 0 failed: /var/lib/nova/test.sh image_kvm.tgz 2>&1 returned 1 instead of one of [0] at /etc/puppet/modules/nova_e/manifests/nova_api/test.pp:18
notice: Finished catalog run in 202.38 seconds

Cannot SSH or Ping instances on Folsom Nova

I installed Folsom compute on Ubuntu 12.04 without issue. When I launched an instance with IP 10.0.0.3, I was unable to ping or SSH to the instance.

When I installed Essex on a previous setup, I could SSH and ping the instances without issue (although network performance was very slow, ~1 MB/s).

I noticed the virbr0 adapter has an address of 192.168.122.1.

Could there be an issue with the setup?
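
Before digging into virbr0, it is worth confirming that the default security group allows ICMP and SSH, since missing rules produce exactly these symptoms (a sketch using the Folsom-era nova client; the rules may already exist in your setup):

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0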

proposal install fail

When I exec "proposal install" to set up Swift, it always fails.

The log shows:

err: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error ArgumentError: Invalid resource type concat at /etc/puppet/modules/apt/manifests/init.pp:59 on node dodai-proxy
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run
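
"Invalid resource type concat" usually means the apt module's dependency on a concat module is not met. One possible fix (a sketch, assuming the puppet module tool is available on the server):

puppet module install puppetlabs-concat   # or place a concat module under /etc/puppet/modules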

Hadoop installation failed due to "Could not find template"

err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find template '4/conf/core-site.xml.erb' at /etc/puppet/manifests/packages/hadoop.pp:38 on node ip-10-3-4-16
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run

Connection refused

I have tried installing OpenStack Essex on a 3-node deployment, with 1 controller, 1 image, and 1 compute node. After starting up the web UI and adding the nodes, I made a proposal to install keystone on my controller. When I finished creating the proposal and tried installing the software, I got a message that the install had failed. The error log says:

info: Loading facts in /etc/puppet/modules/apt/lib/facter/apt_version.rb
err: Could not retrieve catalog from remote server: Connection refused - connect(2)
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run
err: Could not send report: Connection refused - connect(2)

Does anyone know what the issue is? The architecture is 3 nodes connected through their eth0's to a common switch, which is also connected to the internet.
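
"Connection refused" while retrieving the catalog points at the puppet master rather than the proposal itself. Two quick checks (a sketch; <server> is a placeholder for the dodai-deploy server, and 8140 is puppet's default port):

sudo service puppetmaster status   # on the dodai-deploy server
telnet <server> 8140               # from the failing node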

Node not visible

I'm experiencing a problem with my 8-node setup that I'm trying to control using dodai-deploy. One of the slave nodes doesn't appear on the "add node" list in the web interface of the master node. It also isn't listed when I issue:

ruby script/cli.rb node_candidate list

All the other slave nodes are working correctly and are showing up on the web interface. I installed the dodai-deploy client on all the slave nodes the same way (using cluster-ssh), and I can ping in both directions between the master and the troublesome slave node. This is the nmap output for the troublesome node (excluding the details of the IP and hostname, which appear to be fine):

rDNS record for xx.xx.xx.xx:
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
2049/tcp open nfs

Do you have any idea how I might go about debugging the problem? Is there some way to test that the slave daemon is running OK?
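
Two checks that usually answer this (a sketch, assuming the dodai-deploy client is MCollective-based, as the job_server logs elsewhere in these issues suggest):

mco ping                          # on the master; every healthy node should answer
sudo service mcollective status   # on the troublesome slave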

Fail to install any components on client node

The issue seems just like Issue #20. I have two nodes: 192.168.3.1 (puppetserver.dodai.com) for the dodai server and 192.168.3.2 (clientnode.dodai.com) for the dodai client node. I saw the proposal's status change to "installed", but nothing happened on the client node. Here's some info for your reference. Thanks a lot.

dodai-deploy/log/job_server.log:

[#<MCollective::RPC::Result:0x7fc47449f3f0 @agent="puppetd", @action="runonce", @results={:sender=>"clientnode.dodai.com", :data=>{:output=>"\e[0;32minfo: Caching catalog for clientnode.dodai.com\e[0m\n\e[0;32minfo: Applying configuration version '1346037947'\e[0m\n\e[0;36mnotice: Finished catalog run in 0.02 seconds\e[0m\n"}, :statuscode=>0, :statusmsg=>"OK"}>]
--- !ruby/object:MCollective::RPC::Result
action: runonce
agent: puppetd
results:
:sender: clientnode.dodai.com
:data:
:output: "\e[0;32minfo: Caching catalog for clientnode.dodai.com\e[0m\n
\e[0;32minfo: Applying configuration version '1346037947'\e[0m\n
\e[0;36mnotice: Finished catalog run in 0.02 seconds\e[0m\n"
:statuscode: 0
:statusmsg: OK
install[proposal - 2] finished

dodai-deploy/log/development.log:

Started GET "/proposals" for 10.239.37.23 at Mon Aug 27 01:56:05 -0400 2012
Processing by ProposalsController#index as
^[[1m^[[36mProposal Load (0.5ms)^[[0m ^[[1mSELECT "proposals".* FROM "proposals"^[[0m
^[[1m^[[35mSoftware Load (0.2ms)^[[0m SELECT "softwares".* FROM "softwares" WHERE "softwares"."id" = 4 LIMIT 1
Rendered layouts/_menu.html.erb (2.1ms)
Rendered proposals/index.html.erb within layouts/application (33.1ms)
Completed 200 OK in 120ms (Views: 98.5ms | ActiveRecord: 0.7ms)
^[[1m^[[35mNode Load (0.3ms)^[[0m SELECT "nodes".* FROM "nodes" WHERE "nodes"."name" = 'clientnode.dodai.com' LIMIT 1
^[[1m^[[36mProposal Load (0.2ms)^[[0m ^[[1mSELECT "proposals".* FROM "proposals" WHERE "proposals"."id" = 2 LIMIT 1^[[0m
^[[1m^[[35mNode Load (0.2ms)^[[0m SELECT "nodes".* FROM "nodes" WHERE "nodes"."id" = 1 LIMIT 1
^[[1m^[[36mAREL (0.2ms)^[[0m ^[[1mINSERT INTO "logs" ("content", "node_id", "updated_at", "operation", "created_at", "proposal_id") VALUES ('info: Caching catalog for clientnode.dodai.com
info: Applying configuration version ''1346037947''
notice: Finished catalog run in 0.02 seconds
', 1, '2012-08-27 05:56:06.235928', 'install', '2012-08-27 05:56:06.235928', 2)^[[0m
^[[1m^[[35mAREL (0.2ms)^[[0m UPDATE "node_configs" SET "updated_at" = '2012-08-27 05:56:06.362063', "state" = 'installed' WHERE "node_configs"."id" = 2
^[[1m^[[36mSQL (0.1ms)^[[0m ^[[1mSELECT 1 FROM "proposals" WHERE ("proposals"."name" = 'Glance 2') AND ("proposals".id <> 2) LIMIT 1^[[0m
^[[1m^[[35mAREL (0.2ms)^[[0m UPDATE "proposals" SET "updated_at" = '2012-08-27 05:56:06.482153', "state" = 'installed' WHERE "proposals"."id" = 2

Started GET "/proposals" for 10.239.37.23 at Mon Aug 27 01:56:06 -0400 2012
Processing by ProposalsController#index as
^[[1m^[[36mProposal Load (0.5ms)^[[0m ^[[1mSELECT "proposals".* FROM "proposals"^[[0m
^[[1m^[[35mSoftware Load (0.2ms)^[[0m SELECT "softwares".* FROM "softwares" WHERE "softwares"."id" = 4 LIMIT 1
Rendered layouts/_menu.html.erb (2.1ms)
Rendered proposals/index.html.erb within layouts/application (32.6ms)
Completed 200 OK in 119ms (Views: 35.1ms | ActiveRecord: 0.7ms)

Test openstack folsom compute - Ubuntu failed

err: /Stage[main]/Nova_and_quantum_f::Nova_api::Test/File[/var/lib/nova/image_kvm.tgz]: Could not evaluate: Could not retrieve information from environment production source(s) puppet:///modules/nova_and_quantum_f/image_kvm.tgz at /etc/puppet/modules/nova_and_quantum_f/manifests/nova_api/test.pp:12
notice: /Stage[main]/Nova_and_quantum_f::Nova_api::Test/File[/var/lib/nova/test.sh]/ensure: defined content as '{md5}291a1859782899e70b4ca1cbc883dd6e'
notice: /Stage[main]/Nova_and_quantum_f::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 10.0.0.0/24 2>&1]: Dependency File[/var/lib/nova/image_kvm.tgz] has failures: true
warning: /Stage[main]/Nova_and_quantum_f::Nova_api::Test/Exec[/var/lib/nova/test.sh image_kvm.tgz 10.0.0.0/24 2>&1]: Skipping because of failed dependencies
notice: Finished catalog run in 0.16 seconds
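
The first err line means puppet could not serve image_kvm.tgz from the module's file server. A quick check (a sketch, assuming Puppet's standard module layout, where puppet:///modules/<name>/<file> maps to the module's files directory):

ls /etc/puppet/modules/nova_and_quantum_f/files/image_kvm.tgz   # run on the puppet master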
