
Hostout - standardised deployment of buildout based applications with Fabric

Hostout gives you:

  • the ability to configure your Fabric commands from within buildout
  • a framework for integrating different Fabric scripts via setup tools packages
  • an out of the box deployment command for buildout based applications
  • plugins to integrate deployment further such as hostout.supervisor and hostout.cloud

Overview

Hostout is a framework for managing remote buildouts via fabric scripts. It includes many helpful built-in commands to package, deploy and bootstrap a remote server based on your local buildout.

Hostout is built around two ideas:

  1. Sharing the deployment configuration for an application in the same buildout you share with your team of developers, so where and how your application is deployed is automated rather than documented. Deployment then becomes a single command that any member of the team can run.
  2. Sharing fabric scripts via PyPI so we don't have to reinvent ways to deploy or manage hosted applications.

If you are already a user of Fabric and buildout but aren't interested in hostout's built-in ability to deploy, then skip ahead to Integrating Fabric into buildout.

You don't need to learn Fabric to use hostout, but you will need to learn buildout. The good news is that many buildouts and snippets already exist for building Django, Pylons, Pyramid, Plone, Zope, Varnish, Apache, HAProxy or whichever server-side technology you want to deploy.

To Contribute


Hostout deploy

Hostout deploy is a built-in Fabric command that packages your buildout and any development eggs you might have, copies them to the server, prepares the server to run and then runs buildout remotely for you. This makes it simple to deploy your application.

Development buildout

For example, let's say we had the world's simplest WSGI application. You can use paster to create the package. Go to src and type:

../bin/paster create -t basic_package hellowsgi version=0.1 \
    description="testing hostout" long_description="" keywords="" \
    author="" author_email="" url="" license_name="" zip_safe=False

Then edit src/hellowsgi/hellowsgi/__init__.py as follows:

from webob import Request, Response

def main(global_config, **local_conf):
    return MainApplication()

class MainApplication(object):
    """An endpoint"""

    def __call__(self, environ, start_response):
        request = Request(environ)
        response = Response("Powered by collective.hostout!")
        return response(environ, start_response)

Then edit src/hellowsgi/setup.py and update entry_points as follows:

[...]
entry_points="""
      # -*- Entry points: -*-
      [paste.app_factory]
      main = hellowsgi:main
      """,
[...]

We will create a buildout configuration file called base.cfg :

[buildout]
parts = demo pasterini
develop =
  src/hellowsgi

[demo]
recipe=zc.recipe.egg
eggs =
    PasteScript
    webob
    hellowsgi

[pasterini]
recipe = collective.recipe.template
output = parts/demo/paster.ini
port = 8080
input = inline:
    [server:main]
    use = egg:Paste#http
    host = 0.0.0.0
    port = ${:port}

    [pipeline:main]
    pipeline = app

    [app:app]
    use = egg:hellowsgi#main

Once we bootstrap and build this:

$ python bootstrap.py -c base.cfg
$ bin/buildout -c base.cfg

we will have a working WSGI app, which we can run with:

$ bin/paster serve parts/demo/paster.ini
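
To check it is serving, you can hit it from another shell; a quick sketch assuming curl is available and the default port of 8080 from base.cfg:

$ curl http://localhost:8080/
Powered by collective.hostout!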

Production buildout

Next you will create a "production buildout" which extends your base.cfg. This might contain parts to install webservers, databases, caching servers etc.

Our prod.cfg is very simple :

[buildout]
extends = base.cfg
parts += supervisor

[supervisor]
recipe=collective.recipe.supervisor
programs=
  10 demo ${buildout:directory}/bin/paster [serve ${pasterini:output}] ${buildout:directory} true

[pasterini]
port = 80
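
On a server this configuration would typically be started and checked with the scripts generated by collective.recipe.supervisor; a sketch:

$ bin/supervisord
$ bin/supervisorctl status demo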

Deployment buildout

Now create a third buildout file, called buildout.cfg. This will be our development/deployment buildout :

[buildout]
extends = base.cfg
parts += host1

[host1]
recipe = collective.hostout
host = myhost.com
hostos = ubuntu
user = myusername
path = /var/buildout/demo
buildout = prod.cfg
post-commands = bin/supervisord
python-version = 2.6
buildout-group = mygroupname

This buildout part will install a script which will deploy prod.cfg along with hellowsgi to the remote path /var/buildout/demo on our server myhost.com :

$ bin/buildout
Develop: '.../src/hellowsgi'
Uninstalling host1.
Installing demo.
Installing host1.

As part of the buildout process, hostout will automatically save the versions of all the eggs in your development buildout in a file called hostoutversions.cfg and will pin them for you during deployment. This ensures that the production buildout will be running the same software as you have tested locally. Remember to manually version pin any additional eggs you use in your prod.cfg as these will not be pinned for you.
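
For example, a sketch of pinning an extra egg that only appears in prod.cfg (the version number here is purely illustrative):

[buildout]
extends = base.cfg
parts += supervisor
versions = versions

[versions]
# pin eggs that only appear in prod.cfg yourself;
# hostout pins the development buildout's eggs via hostoutversions.cfg
collective.recipe.supervisor = 0.20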

Running hostout deploy for the first time

The bin/hostout command takes three kinds of parameters :

hostout [hostname(s)] [commands] [command arguments]

in our case we will run :

$ bin/hostout host1 deploy

The first thing this command will do is ask for your password and attempt to log in to your server. It will then look for /var/buildout/demo/bin/buildout, and when it doesn't find it, it will automatically run another hostout command called bootstrap.

Bootstrap is further broken down into three commands: bootstrap_users, bootstrap_python and bootstrap_buildout. These will create an additional buildout-user to build and run your application, install basic system packages needed to run buildout, and install buildout into your remote path. It will attempt to detect which version of linux your server is running to find the system python, but if this fails it will attempt to compile python from source. The version of python used will match the major version of python which your development buildout uses.
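
If you prefer, the bootstrap steps can also be run individually using the three sub-commands named above (a sketch; exactly which sub-commands your hostout provides is shown by running bin/hostout host1 with no command):

$ bin/hostout host1 bootstrap_users
$ bin/hostout host1 bootstrap_python
$ bin/hostout host1 bootstrap_buildout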

Deploying and re-deploying

Once hostout bootstrap has ensured a working remote buildout, deployment will continue by running the following commands:

  1. uploadeggs: Any develop eggs are released as eggs and uploaded to the server. These will be uploaded directly into the buildout's buildout-cache/downloads/dist directory which buildout uses to find packages before looking up the package index.

    It's very important that the packages under development work when packaged, i.e. that they can be packaged via python setup.py sdist. A common mistake is to rely on setuptools to automatically detect which files should be included without having the correct setuptools SCM helper installed when using git or hg; e.g. for git, run easy_install setuptools-git. This step will also upload a pinned.cfg which contains the generated version numbers for the packages under development that have been uploaded.

  2. uploadbuildout: The relevant .cfg files and any files/directories in the include parameter are synced to the remote server.
  3. buildout: A final pinned.cfg is uploaded which includes the generated version pins for the development packages plus the versions of all the dependencies of the development buildout on the machine you are deploying from. These pinned versions are recorded during the local buildout process by the hostout recipe in a local hostoutversions.cfg file. Buildout is then run on the remote production buildout.

    If you continue to develop your application you can run hostout deploy each time; it will only upload the eggs that have changed, and buildout will only reinstall the changed parts of the buildout.

In our example above, deployment would look something like this :

$ bin/hostout host1 deploy
running clean
...
creating '...example-0.0.0dev_....egg' and adding '...' to it
...
Hostout: Running command 'predeploy' from 'collective.hostout'
...
Hostout: Running command 'uploadeggs' from 'collective.hostout'
Hostout: Preparing eggs for transport
Hostout: Develop egg src/demo changed. Releasing with hash ...
Hostout: Eggs to transport:
    demo = 0.0.0dev-...
...
Hostout: Running command 'uploadbuildout' from 'collective.hostout'
...
Hostout: Running command 'buildout' from 'collective.hostout'
...
Hostout: Running command 'postdeploy' from 'collective.hostout'
...

Now if you visit myhost.com you will see your web application shared with the world.

Hostout docker

Hostout also integrates with Docker to help build custom Docker images based on a local buildout.

First add your hostout config into your local buildout in order to generate your hostout configuration :

[hostout]
recipe = collective.hostout
eggs =
  collective.hostout[docker]
extends =
  hostout.supervisor
versionsfile=hostoutversions.cfg
include =
hostos=ubuntu


[app]
<=
    hostout
extends =
buildout =
    buildout.cfg
parts =
    instance1
post-commands = ./bin/instance1 fg

[db]
<=
    hostout
extends =
buildout =
    buildout.cfg
parts =
    zeo
post-commands = ./bin/zeo fg

Now you can rerun buildout and then generate your docker image :

$ bin/buildout -c dockerplone_devel.cfg
$ bin/hostout app db docker

This will use the Docker APIs to generate two images, `hostout/app` and `hostout/db`. It works similarly to buildout in that if your buildout doesn't complete you can rerun the docker hostout command and it will continue where it left off. If you used your own Dockerfile you would have to ensure your buildout didn't fail, as Docker would roll back the whole buildout and you would start a new buildout each time you retry.

The image created will use the `post-commands` hostout configuration to start your process. Before startup it will also rerun your buildout in offline mode. This allows your buildout to rewrite itself using environment variables. For example, the following buildout allows a Zope instance to be dynamically reconfigured to connect to the ZEO server using `gocept.recipe.env`:

[env]
recipe = gocept.recipe.env
# set defaults
ZEO_PORT_8100_TCP_ADDR = 0.0.0.0
ZEO_PORT_8100_TCP_PORT = 8100

[instance1]
recipe = plone.recipe.zope2instance
http-address = 0.0.0.0:8080
user=admin:admin
zeo-client = on
zeo-address =  ${env:ZEO_PORT_8100_TCP_ADDR}:${env:ZEO_PORT_8100_TCP_PORT}
shared-blob = off

[zeo]
recipe = plone.recipe.zeoserver
zeo-address = 0.0.0.0:8100
zeo-var = ${buildout:directory}/var
blob-storage = ${zeo:zeo-var}/blobstorage

If we use the following docker-compose.yml, then the link will set an environment variable for the exposed port on the zeo server, which will then override the zope2 instance settings during the startup buildout:

app:
  image: hostout/app
  ports:
   - "8080"
  volumes_from: app_var
  links:
   - db:zeo

db:
  image: hostout/db
  expose:
   - "8100"
  volumes_from: db_var


app_var:
  image: hostout/app # saves space and gets permissions right
  command: /bin/true # don't want hostout command to run
  volumes:
   - /var/buildout/app/var

db_var:
  image: hostout/db # saves space and gets permissions right
  command: /bin/true # don't want zeo to run
  volumes:
   - /var/buildout/db/var
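
With that compose file saved alongside the images, bringing the stack up would look something like this (a sketch assuming docker-compose is installed):

$ docker-compose up -d
$ docker-compose ps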

Other built-in Commands

Hostout comes with a set of helpful commands. You can show this list by not specifying any command at all. The list of commands will vary depending on which fabfiles your hostout references:

$ bin/hostout host1
cmdline is: bin/hostout host1 [host2...] [all] cmd1 [cmd2...] [arg1 arg2...]
Valid commands are:
  bootstrap        : Install python and users needed to run buildout
  bootstrap_python :
  bootstrap_users  : create buildout and the effective user and allow hostout access
  buildout         : Run the buildout on the remote server
  deploy           : predeploy, uploadeggs, uploadbuildout, buildout and then postdeploy
  postdeploy       : Perform any final plugin tasks
  predeploy        : Install buildout and its dependencies if needed. Hookpoint for plugins
  setowners        : Ensure ownership and permissions are correct on buildout and cache
  run              : Execute cmd on remote as login user
  sudo             : Execute cmd on remote as root user
  uploadbuildout   : Upload buildout pinned to local picked versions + uploaded eggs
  uploadeggs       : Any develop eggs are released as eggs and uploaded to the server

The run command is helpful for running quick remote commands as the buildout user on the remote host:

$> bin/hostout host1 run pwd
Hostout: Running command 'run' from collective.hostout
Logging into the following hosts as root:
    127.0.0.1
[127.0.0.1] run: sh -c "cd /var/host1 && pwd"
[127.0.0.1] out: ...
Done.

We can also use our login user and password to run quick sudo commands :

$> bin/hostout host1 sudo cat /etc/hosts
Hostout: Running command 'sudo' from collective.hostout
Logging into the following hosts as root:
    127.0.0.1
[127.0.0.1] run: sh -c "cd /var/host1 && cat /etc/hosts"
[127.0.0.1] out: ...
Done.

Detailed Hostout Options

Basic Options

host

The IP address or hostname of the host to deploy to. By default hostout will connect to port 22 using ssh. You can override the port by using hostname:port.

user

The user hostout will attempt to log in to your host as. Hostout will read the user's ssh config to get a default.

password

The password for the login user. If not given then hostout will ask each time.

identity-file

An ssh identity (key) file for the login user.

extends

Specifies another part which contains defaults for this hostout.

fabfiles

Path to fabric files that contain commands which can then be called from the hostout script. Commands can access hostout options via hostout.options from the fabric environment.
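
For example, a minimal fabfile sketch that reads an option defined in the hostout part; here the standard path option, which is exposed on the Fabric environment just like option1 in the echo example further below (the same values are also reachable via api.env.hostout.options):

from fabric import api
from fabric.api import run

def listbuildout():
    # 'path' is set in the hostout part and exposed on api.env
    run("ls -la %s" % api.env.path)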

Deploy options

buildout

The buildout configuration file you wish to build on the remote host. Note that this doesn't have to be the same .cfg as the one the hostout section is in, but the versions of the eggs will be determined from the buildout that contains the hostout section. Defaults to buildout.cfg.

path

The absolute path on the remote host where the buildout will be created. Defaults to '/var/buildout/%s'%name, where name is the name of the part which defines this host.

pre-commands

A series of shell commands executed as root before the buildout is run. You can use this to shut down your application, or to help prepare the environment for buildout. If these commands fail they will be ignored.

post-commands

A series of shell commands executed as root after the buildout is run. You can use this to start up your application. If these commands fail they will be ignored.

sudo-parts

Buildout parts which will be installed after the main buildout has been run. These will be run as root.

parts

Runs the buildout with a parts value equal to this.

include

Additional configuration files or directories needed to run this buildout.

buildout-cache

If you want to override the default location for the buildout-cache on the host.

python-version

The version of python to install during bootstrapping. (Mandatory.)

hostos

Determines which platform-specific bootstrap_python command is called. For instance, if hostos=redhat then bootstrap_python_redhat will be called, which uses "yum" to install python and other developer tools. This parameter is also used by hostout.cloud to pick which VM to create.

Users and logins

The bootstrap_users command is called as part of the bootstrap process, which runs if no buildout has yet been bootstrapped on the remote server. This command will log in as "user" (who should have sudo rights) and create two additional users plus a group which joins them.

effective-user

This user will own the buildout's var files. This allows the application to write to database files in the var directory but not to any other part of the buildout code.

buildout-user

The user which will own the buildout files. During bootstrap this user will be created and given an ssh key so that hostout can log in and run buildout using this account.

buildout-group

A group which will own the buildout files including the var files. This group is created if needed in the bootstrap_users command. (Mandatory.)

In addition, the private key will be read from the location given by identity-file and used to create a passwordless login for the buildout-user account, by copying the public key into the authorized_keys file of the buildout-user account. If no file exists at identity-file, a DSA private key is created for you in the file ${hostname}_key in the buildout directory.

During a normal deployment all steps are run as the buildout-user, so there is no need to use the sudo-capable user account and therefore no need to supply a password. The exception is if you specify pre-deploy, post-deploy or sudo-parts steps, or have to bootstrap the server; these require the sudo-capable user account.

If you'd like to share the ability to deploy your application with others, one way to do this is to simply check in the private key file specified by identity-file along with your buildout. If you do share deployment, remember to pin your eggs in your buildout so the result is consistent no matter where it is deployed from. One trick you can use to achieve this is to add hostoutversions.cfg to the extends of your buildout and commit hostoutversions.cfg to your source control as well.
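
Putting these options together, a sketch of a hostout part set up for shared deployment with a committed key and pinned versions (all values are illustrative; myhost.com_key follows the ${hostname}_key default described above):

[buildout]
extends = base.cfg hostoutversions.cfg
parts += host1

[host1]
recipe = collective.hostout
host = myhost.com
user = myusername
identity-file = ${buildout:directory}/myhost.com_key
buildout-user = mybuildoutuser
effective-user = mydaemonuser
buildout-group = mygroupname
buildout = prod.cfg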

Integrating Fabric into buildout

Hostout uses fabric files. Fabric is an easy way to write python that calls commands on a host over ssh.

Here is a basic fabfile which will echo two variables on the remote server.

>>> write('fabfile.py',"""
...
... from fabric import api
... from fabric.api import run
...
... def echo(cmdline1):
...     option1 = api.env.option1
...     run("echo '%s %s'" % (option1, cmdline1) )
...
... """)

Using hostout we can predefine some of the fabric scripts parameters as well as install the fabric runner. Each hostout part in your buildout.cfg represents a connection to a server at a given path.

>>> write('buildout.cfg',
... """
... [buildout]
... parts = host1
...
... [host1]
... recipe = collective.hostout
... host = 127.0.0.1:10022
... fabfiles = fabfile.py
... option1 = buildout
... user = root
... password = root
... path = /var/host1
...
... """ )

If you don't include your password you will be prompted for it later.

When we run buildout a special fabric runner will be installed called bin/hostout

>>> print system('bin/buildout -N')
Installing host1.
Generated script '/sample-buildout/bin/hostout'.

>>> print system('bin/hostout')
cmdline is: bin/hostout host1 [host2...] [all] cmd1 [cmd2...] [arg1 arg2...]
Valid hosts are:
    host1

We can run our fabfile by providing the

  • host which refers to the part name in buildout.cfg,
  • command which refers to the method name in the fabfile,
  • any other options we want to pass to the command.

Note: We can run multiple commands on one or more hosts using a single commandline.

In our example

>>> print system('bin/hostout host1 echo "is cool"')
Hostout: Running command 'echo' from 'fabfile.py'
Logging into the following hosts as root:
    127.0.0.1
[127.0.0.1] run: echo 'cd /var/host1 && buildout is cool'
[127.0.0.1] out: ...
Done.

Note that we combined information from our buildout with command-line parameters to determine the exact command sent to our server.

Making a hostout plugin

It can be very helpful to package up our fabfiles for others to use.

Hostout Plugins are eggs with three parts:

  1. Fabric script
  2. A zc.buildout recipe to initialise the parameters of the fabric file commands
  3. Entry points for both the recipe and the fabric scripts

>>> entry_points = {'zc.buildout': ['default = hostout.myplugin:Recipe',],
...                 'fabric': ['fabfile = hostout.myplugin.fabfile']
...                },
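
A minimal recipe sketch to go with those entry points could live in hostout/myplugin/__init__.py. This is only a sketch of a plain zc.buildout recipe that sets a default for a hypothetical param1 option, not collective.hostout's actual base recipe:

class Recipe(object):
    """Initialise parameters for the plugin's fabfile commands."""

    def __init__(self, buildout, name, options):
        self.buildout, self.name, self.options = buildout, name, options
        # default for the hypothetical param1 option used below
        if 'param1' not in options:
            options['param1'] = 'blah'

    def install(self):
        # nothing to install on disk; hostout merges these options
        return ()

    update = install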

Once packaged and released others can add your plugin to their hostout e.g.

>>> write('buildout.cfg',
... """
... [buildout]
... parts = host1
...
... [host1]
... recipe = collective.hostout
... extends = hostout.myplugin
... param1 = blah
... """ )

>>> print system('bin/buildout')

>>> print system('bin/hostout host1')
cmdline is: bin/hostout host1 [host2...] [all] cmd1 [cmd2...] [arg1 arg2...]
Valid commands are:
...
  mycommand : example of command from hostout.myplugin

Your fabfile can access parameters passed on the command line by defining them in your function; e.g.:

def mycommand(cmdline_param1, cmdline_param2):
    pass
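
You could then invoke it like this (the command name and arguments are hypothetical):

$ bin/hostout host1 mycommand value1 value2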

Your fabfile commands can override any of the standard hostout commands. For instance, if you wish your plugin to hook into the predeploy process then just add a predeploy function to your fabfile.py:

def predeploy():
    api.env.superfun()

It is important when overriding to call the "superfun" function so that any overridden functions are also called.

You can also call any other hostout functions from your command :

def mycommand():
    api.env.hostout.deploy()

The options set in the buildout part are available via the Fabric api.env variable and also via api.env.hostout.options.

Using fabric plugins

You can use commands others have made via the extends option. Name a buildout recipe egg in the extends option and buildout will download and merge any fabfiles and other configuration options from that recipe into your current hostout configuration. The following are examples of built-in plugins; others are available on PyPI.

hostout.cloud

Will create VM instances automatically for you on many popular hosting services such as Amazon, Rackspace and Slicehost

hostout.supervisor

Will stop a supervisor before buildout is run and restart it afterwards. It provides some short commands to quickly manage your applications from your hostout commandline.
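
For example, a sketch of extending the deployment part from earlier with hostout.supervisor (as already shown in the Docker example above):

[host1]
recipe = collective.hostout
host = myhost.com
buildout = prod.cfg
extends =
    hostout.supervisor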

Why hostout

Managing multiple environments can be a real pain and a barrier to development. Hostout puts all of the settings for all of your environments in an easy-to-manage format.

Compared to

SilverLining

Hostout allows you to deploy many different kinds of applications instead of just wsgi-based python apps. Buildout lets you define the installation of almost any kind of application.

Puppet

TODO

mr.awesome

TODO

Fabric

TODO

Egg Proxies

TODO

Using hostout with a python2.4 buildout

Hostout itself requires Python 2.6. However, it is possible to use hostout with a buildout that requires Python 2.4 by using buildout's support for different python interpreters.

>>> write('buildout.cfg',
... """
... [buildout]
... parts = host1
...
... [host1]
... recipe = collective.hostout
... host = 127.0.0.1:10022
... python = python26
...
... [python26]
... executable = /path/to/your/python2.6/binary
...
... """ )

or alternatively if you don't want to use your local python you can get buildout to build it for you.

>>> write('buildout.cfg',
... """
... [buildout]
... parts = host1
...
... [host1]
... recipe = collective.hostout
... host = 127.0.0.1:10022
... python = python26
...
... [python26]
... recipe = zc.recipe.cmmi
... url = http://www.python.org/ftp/python/2.6.1/Python-2.6.1.tgz
... executable = ${buildout:directory}/parts/python/bin/python2.6
... extra_options=
...     --enable-unicode=ucs4
...     --with-threads
...     --with-readline
...
... """ )

Credits

Dylan Jay ( software at pretaweb dot com )

collective.hostout's People

Contributors

djay, duffyd, fulv, instification, ivanteoh, jean, pbauer, pigeonflight, simahawk, tomster, vincic


collective.hostout's Issues

Deployed instance doesn't start up

./bin/hostout deploy plone completes successfully, but starting up the
deployed clients fail. Traceback below. Starting up ./bin/zeoserver start
is successful.

vagrant@natty:~/plone$ ./bin/client1 fg
Traceback (most recent call last):
  File "./bin/client1", line 252, in <module>
    + sys.argv[1:])
  File "/home/vagrant/plone/buildout-cache/vagrant/eggs/plone.recipe.zope2instance-3.6-py2.6.egg/plone/recipe/zope2instance/ctl.py", line 321, in main
    options.realize(args, doc=__doc__)
  File "/home/vagrant/plone/parts/zope2/lib/python/Zope2/Startup/zopectl.py", line 95, in realize
    ZDOptions.realize(self, *args, **kw)
  File "/home/vagrant/plone/parts/zope2/lib/python/zdaemon/zdoptions.py", line 273, in realize
    self.load_schema()
  File "/home/vagrant/plone/parts/zope2/lib/python/zdaemon/zdoptions.py", line 321, in load_schema
    self.schema = ZConfig.loadSchema(self.schemafile)
  File "/home/vagrant/plone/parts/zope2/lib/python/ZConfig/loader.py", line 31, in loadSchema
    return SchemaLoader().loadURL(url)
  File "/home/vagrant/plone/parts/zope2/lib/python/ZConfig/loader.py", line 65, in loadURL
    return self.loadResource(r)
  File "/home/vagrant/plone/parts/zope2/lib/python/ZConfig/loader.py", line 159, in loadResource
    schema = ZConfig.schema.parseResource(resource, self)
  File "/home/vagrant/plone/parts/zope2/lib/python/ZConfig/schema.py", line 27, in parseResource
    xml.sax.parse(resource.file, parser)
  File "/usr/lib/python2.6/xml/sax/__init__.py", line 33, in parse
    parser.parse(source)
  File "/usr/lib/python2.6/xml/sax/expatreader.py", line 107, in parse
    xmlreader.IncrementalParser.parse(self, source)
  File "/usr/lib/python2.6/xml/sax/xmlreader.py", line 123, in parse
    self.feed(buffer) 
  File "/usr/lib/python2.6/xml/sax/expatreader.py", line 207, in feed
    self._parser.Parse(data, isFinal)
  File "/usr/lib/python2.6/xml/sax/expatreader.py", line 301, in start_element
    self._cont_handler.startElement(name, AttributesImpl(attrs))
  File "/home/vagrant/plone/parts/zope2/lib/python/ZConfig/schema.py", line 99, in startElement
    getattr(self, "start_" + name)(attrs)
  File "/home/vagrant/plone/parts/zope2/lib/python/ZConfig/schema.py", line 475, in start_schema
    keytype, valuetype, datatype = self.get_sect_typeinfo(attrs)
  File "/home/vagrant/plone/parts/zope2/lib/python/ZConfig/schema.py", line 201, in get_sect_typeinfo 
    datatype = self.get_datatype(attrs, "datatype", "null", base)
  File "/home/vagrant/plone/parts/zope2/lib/python/ZConfig/schema.py", line 194, in get_datatype
    return self._registry.get(dtname)
  File "/home/vagrant/plone/parts/zope2/lib/python/ZConfig/datatypes.py", line 398, in get
    t = self.search(name)
  File "/home/vagrant/plone/parts/zope2/lib/python/ZConfig/datatypes.py", line 423, in search
    package = __import__(n, g, g, component)
  File "/home/vagrant/plone/parts/zope2/lib/python/Zope2/Startup/datatypes.py", line 21, in <module>  
    import OFS.Uninstalled
  File "/home/vagrant/plone/parts/zope2/lib/python/OFS/Uninstalled.py", line 16, in <module>
    import  SimpleItem, Globals, Acquisition
  File "/home/vagrant/plone/parts/zope2/lib/python/OFS/SimpleItem.py", line 26, in <module>
    import AccessControl.Role, AccessControl.Owned, App.Common
  File "/home/vagrant/plone/parts/zope2/lib/python/AccessControl/__init__.py", line 17, in <module>   
    from Implementation import setImplementation
  File "/home/vagrant/plone/parts/zope2/lib/python/AccessControl/Implementation.py", line 98, in <module>
    setImplementation("C")
  File "/home/vagrant/plone/parts/zope2/lib/python/AccessControl/Implementation.py", line 51, in setImplementation
    from AccessControl import ImplC as impl
  File "/home/vagrant/plone/parts/zope2/lib/python/AccessControl/ImplC.py", line 18, in <module>
    from cAccessControl import rolesForPermissionOn, \
  File "/home/vagrant/plone/parts/zope2/lib/python/AccessControl/SimpleObjectPolicies.py", line 82, in <module>
    from DocumentTemplate.DT_Util import TemplateDict
  File "/home/vagrant/plone/parts/zope2/lib/python/DocumentTemplate/__init__.py", line 21, in <module>
    from DocumentTemplate import String, File, HTML, HTMLDefault, HTMLFile
  File "/home/vagrant/plone/parts/zope2/lib/python/DocumentTemplate/DocumentTemplate.py", line 112, in <module>
    from DT_String import String, File
  File "/home/vagrant/plone/parts/zope2/lib/python/DocumentTemplate/DT_String.py", line 19, in <module>
    from DT_Util import ParseError, InstanceDict, TemplateDict, render_blocks, str
  File "/home/vagrant/plone/parts/zope2/lib/python/DocumentTemplate/DT_Util.py", line 19, in <module> 
    from html_quote import html_quote, ustr # for import by other modules, dont remove!
  File "/home/vagrant/plone/parts/zope2/lib/python/DocumentTemplate/html_quote.py", line 4, in <module>
    from ustr import ustr
  File "/home/vagrant/plone/parts/zope2/lib/python/DocumentTemplate/ustr.py", line 18, in <module>
    nasty_exception_str = Exception.__str__.im_func
AttributeError: 'wrapper_descriptor' object has no attribute 'im_func'

Deploy to Ubuntu 10.10 fails

I'm not sure if this is the right place to ask for help, but here I go:

I'm trying to use collective.hostout to deploy to an Ubuntu 10.10 machine. The deploy step fails because some part of hostout wants to install python-elementtree and python-celementtree, which no longer exist in Ubuntu 10.10.

These two packages exist instead:
  • python2.6-celementtree
  • python2.6-elementtree

So, how do we deploy to Ubuntu Server 10.10? Is there a way to patch something?

Still problems while trying to deploy to ubuntu server

The README file says that all I need to do the first time is run bin/hostout sitename bootstrap. All I get is this:

$ bin/hostout mysite deploy
/home/tzicatl/Aplicaciones/Buildout/egss/pycrypto-2.3-py2.6-linux-x86_64.egg/Crypto/Util/randpool.py:40: RandomPool_DeprecationWarning: This application uses RandomPool, which is BROKEN in older releases.  See http://www.pycrypto.org/randpool-broken
RandomPool_DeprecationWarning)
Hostout: Running command 'deploy' from 'collective.hostout.fabfile'
Hostout: Running command 'predeploy' from 'hostout.supervisor.fabfile'
[[email protected]:22] sudo: sudo -S -p 'sudo password:'  /bin/bash -l -c "/opt/mysite_libros/bin/supervisorctl shutdown || echo 'Failed to shutdown'"
[[email protected]:22] err: /bin/bash: /opt/mysite_libros/bin/supervisorctl: No such file or directory
[[email protected]:22] out: Failed to shutdown
Hostout: Running command 'uploadeggs' from 'collective.hostout.fabfile'
[[email protected]:22] run: /bin/bash -l -c "ls /opt/buildout-cache/downloads/dist"
[[email protected]:22] err: ls: cannot access /opt/buildout-cache/downloads/dist: No such file or directory

Fatal error: run() encountered an error (return code 2) while executing 'ls /opt/buildout-cache/downloads/dist'

None


Aborting.

After that failed attempt I tried the following commands:

bin/hostout mysite bootstrap_buildout_ubuntu
bin/hostout mysite bootstrap_python_ubuntu
bin/hostout mysite bootstrap_users
bin/hostout mysite buildout
bin/hostout mysite deploy

The buildout and deploy commands fail.
For buildout the errors are:

$ bin/hostout mysite buildout
/home/tzicatl/Aplicaciones/Buildout/egss/pycrypto-2.3-py2.6-linux-x86_64.egg/Crypto/Util/randpool.py:40: RandomPool_DeprecationWarning: This application uses RandomPool, which is BROKEN in older releases.  See http://www.pycrypto.org/randpool-broken
  RandomPool_DeprecationWarning)
Hostout: Running command 'buildout' from 'collective.hostout.fabfile'
[[email protected]:22] put: /tmp/tmpUkJmAq -> /opt/mysite_libros/mysite-GqX47IgVCyxDrafXPyC2ew.cfg

Fatal error: put() encountered an exception while uploading '/tmp/tmpUkJmAq'

Traceback (most recent call last):
  File "/home/tzicatl/Aplicaciones/Buildout/egss/Fabric-0.9.4-py2.6.egg/fabric/operations.py", line 303, in put
    rattrs = ftp.put(lpath, _remote_path)
  File "/home/tzicatl/Aplicaciones/Buildout/egss/paramiko-1.7.6-py2.6.egg/paramiko/sftp_client.py", line 561, in put
    fr = self.file(remotepath, 'wb')
  File "/home/tzicatl/Aplicaciones/Buildout/egss/paramiko-1.7.6-py2.6.egg/paramiko/sftp_client.py", line 245, in open
    t, msg = self._request(CMD_OPEN, filename, imode, attrblock)
  File "/home/tzicatl/Aplicaciones/Buildout/egss/paramiko-1.7.6-py2.6.egg/paramiko/sftp_client.py", line 628, in _request
    return self._read_response(num)
  File "/home/tzicatl/Aplicaciones/Buildout/egss/paramiko-1.7.6-py2.6.egg/paramiko/sftp_client.py", line 675, in _read_response
    self._convert_status(msg)
  File "/home/tzicatl/Aplicaciones/Buildout/egss/paramiko-1.7.6-py2.6.egg/paramiko/sftp_client.py", line 701, in _convert_status
    raise IOError(errno.ENOENT, text)
IOError: [Errno 2] No such file


Aborting.

And for deploy the errors are the same as the first run.

I'm running collective.hostout from this git repo. What am I doing wrong?

bootstrap not called automatically before deploy

From the docs:

    The first thing this command will do, is to ask you your password and attempt to login in to your server. It will then look for /var/buildout/demo/bin/buildout and when it doesn't find it it will automatically run another hostout command called bootstrap.

In fact it is not so! Why?

fabfile can't find buildout-cache

./bin/hostout varnish deploy dies as follows:

[[email protected]:2222] out: ls: cannot access /home/vagrant/varnish/buildout-cache/vagrant/downloads/dist: No such file or directory

On the host I see:

vagrant@natty:~$ ls varnish/buildout-cache/
downloads  eggs  extends

It looks as if it first creates varnish/buildout-cache/downloads and subsequently looks for varnish/buildout-cache/vagrant/downloads/.

This is using hostout trunk as of 36907f1c3d843aa9c4c247f6670f14783853b603.

Integrate vagrant into hostout.cloud

Use vagrant to make it easy for users to locally test their deployments. Would work the same as any other cloud provider. Could be used as part of test suite.

`bootstrap_python` tries to `cd` into a nonexistent directory

The line runescalatable('mkdir -p %s' % prefix) (source link) generates a bash command like:

/bin/bash -l -c "cd /path/to/instance/ && PATH=\$PATH:\"/path/to/instance/\" export HTTP_PROXY=\"http://127.0.0.1:7001\" &&  mkdir -p /path/to/instance/python"

This fails because at this point /path/to/instance doesn't exist yet.

Become more Fabric like

  1. Hostout doesn't need to be installed by buildout to do its job.
  2. If hostout didn't hack Fabric and you could use hostout with Fabric directly, then it might be more understandable what hostout does and lead to more innovation with deployment.
  3. It might also be good if hostout could act as a single-file solution to bootstrap the local machine as well as remote machines.

For these reasons I'm considering refactoring hostout. Here are some ideas on how to do this

  • perhaps make hostout a Fabric namespace, so you make a local project fabfile.py. Use "import hostout; api.env.hostoutconfig='hostout.cfg'". Then do $ fab hostout.deploy.
  • a place to read in a hostout config file before a command is run and make that info available in the env. This could be done via a "with hostoutenv():" type statement. Maybe a decorator could work too. It would need to set the host, user etc for the task.
  • a way to specify on the commandline which buildout to deploy to. Fabric uses hosts and roles. whereas buildout uses a buildout which is a host/path/user. We could use symbolic hostnames that map to a host:path or perhaps use roles? Roles would mean a commandline like "$ fab -R app01prd,web01dev hostout.deploy". The hostoutenv would then turn that into the correct host, path, username etc settings.
  • a replacement for how the plugin system works. At the moment the hostout runner will load several fabfiles and combine the command lists into one, with overridden commands being called via a special getattr from the higher-level plugins. Perhaps a simpler system is needed. One issue with the current system is that the order of execution is determined by the order of plugins specified in buildout. Perhaps a more explicit step to register hooks into bootstrap, pre/post-deploy etc. is needed.
  • refactor the buildout recipe so it does the above. Create a hostout cfg file, a base fabfile.py and install bin/fab. Potentially it could also install its own bin/hostout to be backwards compatible.

Intercept `cmmi` requests

To avoid re-downloading distributions from the internet during deployment, intercept cmmi requests and return the cached copy from the source machine.

Implementation:

  • hostout already provides an http_proxy to proxy requests via the source machine.
  • Inside http_proxy, check the urls requested,
  • then use buildout to grab the file via its cache and return it.

Don't always apt-get update

Is there a way to check if we're up to date? api.sudo('apt-get update') takes quite a while on a slow link.

Compiled python lacks zlib

I had to edit Python-2.4/Modules/Setup before zlib support was finally compiled successfully:

--- Python-2.4/Modules/Setup    2012-02-28 09:11:10.313091762 +0000
+++ Python-2.4-with-zlib/Modules/Setup  2012-02-28 08:52:28.121091224 +0000
@@ -443,6 +443,7 @@
 # This require zlib 1.1.3 (or later).
 # See http://www.gzip.org/zlib/
 #zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz
+zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz

Use https://github.com/collective/buildout.python to do the compilation instead?

ImportError during `bootstrap_python`

When Python packages are not available, we try to build it.

Traceback follows:

[[email protected]:2222] run: /bin/bash -l -c "cd /home/vagrant/plone/ && PATH=\$PATH:\"/home/vagrant/plone/\" export HTTP_PROXY=\"http://127.0.0.1:7001\" &&  mkdir -p /home/vagrant/plone/python"
Traceback (most recent call last):
  File "./bin/hostout", line 18, in <module>
    collective.hostout.hostout.main('/home/user/vagrant/dev/pims-dev/parts/hostout/hostout.cfg',sys.argv[1:])
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/hostout.py", line 725, in main
    res = hostout.runcommand(cmd, *cmdargs)
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/hostout.py", line 450, in runcommand
    res = superfun(funcs, *cmdargs, **vargs)
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/hostout.py", line 445, in superfun
    res = func(*cmdargs, **vargs)
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/fabfile.py", line 304, in bootstrap
    cmd()
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/hostout.py", line 460, in run
    return self.runcommand(name, *args, **vargs)
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/hostout.py", line 450, in runcommand
    res = superfun(funcs, *cmdargs, **vargs)
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/hostout.py", line 445, in superfun
    res = func(*cmdargs, **vargs)
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/fabfile.py", line 635, in bootstrap_python_ubuntu
    hostout.bootstrap_python()
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/hostout.py", line 460, in run
    return self.runcommand(name, *args, **vargs)
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/hostout.py", line 450, in runcommand
    res = superfun(funcs, *cmdargs, **vargs)
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/hostout.py", line 445, in superfun
    res = func(*cmdargs, **vargs)
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/fabfile.py", line 588, in bootstrap_python
    api.run("rm -rf /tmp/Python-%(version)s"%d)
  File "/usr/lib/python2.6/contextlib.py", line 34, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/user/.buildout/eggs/Fabric-1.3.4-py2.6.egg/fabric/context_managers.py", line 96, in _setenv
    yield
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/fabfile.py", line 579, in bootstrap_python
    get_url('http://python.org/ftp/python/%(version)s/Python-%(version)s.tgz'%d)   
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/fabfile.py", line 894, in get_url
    proxy = api.env.hostout.socks_proxy
  File "/home/user/vagrant/dev/pims-dev/src/collective.hostout/collective/hostout/hostout.py", line 325, in socks_proxy
    self.socks_server = SocksProxy(transport, ('127.0.0.1', 7000))
NameError: global name 'SocksProxy' is not defined

In fabfile.py, bootstrap_users() uses fabric.contrib.files.append() incorrectly

I have to confess, I might have missed something since I don't actually use this myself, but I stole the code for something else and got odd results.

def bootstrap_users():
    """ create users if needed """
    ...
        append(key, '~%s/.ssh/authorized_keys' % owner, use_sudo=True)

The signature of the function is def append(filename, text, use_sudo=False, partial=True, escape=True) but you're appending the text to the file, instead of vice-versa, so my usage of the code resulted in:

[otndc@kil-otndev-1:22000] run: /bin/bash -l -c "PATH=\$PATH:\"/appl/Plone\" echo '~otndc/.ssh/authorized_keys' >> ssh-rsa ...== hostout@hostout"

Somewhere, I guess, I have a file named 'ssh-rsa'

Still too much sudo

Why is a hostout deployment I've used for months with 1.0a3 now running "bootstrap_users" when I try to use 1.0a5? How can I stop it?
I don't have or need sudo access, my users exist, I just need to get on with the "deploy".

Is this project active?

We are looking for a tool to use in combination with our buildout recipes to deploy our python apps. Is this project active? Is anybody using it? Any links I could review about user experiences?

Modifying deployment configs per-host

Under 1.0a3, I created a hostout configuration for deploying to 8 hosts (actually, most are on the same physical machine, but they don't have to be), which involved a buildout.cfg:

...
[hostout]
host   = xxx.ca
user   = plone
path   = /home/plone4/plone-${:_buildout_section_name_}
parts  =  
    filestorage
    zeoserver
    instance
    zopepy
    crontab_zeopack

buildout = deploy.cfg
pre-commands  = /home/plone4/bin/supervisorctl stop  ${:_buildout_section_name_}-plone     
post-commands = /home/plone4/bin/supervisorctl start ${:_buildout_section_name_}-plone     
buildout-cache = /home/plone4
effective-user = plone
buildout-user  = plone
user           = derek
identity-file  = /home/derek/.ssh/id_rsa

[controller]
<=hostout
recipe = collective.hostout
parts  = supervisor
extends= pointerstop.recipe.hostout 
buildout = 
path   = /home/plone4
pre-commands  = 
post-commands = 
include = 
    ApacheRewrite.conf

[cmb]
recipe = collective.hostout
<= hostout
zeo-storage = CMB
...

and for each of the "hosts" (e.g., 'cmb' above) I could add parts to the corresponding .cfg file (cmb.cfg):

[buildout]
newest = true
eggs-directory = /home/plone4/eggs
versions = versions
develop = 
parts = filestorage zeoserver instance zopepy
extends = hostoutversions.cfg cmb_base.cfg
download-cache = /home/plone4/downloads

[versions]
...

[filestorage]
parts = cmb
recipe = collective.recipe.filestorage

[filestorage_cmb]
zeo-storage = CMB

[instance]
port-base = 0

[zeoserver]
zeo-address = 12005

Here, the [buildout] and [versions] sections are controlled by the collective.hostout configuration, but the remainder are my specific configurations for that particular host. In fact, all of these (except port-base in [instance]) are updated by the pointerstop.recipe.hostout plugin in the [controller] part. Additionally, the plugin writes the ApacheRewrite.conf file (which puts the appropriate proxy rules for each host into the Apache configuration), and the supervisord config file allowing one supervisord daemon to monitor all the Plones. The "port-base" setting is what determines the variant part of what gets written to those two config files.

In 1.0a5, you no longer write "cmb.cfg". Instead, you write a new cmb-VERSION.cfg every time, and this has none of my configuration.

The simplest solution to get back my previous functionality is to make "cmb-VERSION.cfg" always default to the contents of "cmb.cfg" (which of course may not exist, and is therefore the same as using an empty file), but if there's a way to have different configs deployed for each host without further modification of collective.hostout, I'd be even happier.

Deploy fails to guess ubuntu

aclark@Alex-Clarks-MacBook-Pro:~/Developer/CLIENT/ > bin/hostout host deploy
Hostout: Running command 'deploy' from 'collective.hostout.fabfile'
Hostout: Running command 'predeploy' from 'collective.hostout.fabfile'
Hostout: Running command 'bootstrap' from 'collective.hostout.fabfile'
Hostout: Running command 'detecthostos' from 'collective.hostout.fabfile'
[[email protected]:22] run: /bin/bash -l -c "PATH=$PATH:"/srv/plone" ([ -e /etc/SuSE-release ] && echo SuSE) || ([ -e /etc/redhat-release ] && echo redhat) || ([ -e /etc/fedora-release ] && echo fedora) || (lsb_release -is) || ([ -e /etc/slackware-version ] && echo slackware)"
[[email protected]:22] out: /bin/bash: -c: line 0: syntax error near unexpected token `('
[[email protected]:22] out: /bin/bash: -c: line 0: `PATH=$PATH:"/srv/plone" ([ -e /etc/SuSE-release ] && echo SuSE) || ([ -e /etc/redhat-release ] && echo redhat) || ([ -e /etc/fedora-release ] && echo fedora) || (lsb_release -is) || ([ -e /etc/slackware...
[[email protected]:22] out:
[[email protected]:22] out:

Fatal error: run() encountered an error (return code 1) while executing '([ -e /etc/SuSE-release ] && echo SuSE) || ([ -e /etc/redhat-release ] && echo redhat) || ([ -e /etc/fedora-release ] && echo fedora) || (lsb_release -is) || ([ -e /etc/slackware-version ] && echo slackware)'

None

Aborting.

Exception in thread Thread-1 (most likely raised during interpreter shutdown)

out: Generated script '/home/user/test/bin/buildout'.

Return value: None
Exception in thread Thread-1 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
File "/usr/local/lib/python2.7/dist-packages/ssh-1.7.14-py2.7.egg/ssh/transport.py", line 1602, in run
<type 'exceptions.AttributeError'>: 'NoneType' object has no attribute 'error'

enable use direct from fabric

rename package to hostout.
enable use by creating a Fabfile, installing fabric and then running commands such as "fab -Hmyhost hostout.deploy".
Hostout configuration would be via a dict in Fabfile or by loading a hostout.cfg in Fabfile.

Hostout does things as root when it shouldn't

I've just been fiddling with bin/hostout run some_commands

And it works great!

But I notice it seems to have run itself as root when there is no need ; and this seems to upset the delicate non-root balance I had in my staging deployment :-)

Is this a bug or do I just need to configure differently?

Thanks for all the great work so far with hostout!

`runescalatable` loses our place

During bootstrap_python, hostout fails. The log at this point:

Hostout: Running command 'sudo' from 'collective.hostout.fabfile'
[[email protected]:2222] sudo: sudo -S -p 'sudo password:'  /bin/bash -l -c "cd /path/to/instance/ && PATH=\$PATH:\"/path/to/instance/\" make altinstall"
[[email protected]:2222] out: make: *** No rule to make target `altinstall'.  Stop.

This is because the correct place to run make is /tmp/Python-X.Y/
The error is triggered by the runescalatable command at (source link).

Besides that, I think it should be just make install, as we want the created ./bin/python to be the one we compiled.

Current Ubuntu fails to compile Python: make: *** No rule to make target `altinstall'.

collective/hostout/fabfile.py has:

        api.run('./configure BASECFLAGS=-U_FORTIFY_SOURCE --prefix=%(prefix)s  --enable-unicode=ucs4 --with-threads --with-readline --with-dbm --with-zlib --with-ssl --with-bz2 %(extra_args)s' % locals())
        api.run('make clean')
        api.run('make')
        runescalatable('make altinstall')

But it fails at the last step:

  make: *** No rule to make target `altinstall'.  Stop.

What does the altinstall target do? Can it be changed to just install?

Allow for a dry-run

It would be useful to have a dry-run option that just tells you what commands are about to be run on the remote machine.
In order to see what commands are actually run, I now have to dig into various packages.

ssh key unsuccessfully transferred

The ssh key for the root user is successfully transferred, but for the buildout-user the authorized_keys file is empty. Each time I bootstrap a VPS I have to ssh in and copy /root/.ssh/authorized_keys to ~buildoutuser/.ssh/authorized_keys

ssh key copied to authorized_keys many times

fabric.contrib.files.contains does not work properly when checking for the existence of the ssh key in ~/.ssh/authorized_keys

Please look at the bug report fabric/fabric#728

Fabric uses quotes to build the egrep command. egrep does not properly process a relative path with a tilde:
"~myuser/.ssh/authorized_keys"

Because of this, the ssh key is copied to authorized_keys many times.

Can you build the path another way?

Fix assumption that packagename==filename

Somewhere hostout is making the assumption that a package will be in a directory or zipfile with a name that corresponds to its package name as defined in setup.py

This results in a package where these don't match not being found during deployment.

Fatal Error running hostout deploy on FreeBSD

Remote hostout deploy:

[[email protected]:22] out: *************** /PICKED VERSIONS ***************


Fatal error: run() received nonzero return code 1 while executing!

Requested: bin/buildout -c kaeru-jail-hlEgw-q8m8gukVH-iGVpJg.cfg 
Executed: /usr/local/bin/bash -c "cd /var/plone/cisindia-devel/ && PATH=\$PATH:\"/var/plone/cisindia-devel/\" bin/buildout -c kaeru-jail-hlEgw-q8m8gukVH-iGVpJg.cfg "

None


Aborting.

Running exactly the same command on the server with the same user yields no errors.

detecthostos fails (1.0a5)

Hostout: Running command 'detecthostos' from 'collective.hostout.fabfile'
[[email protected]:22] run: /bin/bash -l -c "PATH=\$PATH:\"/home/plone4/plone-cmb\" ([ -e /etc/SuSE-release ] && echo SuSE) || ([ -e /etc/redhat-release ] && echo redhat) || ([ -e /etc/fedora-release ] && echo fedora) || (lsb_release -is) || ([ -e /etc/slackware-version ] && echo slackware)"
[[email protected]:22] out: /bin/bash: -c: line 0: syntax error near unexpected token `('
[[email protected]:22] out: /bin/bash: -c: line 0: `PATH=$PATH:"/home/plone4/plone-cmb" ([ -e /etc/SuSE-release ] && echo SuSE) || ([ -e /et[[email protected]:22] out: /etc/fedora-release ] && echo fedora) || (lsb_release -is) || ([ -e /etc/slackware-version ] && echo slackware)'
[[email protected]:22] out: 

Fatal error: run() encountered an error (return code 2) while executing '([ -e /etc/SuSE-release ] && echo SuSE) || ([ -e /etc/redhat-release ] && echo redhat) || ([ -e /etc/fedora-release ] && echo fedora) || (lsb_release -is) || ([ -e /etc/slackware-version ] && echo slackware)'

Since this fails equally on my CentOS system and my Ubuntu system, I'm suspecting that you're actually setting the "shell" parameter mentioned in the changelog for 1.0a4, but it's undocumented and I don't know what to put there.

If I run the command directly on the deployment system, and remove the PATH setting (PATH=$PATH:"/home/plone4/plone-cmb"), it works.

Traceback on python-version

With no python-version set, I get:

aclark@Alex-Clarks-MacBook-Pro:~/Developer// > bin/hostout host bootstrap
Hostout: Running command 'bootstrap' from 'collective.hostout.fabfile'
Traceback (most recent call last):
File "bin/hostout", line 18, in <module>
collective.hostout.hostout.main('/Users/aclark/Developer//parts/hostout/hostout.cfg',sys.argv[1:])
File "/Users/aclark/Downloads/eggs-directory/collective.hostout-1.0a5-py2.6.egg/collective/hostout/hostout.py", line 652, in main
res = hostout.runcommand(cmd, *cmdargs)
File "/Users/aclark/Downloads/eggs-directory/collective.hostout-1.0a5-py2.6.egg/collective/hostout/hostout.py", line 384, in runcommand
res = superfun(funcs, *cmdargs, **vargs)
File "/Users/aclark/Downloads/eggs-directory/collective.hostout-1.0a5-py2.6.egg/collective/hostout/hostout.py", line 379, in superfun
res = func(*cmdargs, **vargs)
File "/Users/aclark/Downloads/eggs-directory/collective.hostout-1.0a5-py2.6.egg/collective/hostout/fabfile.py", line 223, in bootstrap
version = api.env['python-version']
KeyError: 'python-version'

Integrate `collective.eggproxy`

collective.eggproxy is a good fit for hostout. As long as the deployment buildout has executed successfully on the source machine, the target machine does not need to re-fetch any eggs caught by the proxy.

To integrate it,

  • Allow the user to optionally specify an existing eggproxy and/or eggs_directory.
  • If we don't have an eggproxy, build one in ./${eggproxy-dir}/.
  • Unless we're using an existing proxy, start the eggproxy during ./bin/hostout .... deploy.
  • Create a tunnel from a port on the target to the eggproxy port.
  • Set index and allow-hosts to direct downloads to this port.
  • Unless we're using an existing proxy, stop the eggproxy after ./bin/hostout .... deploy.

This does not handle downloads via zc.recipe.cmmi, that's for another issue.

deploy can't find remote 'zc.buildout' package

Hi,
I'm testing c.hostout using the example of the minimal WSGI app (which works locally). I'm doing this in a dedicated virtualenv.

When I run deploy it fails, complaining that pkg_resources.Requirement.parse('zc.buildout') on the remote machine (a VirtualBox VM) returns None

Here's the full traceback:

bin/hostout host1 deploy

Hostout: Running command 'deploy' from 'collective.hostout.fabfile'
Hostout: Running command 'predeploy' from 'collective.hostout.fabfile'
[[email protected]:22] run: /bin/bash -l -c "PATH=$PATH:"/home/simahawk/hostout_demo" [ -e /home/simahawk/hostout_demo/bin/buildout ]"
[[email protected]:22] Login password:
Hostout: Running command 'precommands' from 'collective.hostout.fabfile'
Hostout: Running command 'uploadeggs' from 'collective.hostout.fabfile'
[[email protected]:22] run: /bin/bash -l -c "PATH=$PATH:"/home/simahawk/hostout_demo" ls /home/simahawk/buildout-cache/downloads/dist"
[[email protected]:22] out: deploy_LUVN1OSK7xBWNjIHKZXF6g.tgz hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA.zip

Hostout: Preparing eggs for transport
Hostout: Develop egg /home/simahawk/dev/hostout/example/src/hellowsgi changed. Releasing with hash 8MiseM54u-pckYopRbj5PA
running clean
running egg_info
writing hellowsgi.egg-info/PKG-INFO
writing top-level names to hellowsgi.egg-info/top_level.txt
writing dependency_links to hellowsgi.egg-info/dependency_links.txt
writing entry points to hellowsgi.egg-info/entry_points.txt
reading manifest file 'hellowsgi.egg-info/SOURCES.txt'
writing manifest file 'hellowsgi.egg-info/SOURCES.txt'
running sdist
warning: sdist: standard file not found: should have one of README, README.txt
warning: sdist: missing required meta-data: url
warning: sdist: missing meta-data: either (author and author_email) or (maintainer and maintainer_email) must be supplied
creating hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA
creating hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi
creating hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info
making hard links in hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA...
hard linking setup.cfg -> hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA
hard linking setup.py -> hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA
hard linking hellowsgi/init.py -> hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi
hard linking hellowsgi.egg-info/PKG-INFO -> hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info
hard linking hellowsgi.egg-info/SOURCES.txt -> hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info
hard linking hellowsgi.egg-info/dependency_links.txt -> hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info
hard linking hellowsgi.egg-info/entry_points.txt -> hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info
hard linking hellowsgi.egg-info/not-zip-safe -> hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info
hard linking hellowsgi.egg-info/top_level.txt -> hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info
copying setup.cfg -> hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA
Writing hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/setup.cfg
creating '/tmp/tmpGLKdb3/hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA.zip' and adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA' to it
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/setup.cfg'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/PKG-INFO'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/setup.py'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi/init.py'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info/not-zip-safe'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info/entry_points.txt'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info/top_level.txt'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info/PKG-INFO'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info/dependency_links.txt'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info/SOURCES.txt'
creating '/tmp/tmpGLKdb3/hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA.zip' and adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA' to it
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/setup.cfg'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/PKG-INFO'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/setup.py'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi/init.py'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info/not-zip-safe'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info/entry_points.txt'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info/top_level.txt'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info/PKG-INFO'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info/dependency_links.txt'
adding 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA/hellowsgi.egg-info/SOURCES.txt'
removing 'hellowsgi-0.0dev-8MiseM54u-pckYopRbj5PA' (and everything under it)
Hostout: Eggs to transport:
hellowsgi = 0.0dev-8MiseM54u-pckYopRbj5PA
[[email protected]:22] put: /tmp/tmpsioujf -> /home/simahawk/hostout_demo/pinned.cfg
Hostout: Running command 'uploadbuildout' from 'collective.hostout.fabfile'
[[email protected]:22] run: /bin/bash -l -c "PATH=$PATH:"/home/simahawk/hostout_demo" test -f /home/simahawk/buildout-cache/downloads/dist/deploy_LUVN1OSK7xBWNjIHKZXF6g.tgz || echo 'None'"
[[email protected]:22] run: /bin/bash -l -c "cd /home/simahawk/hostout_demo && PATH=$PATH:"/home/simahawk/hostout_demo" tar -p -xvf /home/simahawk/buildout-cache/downloads/dist/deploy_LUVN1OSK7xBWNjIHKZXF6g.tgz"
[[email protected]:22] out: base.cfg
[[email protected]:22] out: prod.cfg

Hostout: Running command 'buildout' from 'collective.hostout.fabfile'
[[email protected]:22] put: /tmp/tmpQjdhRM -> /home/simahawk/hostout_demo/host1-LUVN1OSK7xBWNjIHKZXF6g.cfg
[[email protected]:22] run: /bin/bash -l -c "cd /home/simahawk/hostout_demo && PATH=$PATH:"/home/simahawk/hostout_demo" test -e "$(echo pinned.cfg)""
[[email protected]:22] run: /bin/bash -l -c "cd /home/simahawk/hostout_demo && PATH=$PATH:"/home/simahawk/hostout_demo" bin/buildout -c host1-LUVN1OSK7xBWNjIHKZXF6g.cfg "
[[email protected]:22] out: Traceback (most recent call last):
[[email protected]:22] out: File "bin/buildout", line 17, in <module>
[[email protected]:22] out: import zc.buildout.buildout
[[email protected]:22] out: File "/usr/local/lib/python2.6/dist-packages/zc/buildout/buildout.py", line 39, in <module>
[[email protected]:22] out: import zc.buildout.download
[[email protected]:22] out: File "/usr/local/lib/python2.6/dist-packages/zc/buildout/download.py", line 20, in <module>
[[email protected]:22] out: from zc.buildout.easy_install import realpath
[[email protected]:22] out: File "/usr/local/lib/python2.6/dist-packages/zc/buildout/easy_install.py", line 81, in <module>
[[email protected]:22] out: pkg_resources.Requirement.parse('zc.buildout')).location
[[email protected]:22] out: AttributeError: 'NoneType' object has no attribute 'location'

Fatal error: run() received nonzero return code 1 while executing!

Requested: bin/buildout -c host1-LUVN1OSK7xBWNjIHKZXF6g.cfg
Executed: /bin/bash -l -c "cd /home/simahawk/hostout_demo && PATH=$PATH:"/home/simahawk/hostout_demo" bin/buildout -c host1-LUVN1OSK7xBWNjIHKZXF6g.cfg "

None
