espenak / awsfabrictasks
License: Other
ec2.api.ec2_rsync should be named ec2_rsync_upload, since it only supports syncing data to an instance (there is no way to specify the order of remote and local hosts).
I am trying to run a task on multiple hosts with the same tag [Name: Magento] and am getting the following error:
awsfabrictasks.ec2.api.MultipleInstancesWithSameNameError: More than one ec2 reservations with tag:Name=Magento
Basically, what I want to do is manage all of my Magento servers with a single command. Update the OS, pull the latest code from Git, etc... I'm trying to do this by using AWS tags but Fabric is not playing nice with me.
I mainline Ruby, and although my Python is really weak, I think Fabric is the right tool for this job. Am I missing something? Is it possible to tell AWSFab to run the same command on all my hosts? Is there a better way to do this?
Hi, I can't find a way to access the current instance. I need to create a volume in the same availability zone as the instance (using boto), but need to know the availability zone of the instance it will be attached to (it may be variable). I guess I'm trying to get the Ec2InstanceWrapper or underlying EC2Connection.
For some reason, env.host_string is empty (in fact, the entire env seems to be at its default values), so I can't look up the instance by its DNS name either.
from pprint import pprint
from fabric.api import task, env
from awsfabrictasks.decorators import ec2instance

@task
@ec2instance(tags={'Group': 'master'})
def build():
    pprint(env)
    #vol = conn.create_volume(VOLUME_SIZE, REGION)
env looks like this:
'ec2ids': None,
'ec2names': None,
'ec2tags': '',
...
'host': '',
'host_string': '',
'hosts': [],
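If I understand the API correctly, the wrapper for the instance a task is currently targeting can be looked up from env.host_string, which requires the task to be invoked with a host (e.g. via --ec2tags) rather than standalone. A hedged sketch; Ec2InstanceWrapper.get_from_host_string() and the boto attribute names are my reading of the awsfabrictasks/boto APIs, not verified against this setup, and VOLUME_SIZE is a placeholder:

```python
from fabric.api import task
from awsfabrictasks.ec2.api import Ec2InstanceWrapper

VOLUME_SIZE = 8  # GiB; placeholder value

@task
def build():
    # Only works when env.host_string is set, i.e. the task was started
    # with a host (e.g. awsfab --ec2tags=Group=master build).
    wrapper = Ec2InstanceWrapper.get_from_host_string()
    zone = wrapper.instance.placement    # availability zone, e.g. 'us-east-1a'
    conn = wrapper.instance.connection   # the boto EC2Connection it came from
    conn.create_volume(VOLUME_SIZE, zone)
```

If env really is at its defaults inside the task body, that would explain the lookup failing, since there is no host string to resolve.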
We are using boto 2.6.0, and I discovered that the ec2_list_instances call no longer works. It works fine with boto 2.5.2.
(pyenv)$ awsfab --awsfab-settings=awsfab_dev ec2_list_instances
id: <...>
owner_id: <...>
groups:
- <...>
instances:
- id: i-xxxxxxxxx (Name: <...>)
Traceback (most recent call last):
File "/<...>/pyenv/lib/python2.7/site-packages/fabric/main.py", line 717, in main
*args, **kwargs
File "/<...>/pyenv/lib/python2.7/site-packages/fabric/tasks.py", line 322, in execute
results['<local-only>'] = task.run(*args, **new_kwargs)
File "/<...>/pyenv/lib/python2.7/site-packages/fabric/tasks.py", line 112, in run
return self.wrapped(*args, **kwargs)
File "/<...>/pyenv/lib/python2.7/site-packages/awsfabrictasks/ec2/tasks.py", line 214, in ec2_list_instances
print_ec2_instance(instance, full=full, indentspaces=11)
File "/<...>/pyenv/lib/python2.7/site-packages/awsfabrictasks/ec2/api.py", line 423, in print_ec2_instance
value = instance.__dict__[attrname]
KeyError: 'state'
Looking at the changelog for boto 2.6.0 (boto/boto@2.5.2...2.6.0), specifically https://github.com/boto/boto/commits/develop/boto/ec2/instance.py, it appears that the state attribute was removed and a new InstanceState class was created to hold the EC2 instance state. So "state" is no longer among the attributes; it is now _state. placement was also changed and is now _placement (with an InstancePlacement class introduced).
In awsfabrictasks/ec2/api.py, print_ec2_instance does:
if full:
attrnames = sorted(instance.__dict__.keys())
else:
attrnames = ['state', 'instance_type', 'ip_address', 'public_dns_name',
'private_dns_name', 'private_ip_address',
'key_name', 'tags', 'placement']
If you change state -> _state and placement -> _placement, the code works, but you also have to remove the check:
if attrname.startswith('_'):
continue
Then you get:
id: r-xxxxxxx
owner_id: xxxxxxxxxxxx
groups:
- MyGroup (id:sg-xxxxxxxx)
instances:
- id: i-xxxxxxxx (Name: my_server)
_state: running(16)
instance_type: t1.micro
ip_address: xx.xx.xx.xx
public_dns_name: ec2-xx-xx-xx-xx.compute-1.amazonaws.com
private_dns_name: ip-xx-xxx-xxx-xxx.ec2.internal
private_ip_address: xx-xxx-xxx-xxx
key_name: my_key
tags: { u'Name': u'my_server'}
_placement: us-east-1a
Of course this won't work with older versions of boto - and doesn't really take advantage of the new data in boto 2.6.0 re: previous states. Anyway, I just wanted to report the issue. Not sure how best to handle the solution in a way to support both boto 2.5.2 and boto 2.6.0.
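One hedged way to support both boto versions at once is a small lookup helper that falls back to the underscore-prefixed attribute name when the public one is missing from __dict__. This is a sketch, not code from the repository; FakeNewInstance below is a stand-in mimicking boto 2.6.0's attribute layout:

```python
def get_display_attr(instance, attrname):
    """Return the named attribute, falling back to the _-prefixed name.

    boto 2.5.2 stores e.g. instance.__dict__['state'], while boto 2.6.0
    stores instance.__dict__['_state'], so try both spellings.
    """
    for name in (attrname, '_' + attrname):
        if name in instance.__dict__:
            return instance.__dict__[name]
    raise KeyError(attrname)

class FakeNewInstance(object):
    """Stand-in with boto 2.6.0-style attribute names."""
    def __init__(self):
        self._state = 'running(16)'
        self.instance_type = 't1.micro'

inst = FakeNewInstance()
print(get_display_attr(inst, 'state'))          # running(16)
print(get_display_attr(inst, 'instance_type'))  # t1.micro
```

With this, print_ec2_instance could keep the old attrnames list unchanged and still render values from either boto version.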
Awesome package, super useful!
With the utility installed, I get the following error when I run my standard fabric tasks:
$ fab pull
/Users/rbernabe/virtualenvs/puppet/lib/python2.7/site-packages/awsfabrictasks/conf.py:35: UserWarning: Could not find the env.awsfab_settings_module. Make sure you run run awsfab tasks using the ``awsfab`` command (not fab)?
warn('Could not find the env.awsfab_settings_module. Make sure you run run awsfab tasks using the ``awsfab`` command (not fab)?')
[puppet-east.xxxx.com] Executing task 'pull'
[puppet-east.xxxx.com] run: git pull
[puppet-east.xxxx.com] out: Already up-to-date.
[puppet-east.xxxx.com] out:
Done.
Disconnecting from puppet-east.xxxx.com... done.
Should I be using awsfab even for my non-AWS-specific tasks?
If one is managing instances across multiple AWS accounts (with different credentials, obviously), what is the recommended way to set this up?
The motivation is clearly stated in #2. We should be able to use:
with instancewrapper.instance_settings():
    print_uname()
As a shortcut for:
ssh_uri = instancewrapper.get_ssh_uri()
key_filename = instancewrapper.get_ssh_key_filename()
with settings(key_filename=key_filename, host_string=ssh_uri):
    # Run tasks here just as if you invoked ``awsfab -E myname <task>``
    print_uname()
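A minimal sketch of how such a context manager could work. In the real library it would delegate to fabric.api.settings(); the standalone version below substitutes a plain dict for Fabric's env (and a FakeWrapper for Ec2InstanceWrapper) so the save/restore behaviour can be seen in isolation:

```python
from contextlib import contextmanager

# Stand-in for Fabric's env, so the behaviour is visible without Fabric.
env = {'host_string': '', 'key_filename': None}

@contextmanager
def instance_settings(instancewrapper):
    """Temporarily point env at the wrapped instance, then restore it."""
    saved = dict(env)
    env['host_string'] = instancewrapper.get_ssh_uri()
    env['key_filename'] = instancewrapper.get_ssh_key_filename()
    try:
        yield
    finally:
        env.clear()
        env.update(saved)

class FakeWrapper(object):
    """Stand-in exposing the two wrapper methods named above."""
    def get_ssh_uri(self):
        return 'ubuntu@ec2-host.example.com'
    def get_ssh_key_filename(self):
        return '/keys/example.pem'

with instance_settings(FakeWrapper()):
    print(env['host_string'])  # ubuntu@ec2-host.example.com
print(env['host_string'])      # '' (restored)
```

The try/finally guarantees env is restored even if a task inside the block raises, which matches how Fabric's own settings() behaves.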
Hi there, I'm trying to solve the following problem with awsfabrictasks: I would like to first start a certain number of instances, and then execute tasks on them.
I wrote a small task setup that creates a list of Ec2LaunchInstances, starts them using Ec2LaunchInstance.run_many_instances, and calls Ec2InstanceWrapper(instance.instance).add_instance_to_env() on each of them.
The idea was to then call awsfab setup task1 task2, where task1 and task2 would then be executed on the instances started by setup. Unfortunately, add_instance_to_env() does not seem to actually add the instance to env, and so this approach doesn't work.
Now I was wondering: this use of env feels a little hacky; is there a better way? And why does add_instance_to_env() not result in that instance being added to env?
I'm loving this Python library, but there is one issue... Is it possible to specify an IAM role when launching an EC2 instance? Is there a way to specify this in the EC2_LAUNCH_CONFIGS?
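For reference, plain boto 2.x can attach an IAM role at launch via the instance_profile_name argument to run_instances. Whether EC2_LAUNCH_CONFIGS forwards extra keyword arguments like this is something I have not verified, so treat this as a hypothetical direct-boto fragment; conn is an existing boto EC2Connection and the names are placeholders:

```python
# Hypothetical direct-boto sketch; 'my-iam-role' is a placeholder
# IAM instance profile name, conn an existing EC2Connection.
reservation = conn.run_instances(
    'ami-xxxxxxxx',
    instance_type='t1.micro',
    key_name='my_key',
    instance_profile_name='my-iam-role',
)
```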
awsfab monkey-patches get_hosts, which has been renamed in the latest Fabric.
Hi,
My machines in my VPC do not have public addresses, which seems to be a requirement of this package. If that is the case, can I suggest that get_ssh_uri in ec2 fall back to private_dns_name?
I also found that my VPC settings were such that private DNS was not resolving, so it may be better if it could fall back all the way to the private IP address.
Maybe I'm missing something; I've not been using Fabric that long.
Cheers
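The fallback order suggested above (public DNS, then private DNS, then private IP) can be sketched as a plain function. This is an illustration, not the library's actual get_ssh_uri; the attribute names mirror boto's Instance, and the namedtuple is a stand-in for it:

```python
from collections import namedtuple

def get_ssh_uri(instance, user='root'):
    """Pick the first reachable-looking address: public DNS,
    then private DNS, then private IP (for VPC-only instances)."""
    host = (instance.public_dns_name
            or instance.private_dns_name
            or instance.private_ip_address)
    return '{0}@{1}'.format(user, host)

# Stand-in for a boto Instance with only the attributes we read.
Instance = namedtuple('Instance',
                      'public_dns_name private_dns_name private_ip_address')

vpc_only = Instance('', 'ip-10-0-0-1.ec2.internal', '10.0.0.1')
print(get_ssh_uri(vpc_only, user='ubuntu'))  # ubuntu@ip-10-0-0-1.ec2.internal
```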
It looks like we can't launch an EC2 instance into a specific VPC. I will submit a patch for that in the coming weeks.
Hi,
I'm trying to launch a t2.micro but it fails with:
<Response><Errors><Error><Code>VPCResourceNotSpecified</Code><Message>The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request.</Message></Error></Errors><RequestID>680064e3-9618-4a40-9b90-2d80487bbd74</RequestID></Response>
config file:
EC2_LAUNCH_CONFIGS = {
't2.micro': {
'description': 'Ubuntu trusty64.',
# Ami ID (E.g.: ami-fb665f8f)
'ami': 'ami-<id>',
# One of: m1.small, m1.large, m1.xlarge, c1.medium, c1.xlarge, m2.xlarge, m2.2xlarge, m2.4xlarge, cc1.4xlarge, t1.micro
'instance_type': 't2.micro',
# List of security groups
'security_groups': ['default'],
'subnet': 'subnet-<id>',
# Use the ``list_regions`` task to see all available regions
'region': DEFAULT_REGION,
# The name of the key pair to use for instances (See http://console.aws.amazon.com -> EC2 -> Key Pairs)
'key_name': 'raidkey',
# The availability zone in which to launch the instances. This is
# automatically prefixed by ``region``.
'availability_zone': 'a',
#Subnet id
# Tags to add to the instances. You can use the ``ec2_*_tag`` tasks or
# the management interface to manage tags. Special tags:
# - Name: Should not be in this dict. It is specified when launching
# an instance (needs to be unique for each instance).
# - awsfab-ssh-user: The ``awsfab`` tasks use this user to log into your instance.
'tags': {
'awsfab-ssh-user': 'ubuntu',
},
'user_data': user_data_example
}
}
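The 'subnet' key in the config above may simply be ignored by the launch code, which would explain the error. In plain boto, launching a VPC-only type like t2.micro means passing subnet_id to run_instances, and VPC instances take security group IDs rather than names. A hypothetical direct-boto call, assuming boto 2.x, with placeholder IDs:

```python
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')
reservation = conn.run_instances(
    'ami-xxxxxxxx',
    instance_type='t2.micro',
    key_name='raidkey',
    subnet_id='subnet-xxxxxxxx',         # required for VPC-only types
    security_group_ids=['sg-xxxxxxxx'],  # IDs, not names, inside a VPC
)
```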
When I decorate a task with @ec2instance, it prompts me for a password for the current local user of my local host (instead of using the awsfab-ssh-user user and the default SSH key). I conducted a little test to compare the behaviour of a task decorated with @ec2instance(tags=...) and started without --ec2tags, against a similar task, not decorated with @ec2instance, but started with --ec2tags=...:
# fabfile.py
from fabric.api import task, run
from awsfabrictasks.decorators import ec2instance

@task
@ec2instance(tags={'Environment': 'benchmark'})
def whoami_decorated():
    run('whoami')

@task
def whoami_not_decorated():
    run('whoami')
I would expect awsfab whoami_decorated and awsfab --ec2tags=Environment=benchmark whoami_not_decorated to produce the same results. Alas, awsfab whoami_decorated fails:
leo@localhost $ awsfab whoami_decorated
[ec2-IP1.us-west-2.compute.amazonaws.com] Executing task 'whoami_decorated'
[ec2-IP1.us-west-2.compute.amazonaws.com] run: whoami
[ec2-IP1.us-west-2.compute.amazonaws.com] Login password for 'leo': ^C
but awsfab --ec2tags=Environment=benchmark whoami_not_decorated succeeds:
leo@localhost $ awsfab --ec2tags=Environment=benchmark whoami_not_decorated
[[email protected]] Executing task 'whoami_not_decorated'
[[email protected]] run: whoami
[[email protected]] out: root
[[email protected]] Executing task 'whoami_not_decorated'
[[email protected]] run: whoami
[[email protected]] out: root
[[email protected]] Executing task 'whoami_not_decorated'
[[email protected]] run: whoami
[[email protected]] out: root
Done.
Disconnecting from [email protected]... done.
Disconnecting from [email protected]... done.
Disconnecting from [email protected]... done.
Now I'm wondering: why doesn't whoami_decorated use my SSH key and the awsfab-ssh-user (set to root)?
[This is about @ec2instance just like issue #15, but it is concerned with how it works, not when it is executed, so I decided to open a new issue for it.]