getmoto / moto

A library that allows you to easily mock out tests based on AWS infrastructure.

Home Page: http://docs.getmoto.org/en/latest/

License: Apache License 2.0

Languages: Python 99.67%, ANTLR 0.11%, Java 0.06%, HCL 0.05%, HTML 0.03%, Shell 0.03%, C# 0.02%, Makefile 0.01%, JavaScript 0.01%, Ruby 0.01%, Dockerfile 0.01%, Scala 0.01%, Jinja 0.01%

Topics: aws, boto, ec2, s3

moto's People

Contributors: acsbendi, asherf, bblommers, bpandola, chrishenry, cm-iwata, dependabot[bot], dreadpirateshawn, github-actions[bot], gmcrocetti, gruebel, hltbra, jackdanger, jbbarth, joekiller, kbalk, kouk, macnev2013, mfulleratlassian, mikegrima, pinzon, rafcio19, sleepdeprecation, spulec, terrycain, toshitanian, tsugumi-sys, usmangani1, viren-nadkarni, william-richard

moto's Issues

dynamodb2 support

Hello,

This is a question more than an issue. moto looks great, but I would really like dynamodb2 support (my experiments suggest that dynamodb2 is not supported, but correct me if I'm wrong). I'm planning to fork and add support where I need it for now, then chip away at full support. I'd like to know if you have any suggestions or guidance for how I should go about making the modifications. My initial plan is to copy the dynamodb package to dynamodb2 to mirror the boto structure. This will obviously mean that only one of dynamodb and dynamodb2 can be mocked at the same time, and it will need a separate decorator/context manager. This seems the cleanest to me. Any and all advice would be most welcome, and thanks for the great library.

-Jonathan
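
For illustration, the usage I have in mind would mirror the existing decorator. Note that mock_dynamodb2 is hypothetical here; it does not exist yet:

from moto import mock_dynamodb2  # hypothetical -- the decorator I propose adding

@mock_dynamodb2
def test_dynamodb2():
    import boto.dynamodb2.layer1
    conn = boto.dynamodb2.layer1.DynamoDBConnection()
    conn.list_tables()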

Select port for moto_server?

Looking at the code, I may have missed it, but is there a way to select a different port for moto_server? (For what it's worth, the stand-alone S3 server issue below invokes moto_server s3 --port 23222, so a --port flag appears to exist, or was added at some point.)

listing of keys is inconsistent with boto's implementation

The following code demonstrates that the listing of keys is inconsistent with boto's implementation.

Here is the output:

real s3
[u'toplevel/x/key', u'toplevel/x/y/key', u'toplevel/x/y/z/key', u'toplevel/y.key1', u'toplevel/y.key2', u'toplevel/y.key3']
[u'toplevel/y.key1', u'toplevel/y.key2', u'toplevel/y.key3', u'toplevel/x/']

mocked s3
[u'toplevel/y.key1', u'toplevel/y.key2', u'toplevel/y.key3', u'toplevel/x']
[u'toplevel/y.key1', u'toplevel/y.key2', u'toplevel/y.key3', u'toplevel/x']

The test code:

import boto
from boto.s3.key import Key
from moto import mock_s3

BUCKET = 'dilshodtest1bucket'
prefix = 'toplevel/'

def main():

    conn = boto.connect_s3()
    conn.create_bucket(BUCKET)
    bucket = conn.get_bucket(BUCKET)

    def store(name):
        k = Key(bucket, prefix + name)
        k.set_contents_from_string('somedata')

    names = ['x/key', 'y.key1', 'y.key2', 'y.key3', 'x/y/key', 'x/y/z/key']

    for name in names:
        store(name)

    delimiter = None

    keys = [x.name for x in bucket.list(prefix, delimiter)]

    print keys

    delimiter = '/'
    keys = [x.name for x in bucket.list(prefix, delimiter)]

    print keys


if __name__ == '__main__':

    print 'real s3'

    main()

    print 'mocked s3'

    mock = mock_s3()
    mock.start()
    main()
    mock.stop()
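
For reference, here is a rough pure-Python sketch of the delimiter behavior the "real s3" output exhibits: keys containing the delimiter beyond the prefix are rolled up into a single common prefix, which S3 reports after the plain keys (this ignores real S3 details such as pagination and markers):

def list_with_delimiter(keys, prefix, delimiter):
    # Keys with no delimiter past the prefix are returned as-is; the rest
    # collapse into common prefixes, which S3 lists after the keys.
    plain, common = [], []
    for name in sorted(keys):
        if not name.startswith(prefix):
            continue
        rest = name[len(prefix):]
        if delimiter and delimiter in rest:
            rolled = prefix + rest.split(delimiter)[0] + delimiter
            if rolled not in common:
                common.append(rolled)
        else:
            plain.append(name)
    return plain + common

# list_with_delimiter(['toplevel/' + n for n in names], 'toplevel/', '/')
# reproduces the "real s3" output above.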

Support transition of ec2 instance from pending to running

After launching an instance, I wait for it to reach the expected state:

    def wait_for_state(self, state):
        found = False
        retries = 0
        while not found and retries < 15:
            self.ec2.update()
            if self.ec2.state == state:
                found = True
                break
            else:
                retries = retries + 1 
                logging.info("Waiting 5 seconds for host to change from %s to %s..." % (self.ec2.state, state))
                time.sleep(5)

        if not found:
            raise Exception("Instance did not change state to %s " % state)

This loop never terminates under moto, however, because the instance state never changes. On real EC2, an instance starts out as "pending" and later transitions to "running".
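
A minimal sketch of the behavior being requested, assuming the backend keeps a state field per instance (this is not moto's actual code): the first status poll after launch flips the instance from pending to running.

class FakeInstance(object):
    # Hypothetical backend object; moto's real model may differ.
    def __init__(self):
        self.state = 'pending'  # launch as pending, like real EC2

    def update(self):
        # A describe/status call is a natural point to advance the state.
        if self.state == 'pending':
            self.state = 'running'
        return self.state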

S3 mock makes SimpleDB not work

Apart from using S3 and DynamoDB, I am also using SimpleDB. During the unit tests I have enabled mocks for S3 and DynamoDB - but that made SimpleDB stop working.

Here is a snippet to reproduce the problem:

import boto
from moto import mock_s3

mock = mock_s3()
mock.start()

sdb = boto.connect_sdb()
domain = sdb.get_domain('domain_name')

The call to get_domain makes boto fail with a BadStatusLine exception, but I am unsure why, because moto shouldn't affect SimpleDB (AFAIK there is no support for it yet). One plausible culprit: moto's bundled HTTPretty patches the socket layer globally, so it intercepts every HTTP request, not just the S3 ones.

Moto Server

Using the backend configurations, set up a moto-server command for non-Python usage.
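
A sketch of what that could look like in setup.py, assuming a main() entry point in a moto.server module (both names are assumptions):

from setuptools import setup

setup(
    name='moto',
    # ... other metadata ...
    entry_points={
        'console_scripts': [
            'moto_server = moto.server:main',  # assumed module:function
        ],
    },
)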

S3 Bucket.get_key(<NAME>).size is equal to 0

With a bucket "bucket" and a key with name "name", the following is equal to 0 regardless of the contents of the key:

bucket.get_key(name).size

Tested with the following:

import boto
from boto.s3.key import Key
from moto import mock_s3
import sure  # noqa -- enables the .should assertions

@mock_s3
def test_get_key_size():
    conn = boto.connect_s3('the_key', 'the_secret')
    b = conn.create_bucket("test")
    k = Key(b, "hi")
    k.set_contents_from_string("hello")
    k.size.should.equal(5)
    b.get_key("hi")
    b.get_key("hi").size.should.equal(5)

The test fails on the last assertion (it gets 0 while expecting 5).

Demonstrate how to mock describe_instances for existing instances

Can you please demonstrate how to mock a call to get_all_instances() so that it returns a preconfigured set of instances? Here's the code I want to test:

conn = boto.ec2.connect_to_region(
        self.region, aws_access_key_id=self.access_key,
        aws_secret_access_key=self.secret_key)
reservations = conn.get_all_instances()

How can I mock the response to the 'reservations' variable?

Thanks
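
For anyone else searching, this is the pattern I would expect to work: seed the mocked backend by launching an instance inside the mock, then call get_all_instances() (the AMI id is a placeholder):

import boto
from moto import mock_ec2

@mock_ec2
def test_get_all_instances():
    conn = boto.connect_ec2('the_key', 'the_secret')
    reservation = conn.run_instances('ami-12345678')  # seeds the mock backend
    reservations = conn.get_all_instances()
    assert len(reservations) == 1
    assert reservations[0].instances[0].id == reservation.instances[0].id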

Creating key with empty contents fails

I'm using empty files as flags to trigger a compaction process, and it would appear that moto doesn't support setting empty contents :)

There's quite a large callstack and I'm not sure what's going on.

============================= test session starts ==============================
platform darwin -- Python 2.7.2 -- pytest-2.3.4
collected 9 items

tests/test_s3.py F

=================================== FAILURES ===================================
__________________________________ test_touch __________________________________

args = (), kwargs = {}

    def wrapper(*args, **kwargs):
        with self:
>           result = func(*args, **kwargs)

../../../../.virtualenvs/echo-compactor/src/moto/moto/core/models.py:47: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

    @mock_s3
    def test_touch():
        bucket = s3.get_bucket(create=True)

>       s3.touch(bucket, 'foo')

tests/test_s3.py:59: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

bucket = <Bucket: fireteam-test-lucian>, key_name = 'foo'

    def touch(bucket, key_name):
        key = bucket.new_key(key_name)
>       key.set_contents_from_string('')

compactor/s3.py:27: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <Key: fireteam-test-lucian,foo>, s = '', headers = None, replace = True
cb = None, num_cb = 10, policy = None, md5 = None, reduced_redundancy = False
encrypt_key = False

    def set_contents_from_string(self, s, headers=None, replace=True,
                                 cb=None, num_cb=10, policy=None, md5=None,
                                 reduced_redundancy=False,
                                 encrypt_key=False):
        """
            Store an object in S3 using the name of the Key object as the
            key in S3 and the string 's' as the contents.
            See set_contents_from_file method for details about the
            parameters.

            :type headers: dict
            :param headers: Additional headers to pass along with the
                request to AWS.

            :type replace: bool
            :param replace: If True, replaces the contents of the file if
                it already exists.

            :type cb: function
            :param cb: a callback function that will be called to report
                progress on the upload.  The callback should accept two
                integer parameters, the first representing the number of
                bytes that have been successfully transmitted to S3 and
                the second representing the size of the to be transmitted
                object.

            :type cb: int
            :param num_cb: (optional) If a callback is specified with the
                cb parameter this parameter determines the granularity of
                the callback by defining the maximum number of times the
                callback will be called during the file transfer.

            :type policy: :class:`boto.s3.acl.CannedACLStrings`
            :param policy: A canned ACL policy that will be applied to the
                new key in S3.

            :type md5: A tuple containing the hexdigest version of the MD5
                checksum of the file as the first element and the
                Base64-encoded version of the plain checksum as the second
                element.  This is the same format returned by the
                compute_md5 method.
            :param md5: If you need to compute the MD5 for any reason
                prior to upload, it's silly to have to do it twice so this
                param, if present, will be used as the MD5 values of the
                file.  Otherwise, the checksum will be computed.

            :type reduced_redundancy: bool
            :param reduced_redundancy: If True, this will set the storage
                class of the new Key to be REDUCED_REDUNDANCY. The Reduced
                Redundancy Storage (RRS) feature of S3, provides lower
                redundancy at lower storage cost.

            :type encrypt_key: bool
            :param encrypt_key: If True, the new copy of the object will
                be encrypted on the server-side by S3 and will be stored
                in an encrypted form while at rest in S3.
            """
        if isinstance(s, unicode):
            s = s.encode("utf-8")
        fp = StringIO.StringIO(s)
        r = self.set_contents_from_file(fp, headers, replace, cb, num_cb,
                                        policy, md5, reduced_redundancy,
>                                       encrypt_key=encrypt_key)

../../../../.virtualenvs/echo-compactor/lib/python2.7/site-packages/boto/s3/key.py:1253: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <Key: fireteam-test-lucian,foo>
fp = <StringIO.StringIO instance at 0x10d05ac68>, headers = {}, replace = True
cb = None, num_cb = 10, policy = None
md5 = ('d41d8cd98f00b204e9800998ecf8427e', '1B2M2Y8AsgTpgAmY7PhCfg==')
reduced_redundancy = False, query_args = None, encrypt_key = False, size = 0
rewind = False

    def set_contents_from_file(self, fp, headers=None, replace=True,
                               cb=None, num_cb=10, policy=None, md5=None,
                               reduced_redundancy=False, query_args=None,
                               encrypt_key=False, size=None, rewind=False):
        """
            Store an object in S3 using the name of the Key object as the
            key in S3 and the contents of the file pointed to by 'fp' as the
            contents. The data is read from 'fp' from its current position until
            'size' bytes have been read or EOF.

            :type fp: file
            :param fp: the file whose contents to upload

            :type headers: dict
            :param headers: Additional HTTP headers that will be sent with
                the PUT request.

            :type replace: bool
            :param replace: If this parameter is False, the method will
                first check to see if an object exists in the bucket with
                the same key.  If it does, it won't overwrite it.  The
                default value is True which will overwrite the object.

            :type cb: function
            :param cb: a callback function that will be called to report
                progress on the upload.  The callback should accept two
                integer parameters, the first representing the number of
                bytes that have been successfully transmitted to S3 and
                the second representing the size of the to be transmitted
                object.

            :type cb: int
            :param num_cb: (optional) If a callback is specified with the
                cb parameter this parameter determines the granularity of
                the callback by defining the maximum number of times the
                callback will be called during the file transfer.

            :type policy: :class:`boto.s3.acl.CannedACLStrings`
            :param policy: A canned ACL policy that will be applied to the
                new key in S3.

            :type md5: A tuple containing the hexdigest version of the MD5
                checksum of the file as the first element and the
                Base64-encoded version of the plain checksum as the second
                element.  This is the same format returned by the
                compute_md5 method.
            :param md5: If you need to compute the MD5 for any reason
                prior to upload, it's silly to have to do it twice so this
                param, if present, will be used as the MD5 values of the
                file.  Otherwise, the checksum will be computed.

            :type reduced_redundancy: bool
            :param reduced_redundancy: If True, this will set the storage
                class of the new Key to be REDUCED_REDUNDANCY. The Reduced
                Redundancy Storage (RRS) feature of S3, provides lower
                redundancy at lower storage cost.

            :type encrypt_key: bool
            :param encrypt_key: If True, the new copy of the object will
                be encrypted on the server-side by S3 and will be stored
                in an encrypted form while at rest in S3.

            :type size: int
            :param size: (optional) The Maximum number of bytes to read
                from the file pointer (fp). This is useful when uploading
                a file in multiple parts where you are splitting the file
                up into different ranges to be uploaded. If not specified,
                the default behaviour is to read all bytes from the file
                pointer. Less bytes may be available.

            :type rewind: bool
            :param rewind: (optional) If True, the file pointer (fp) will
                be rewound to the start before any bytes are read from
                it. The default behaviour is False which reads from the
                current position of the file pointer (fp).

            :rtype: int
            :return: The number of bytes written to the key.
            """
        provider = self.bucket.connection.provider
        headers = headers or {}
        if policy:
            headers[provider.acl_header] = policy
        if encrypt_key:
            headers[provider.server_side_encryption_header] = 'AES256'

        if rewind:
            # caller requests reading from beginning of fp.
            fp.seek(0, os.SEEK_SET)
        else:
            # The following seek/tell/seek logic is intended
            # to detect applications using the older interface to
            # set_contents_from_file(), which automatically rewound the
            # file each time the Key was reused. This changed with commit
            # 14ee2d03f4665fe20d19a85286f78d39d924237e, to support uploads
            # split into multiple parts and uploaded in parallel, and at
            # the time of that commit this check was added because otherwise
            # older programs would get a success status and upload an empty
            # object. Unfortuantely, it's very inefficient for fp's implemented
            # by KeyFile (used, for example, by gsutil when copying between
            # providers). So, we skip the check for the KeyFile case.
            # TODO: At some point consider removing this seek/tell/seek
            # logic, after enough time has passed that it's unlikely any
            # programs remain that assume the older auto-rewind interface.
            if not isinstance(fp, KeyFile):
                spos = fp.tell()
                fp.seek(0, os.SEEK_END)
                if fp.tell() == spos:
                    fp.seek(0, os.SEEK_SET)
                    if fp.tell() != spos:
                        # Raise an exception as this is likely a programming
                        # error whereby there is data before the fp but nothing
                        # after it.
                        fp.seek(spos)
                        raise AttributeError('fp is at EOF. Use rewind option '
                                             'or seek() to data start.')
                # seek back to the correct position.
                fp.seek(spos)

        if reduced_redundancy:
            self.storage_class = 'REDUCED_REDUNDANCY'
            if provider.storage_class_header:
                headers[provider.storage_class_header] = self.storage_class
                # TODO - What if provider doesn't support reduced reduncancy?
                # What if different providers provide different classes?
        if hasattr(fp, 'name'):
            self.path = fp.name
        if self.bucket != None:
            if not md5 and provider.supports_chunked_transfer():
                # defer md5 calculation to on the fly and
                # we don't know anything about size yet.
                chunked_transfer = True
                self.size = None
            else:
                chunked_transfer = False
                if isinstance(fp, KeyFile):
                    # Avoid EOF seek for KeyFile case as it's very inefficient.
                    key = fp.getkey()
                    size = key.size - fp.tell()
                    self.size = size
                    # At present both GCS and S3 use MD5 for the etag for
                    # non-multipart-uploaded objects. If the etag is 32 hex
                    # chars use it as an MD5, to avoid having to read the file
                    # twice while transferring.
                    if (re.match('^"[a-fA-F0-9]{32}"$', key.etag)):
                        etag = key.etag.strip('"')
                        md5 = (etag, base64.b64encode(binascii.unhexlify(etag)))
                if not md5:
                    # compute_md5() and also set self.size to actual
                    # size of the bytes read computing the md5.
                    md5 = self.compute_md5(fp, size)
                    # adjust size if required
                    size = self.size
                elif size:
                    self.size = size
                else:
                    # If md5 is provided, still need to size so
                    # calculate based on bytes to end of content
                    spos = fp.tell()
                    fp.seek(0, os.SEEK_END)
                    self.size = fp.tell() - spos
                    fp.seek(spos)
                    size = self.size
                self.md5 = md5[0]
                self.base64md5 = md5[1]

            if self.name == None:
                self.name = self.md5
            if not replace:
                if self.bucket.lookup(self.name):
                    return

            self.send_file(fp, headers=headers, cb=cb, num_cb=num_cb,
                           query_args=query_args,
>                          chunked_transfer=chunked_transfer, size=size)

../../../../.virtualenvs/echo-compactor/lib/python2.7/site-packages/boto/s3/key.py:1121: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <Key: fireteam-test-lucian,foo>
fp = <StringIO.StringIO instance at 0x10d05ac68>
headers = {'Content-Length': '0', 'Content-MD5': '1B2M2Y8AsgTpgAmY7PhCfg==', 'Content-Type': 'application/octet-stream', 'Expect': '100-Continue', ...}
cb = None, num_cb = 10, query_args = None, chunked_transfer = False, size = 0

    def send_file(self, fp, headers=None, cb=None, num_cb=10,
                  query_args=None, chunked_transfer=False, size=None):
        """
            Upload a file to a key into a bucket on S3.

            :type fp: file
            :param fp: The file pointer to upload. The file pointer must
                point point at the offset from which you wish to upload.
                ie. if uploading the full file, it should point at the
                start of the file. Normally when a file is opened for
                reading, the fp will point at the first byte. See the
                bytes parameter below for more info.

            :type headers: dict
            :param headers: The headers to pass along with the PUT request

            :type cb: function
            :param cb: a callback function that will be called to report
                progress on the upload.  The callback should accept two
                integer parameters, the first representing the number of
                bytes that have been successfully transmitted to S3 and
                the second representing the size of the to be transmitted
                object.

            :type num_cb: int
            :param num_cb: (optional) If a callback is specified with the
                cb parameter this parameter determines the granularity of
                the callback by defining the maximum number of times the
                callback will be called during the file
                transfer. Providing a negative integer will cause your
                callback to be called with each buffer read.

            :type size: int
            :param size: (optional) The Maximum number of bytes to read
                from the file pointer (fp). This is useful when uploading
                a file in multiple parts where you are splitting the file
                up into different ranges to be uploaded. If not specified,
                the default behaviour is to read all bytes from the file
                pointer. Less bytes may be available.
            """
        provider = self.bucket.connection.provider
        try:
            spos = fp.tell()
        except IOError:
            spos = None
            self.read_from_stream = False

        def sender(http_conn, method, path, data, headers):
            # This function is called repeatedly for temporary retries
            # so we must be sure the file pointer is pointing at the
            # start of the data.
            if spos is not None and spos != fp.tell():
                fp.seek(spos)
            elif spos is None and self.read_from_stream:
                # if seek is not supported, and we've read from this
                # stream already, then we need to abort retries to
                # avoid setting bad data.
                raise provider.storage_data_error(
                    'Cannot retry failed request. fp does not support seeking.')

            http_conn.putrequest(method, path)
            for key in headers:
                http_conn.putheader(key, headers[key])
            http_conn.endheaders()

            # Calculate all MD5 checksums on the fly, if not already computed
            if not self.base64md5:
                m = md5()
            else:
                m = None

            save_debug = self.bucket.connection.debug
            self.bucket.connection.debug = 0
            # If the debuglevel < 3 we don't want to show connection
            # payload, so turn off HTTP connection-level debug output (to
            # be restored below).
            # Use the getattr approach to allow this to work in AppEngine.
            if getattr(http_conn, 'debuglevel', 0) < 3:
                http_conn.set_debuglevel(0)

            data_len = 0
            if cb:
                if size:
                    cb_size = size
                elif self.size:
                    cb_size = self.size
                else:
                    cb_size = 0
                if chunked_transfer and cb_size == 0:
                    # For chunked Transfer, we call the cb for every 1MB
                    # of data transferred, except when we know size.
                    cb_count = (1024 * 1024) / self.BufferSize
                elif num_cb > 1:
                    cb_count = int(math.ceil(cb_size / self.BufferSize / (num_cb - 1.0)))
                elif num_cb < 0:
                    cb_count = -1
                else:
                    cb_count = 0
                i = 0
                cb(data_len, cb_size)

            bytes_togo = size
            if bytes_togo and bytes_togo < self.BufferSize:
                chunk = fp.read(bytes_togo)
            else:
                chunk = fp.read(self.BufferSize)
            if spos is None:
                # read at least something from a non-seekable fp.
                self.read_from_stream = True
            while chunk:
                chunk_len = len(chunk)
                data_len += chunk_len
                if chunked_transfer:
                    http_conn.send('%x;\r\n' % chunk_len)
                    http_conn.send(chunk)
                    http_conn.send('\r\n')
                else:
                    http_conn.send(chunk)
                if m:
                    m.update(chunk)
                if bytes_togo:
                    bytes_togo -= chunk_len
                    if bytes_togo <= 0:
                        break
                if cb:
                    i += 1
                    if i == cb_count or cb_count == -1:
                        cb(data_len, cb_size)
                        i = 0
                if bytes_togo and bytes_togo < self.BufferSize:
                    chunk = fp.read(bytes_togo)
                else:
                    chunk = fp.read(self.BufferSize)

            self.size = data_len

            if m:
                # Use the chunked trailer for the digest
                hd = m.hexdigest()
                self.md5, self.base64md5 = self.get_md5_from_hexdigest(hd)

            if chunked_transfer:
                http_conn.send('0\r\n')
                    # http_conn.send("Content-MD5: %s\r\n" % self.base64md5)
                http_conn.send('\r\n')

            if cb and (cb_count <= 1 or i > 0) and data_len > 0:
                cb(data_len, cb_size)

            response = http_conn.getresponse()
            body = response.read()
            http_conn.set_debuglevel(save_debug)
            self.bucket.connection.debug = save_debug
            if ((response.status == 500 or response.status == 503 or
                    response.getheader('location')) and not chunked_transfer):
                # we'll try again.
                return response
            elif response.status >= 200 and response.status <= 299:
                self.etag = response.getheader('etag')
                if self.etag != '"%s"' % self.md5:
                    raise provider.storage_data_error(
                        'ETag from S3 did not match computed MD5')
                return response
            else:
                raise provider.storage_response_error(
                    response.status, response.reason, body)

        if not headers:
            headers = {}
        else:
            headers = headers.copy()
        headers['User-Agent'] = UserAgent
        if self.storage_class != 'STANDARD':
            headers[provider.storage_class_header] = self.storage_class
        if 'Content-Encoding' in headers:
            self.content_encoding = headers['Content-Encoding']
        if 'Content-Language' in headers:
            self.content_encoding = headers['Content-Language']
        if 'Content-Type' in headers:
            # Some use cases need to suppress sending of the Content-Type
            # header and depend on the receiving server to set the content
            # type. This can be achieved by setting headers['Content-Type']
            # to None when calling this method.
            if headers['Content-Type'] is None:
                # Delete null Content-Type value to skip sending that header.
                del headers['Content-Type']
            else:
                self.content_type = headers['Content-Type']
        elif self.path:
            self.content_type = mimetypes.guess_type(self.path)[0]
            if self.content_type == None:
                self.content_type = self.DefaultContentType
            headers['Content-Type'] = self.content_type
        else:
            headers['Content-Type'] = self.content_type
        if self.base64md5:
            headers['Content-MD5'] = self.base64md5
        if chunked_transfer:
            headers['Transfer-Encoding'] = 'chunked'
            #if not self.base64md5:
            #    headers['Trailer'] = "Content-MD5"
        else:
            headers['Content-Length'] = str(self.size)
        headers['Expect'] = '100-Continue'
        headers = boto.utils.merge_meta(headers, self.metadata, provider)
        resp = self.bucket.connection.make_request('PUT', self.bucket.name,
                                                   self.name, headers,
                                                   sender=sender,
>                                                  query_args=query_args)

../../../../.virtualenvs/echo-compactor/lib/python2.7/site-packages/boto/s3/key.py:827: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = S3Connection:s3.amazonaws.com, method = 'PUT'
bucket = 'fireteam-test-lucian', key = 'foo'
headers = {'Content-Length': '0', 'Content-MD5': '1B2M2Y8AsgTpgAmY7PhCfg==', 'Content-Type': 'application/octet-stream', 'Expect': '100-Continue', ...}
data = '', query_args = None, sender = <function sender at 0x10d0916e0>
override_num_retries = None

    def make_request(self, method, bucket='', key='', headers=None, data='',
            query_args=None, sender=None, override_num_retries=None):
        if isinstance(bucket, self.bucket_class):
            bucket = bucket.name
        if isinstance(key, Key):
            key = key.name
        path = self.calling_format.build_path_base(bucket, key)
        boto.log.debug('path=%s' % path)
        auth_path = self.calling_format.build_auth_path(bucket, key)
        boto.log.debug('auth_path=%s' % auth_path)
        host = self.calling_format.build_host(self.server_name(), bucket)
        if query_args:
            path += '?' + query_args
            boto.log.debug('path=%s' % path)
            auth_path += '?' + query_args
            boto.log.debug('auth_path=%s' % auth_path)
        return AWSAuthConnection.make_request(self, method, path, headers,
                data, host, auth_path, sender,
>               override_num_retries=override_num_retries)

../../../../.virtualenvs/echo-compactor/lib/python2.7/site-packages/boto/s3/connection.py:490: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = S3Connection:s3.amazonaws.com, method = 'PUT', path = '/foo'
headers = {'Content-Length': '0', 'Content-MD5': '1B2M2Y8AsgTpgAmY7PhCfg==', 'Content-Type': 'application/octet-stream', 'Expect': '100-Continue', ...}
data = '', host = 'fireteam-test-lucian.s3.amazonaws.com'
auth_path = '/fireteam-test-lucian/foo'
sender = <function sender at 0x10d0916e0>, override_num_retries = None
params = {}

    def make_request(self, method, path, headers=None, data='', host=None,
                     auth_path=None, sender=None, override_num_retries=None,
                     params=None):
        """Makes a request to the server, with stock multiple-retry logic."""
        if params is None:
            params = {}
        http_request = self.build_base_http_request(method, path, auth_path,
                                                    params, headers, data, host)
>       return self._mexe(http_request, sender, override_num_retries)

../../../../.virtualenvs/echo-compactor/lib/python2.7/site-packages/boto/connection.py:932: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = S3Connection:s3.amazonaws.com
request = <boto.connection.HTTPRequest object at 0x10d092850>
sender = <function sender at 0x10d0916e0>, override_num_retries = None
retry_handler = None

    def _mexe(self, request, sender=None, override_num_retries=None,
              retry_handler=None):
        """
            mexe - Multi-execute inside a loop, retrying multiple times to handle
                   transient Internet errors by simply trying again.
                   Also handles redirects.

            This code was inspired by the S3Utils classes posted to the boto-users
            Google group by Larry Bates.  Thanks!

            """
        boto.log.debug('Method: %s' % request.method)
        boto.log.debug('Path: %s' % request.path)
        boto.log.debug('Data: %s' % request.body)
        boto.log.debug('Headers: %s' % request.headers)
        boto.log.debug('Host: %s' % request.host)
        response = None
        body = None
        e = None
        if override_num_retries is None:
            num_retries = config.getint('Boto', 'num_retries', self.num_retries)
        else:
            num_retries = override_num_retries
        i = 0
        connection = self.get_http_connection(request.host, self.is_secure)
        while i <= num_retries:
            # Use binary exponential backoff to desynchronize client requests.
            next_sleep = random.random() * (2 ** i)
            try:
                # we now re-sign each request before it is retried
                boto.log.debug('Token: %s' % self.provider.security_token)
                request.authorize(connection=self)
                if callable(sender):
                    response = sender(connection, request.method, request.path,
>                                     request.body, request.headers)

../../../../.virtualenvs/echo-compactor/lib/python2.7/site-packages/boto/connection.py:832: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

http_conn = <httplib.HTTPSConnection instance at 0x10d07e518>, method = 'PUT'
path = '/foo', data = ''
headers = {'Authorization': 'AWS AKIAIE37U5TPZHCR54HA:lVFOGGJoI8vGypGdsjzqUwxzYhM=', 'Content-Length': '0', 'Content-MD5': '1B2M2Y8AsgTpgAmY7PhCfg==', 'Content-Type': 'application/octet-stream', ...}

    def sender(http_conn, method, path, data, headers):
        # This function is called repeatedly for temporary retries
        # so we must be sure the file pointer is pointing at the
        # start of the data.
        if spos is not None and spos != fp.tell():
            fp.seek(spos)
        elif spos is None and self.read_from_stream:
            # if seek is not supported, and we've read from this
            # stream already, then we need to abort retries to
            # avoid setting bad data.
            raise provider.storage_data_error(
                'Cannot retry failed request. fp does not support seeking.')

        http_conn.putrequest(method, path)
        for key in headers:
            http_conn.putheader(key, headers[key])
        http_conn.endheaders()

        # Calculate all MD5 checksums on the fly, if not already computed
        if not self.base64md5:
            m = md5()
        else:
            m = None

        save_debug = self.bucket.connection.debug
        self.bucket.connection.debug = 0
        # If the debuglevel < 3 we don't want to show connection
        # payload, so turn off HTTP connection-level debug output (to
        # be restored below).
        # Use the getattr approach to allow this to work in AppEngine.
        if getattr(http_conn, 'debuglevel', 0) < 3:
            http_conn.set_debuglevel(0)

        data_len = 0
        if cb:
            if size:
                cb_size = size
            elif self.size:
                cb_size = self.size
            else:
                cb_size = 0
            if chunked_transfer and cb_size == 0:
                # For chunked Transfer, we call the cb for every 1MB
                # of data transferred, except when we know size.
                cb_count = (1024 * 1024) / self.BufferSize
            elif num_cb > 1:
                cb_count = int(math.ceil(cb_size / self.BufferSize / (num_cb - 1.0)))
            elif num_cb < 0:
                cb_count = -1
            else:
                cb_count = 0
            i = 0
            cb(data_len, cb_size)

        bytes_togo = size
        if bytes_togo and bytes_togo < self.BufferSize:
            chunk = fp.read(bytes_togo)
        else:
            chunk = fp.read(self.BufferSize)
        if spos is None:
            # read at least something from a non-seekable fp.
            self.read_from_stream = True
        while chunk:
            chunk_len = len(chunk)
            data_len += chunk_len
            if chunked_transfer:
                http_conn.send('%x;\r\n' % chunk_len)
                http_conn.send(chunk)
                http_conn.send('\r\n')
            else:
                http_conn.send(chunk)
            if m:
                m.update(chunk)
            if bytes_togo:
                bytes_togo -= chunk_len
                if bytes_togo <= 0:
                    break
            if cb:
                i += 1
                if i == cb_count or cb_count == -1:
                    cb(data_len, cb_size)
                    i = 0
            if bytes_togo and bytes_togo < self.BufferSize:
                chunk = fp.read(bytes_togo)
            else:
                chunk = fp.read(self.BufferSize)

        self.size = data_len

        if m:
            # Use the chunked trailer for the digest
            hd = m.hexdigest()
            self.md5, self.base64md5 = self.get_md5_from_hexdigest(hd)

        if chunked_transfer:
            http_conn.send('0\r\n')
                # http_conn.send("Content-MD5: %s\r\n" % self.base64md5)
            http_conn.send('\r\n')

        if cb and (cb_count <= 1 or i > 0) and data_len > 0:
            cb(data_len, cb_size)

>       response = http_conn.getresponse()

../../../../.virtualenvs/echo-compactor/lib/python2.7/site-packages/boto/s3/key.py:768: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <httplib.HTTPSConnection instance at 0x10d07e518>, buffering = False

    def getresponse(self, buffering=False):
        "Get the response from the server."

        # if a prior response has been completed, then forget about it.
        if self.__response and self.__response.isclosed():
            self.__response = None

        #
        # if a prior response exists, then it must be completed (otherwise, we
        # cannot read this response's header to determine the connection-close
        # behavior)
        #
        # note: if a prior response existed, but was connection-close, then the
        # socket and response were made independent of this HTTPConnection
        # object since a new request requires that we open a whole new
        # connection
        #
        # this means the prior response had one of two states:
        #   1) will_close: this connection was reset and the prior socket and
        #                  response operate independently
        #   2) persistent: the response was retained and we await its
        #                  isclosed() status to become true.
        #
        if self.__state != _CS_REQ_SENT or self.__response:
            raise ResponseNotReady()

        args = (self.sock,)
        kwds = {"strict":self.strict, "method":self._method}
        if self.debuglevel > 0:
            args += (self.debuglevel,)
        if buffering:
            #only add this keyword if non-default, for compatibility with
            #other response_classes.
            kwds["buffering"] = True;
>       response = self.response_class(*args, **kwds)

/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py:1025: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <boto.connection.HTTPResponse instance at 0x10d0b2b48>
args = (<moto.packages.httpretty.socket object at 0x10d092490>,)
kwargs = {'method': 'PUT', 'strict': 0}

    def __init__(self, *args, **kwargs):
>       httplib.HTTPResponse.__init__(self, *args, **kwargs)

../../../../.virtualenvs/echo-compactor/lib/python2.7/site-packages/boto/connection.py:389: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <boto.connection.HTTPResponse instance at 0x10d0b2b48>
sock = <moto.packages.httpretty.socket object at 0x10d092490>, debuglevel = 0
strict = 0, method = 'PUT', buffering = False

    def __init__(self, sock, debuglevel=0, strict=0, method=None, buffering=False):
        if buffering:
            # The caller won't be using any sock.recv() calls, so buffering
            # is fine and recommended for performance.
            self.fp = sock.makefile('rb')
        else:
            # The buffer size is specified as zero, because the headers of
            # the response are read with readline().  If the reads were
            # buffered the readline() calls could consume some of the
            # response, which make be read via a recv() on the underlying
            # socket.
>           self.fp = sock.makefile('rb', 0)

/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py:346: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <moto.packages.httpretty.socket object at 0x10d092490>, mode = 'rb'
bufsize = 0

    def makefile(self, mode='r', bufsize=-1):
        self._mode = mode
        self._bufsize = bufsize

        if self._entry:
>           self._entry.fill_filekind(self.fd, self._request)

../../../../.virtualenvs/echo-compactor/src/moto/moto/packages/httpretty.py:262: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <[TypeError("__repr__ returned non-string (type NoneType)") raised in repr()] SafeRepr object at 0x10d3d0488>
fk = <moto.packages.httpretty.FakeSockFile instance at 0x10d0930e0>
request = <[TypeError("__repr__ returned non-string (type NoneType)") raised in repr()] SafeRepr object at 0x10d3d0710>

    def fill_filekind(self, fk, request):
        now = datetime.utcnow()

        headers = {
            'status': self.status,
            'date': now.strftime('%a, %d %b %Y %H:%M:%S GMT'),
            'server': 'Python/HTTPretty',
            'connection': 'close',
        }

        if self.forcing_headers:
            headers = self.forcing_headers

        if self.dynamic_response:
            req_info, req_body, req_headers = request
            response = self.body(req_info, self.method, req_body, req_headers)
            if isinstance(response, basestring):
                body = response
            else:
>               body, new_headers = response
E               TypeError: 'NoneType' object is not iterable

../../../../.virtualenvs/echo-compactor/src/moto/moto/packages/httpretty.py:562: TypeError
===================== 8 tests deselected by '-ktest_touch' =====================
==================== 1 failed, 8 deselected in 0.83 seconds ====================
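
A minimal repro distilled from the traceback (bucket and key names are illustrative):

import boto
from boto.s3.key import Key
from moto import mock_s3

@mock_s3
def test_touch():
    conn = boto.connect_s3()
    bucket = conn.create_bucket('compactor-test')
    key = Key(bucket, 'foo')
    key.set_contents_from_string('')  # blows up inside moto's bundled httpretty

test_touch()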

Large int values as hash_key can not be queried

I have very large int values as hash keys, and moto fails to find them using query. I don't know if this is a moto or a boto error; my error is below. Note that at line 46, PK_MID is 444881095079499277: I just inserted it, it shows up if I do a scan and print, and a manual comparison of the value matches.

2) ERROR: test_postSimpleMessage (test_CSMessage.TestPostMessage)

   Traceback (most recent call last):
    /usr/local/Cellar/python/2.7.4/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/moto-0.1.4-py2.7.egg/moto/core/models.py line 47 in wrapper
      result = func(*args, **kwargs)
    test_CSMessage.py line 46 in test_postSimpleMessage
  mitem = table.get_item(hash_key=i['PK_MID'])
    /usr/local/lib/python2.7/site-packages/boto/dynamodb/table.py line 287 in get_item
  item_class)
    /usr/local/lib/python2.7/site-packages/boto/dynamodb/layer2.py line 487 in get_item
  object_hook=self.dynamizer.decode)
    /usr/local/lib/python2.7/site-packages/boto/dynamodb/layer1.py line 307 in get_item
  object_hook=object_hook)
    /usr/local/lib/python2.7/site-packages/boto/dynamodb/layer1.py line 118 in make_request
  retry_handler=self._retry_handler)
    /usr/local/lib/python2.7/site-packages/boto/connection.py line 845 in _mexe
  status = retry_handler(response, i, next_sleep)
    /usr/local/lib/python2.7/site-packages/boto/dynamodb/layer1.py line 158 in _retry_handler
  data)
   DynamoDBResponseError: DynamoDBResponseError: 400 Bad Request
   {u'__type': u'com.amazonaws.dynamodb.v20111205#ResourceNotFoundException'}
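
A self-contained version of what my test does, using boto's layer2 API (table and attribute names are illustrative; the argument names follow the boto docs):

import boto
from moto import mock_dynamodb

@mock_dynamodb
def test_large_int_hash_key():
    conn = boto.connect_dynamodb()
    schema = conn.create_schema(hash_key_name='PK_MID', hash_key_proto_value=1)
    table = conn.create_table(name='messages', schema=schema,
                              read_units=10, write_units=10)
    item = table.new_item(hash_key=444881095079499277, attrs={'body': 'hello'})
    item.put()
    # A scan shows the item, but get_item fails with ResourceNotFoundException:
    table.get_item(hash_key=444881095079499277)

test_large_int_hash_key()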

Monkeypatch as an argument to test function throws TypeError

import pytest
from moto import mock_dynamodb

@mock_dynamodb
def test_query_dynamodb_test_exception_responseerror(monkeypatch):
    # using monkeypatch from pytest
    pass

When run with pytest:
==================================================================================== FAILURES =====================================================================================
________________________________________________________________ test_query_dynamodb_test_exception_responseerror _________________________________________________________________

args = (), kwargs = {}

def wrapper(*args, **kwargs):
    with self:
        result = func(*args, **kwargs)

E TypeError: test_query_dynamodb_test_exception_responseerror() takes exactly 1 argument (0 given)
/opt/pkg-cache/packages/Moto/Moto-0.2.7.3.0/RHEL5_64/DEV.STD.PTHREAD/build/lib/python2.7/site-packages/moto/core/models.py:47: TypeError

The function 'test_query_dynamodb_test_exception_responseerror' above runs fine if it does not take the monkeypatch argument.

httpretty version error

Source in ./build/httpretty has version 0.6.0 that conflicts with httpretty==0.6.0a (from moto==0.2.4->-r requirements.txt (line 18))

DynamoDB's table.has_item('item_del_x', 100) throws ResourceNotFoundException

According to the boto docs, table.has_item should return True/False, but a DynamoDBResponseError is raised instead.

Here is the full stack:

Traceback (most recent call last):
  File "/home/dilshod/source-code/somecode/Node/site-packages/somelib/tests/dynamodb-test.py", line 270, in test_item_delete
    self.assertFalse(table1.has_item('item_del_x', 100))
  File "/home/dilshod/source-code/somecode/Node/site-packages/somelib/dynamodb.py", line 250, in has_item
    return self.btable.has_item(hash_key, range_key)
  File "/home/dilshod/pyenv/somecode-node/local/lib/python2.7/site-packages/boto/dynamodb/table.py", line 322, in has_item
    consistent_read=consistent_read)
  File "/home/dilshod/pyenv/somecode-node/local/lib/python2.7/site-packages/boto/dynamodb/table.py", line 287, in get_item
    item_class)
  File "/home/dilshod/pyenv/somecode-node/local/lib/python2.7/site-packages/boto/dynamodb/layer2.py", line 487, in get_item
    object_hook=self.dynamizer.decode)
  File "/home/dilshod/pyenv/somecode-node/local/lib/python2.7/site-packages/boto/dynamodb/layer1.py", line 307, in get_item
    object_hook=object_hook)
  File "/home/dilshod/pyenv/somecode-node/local/lib/python2.7/site-packages/boto/dynamodb/layer1.py", line 118, in make_request
    retry_handler=self._retry_handler)
  File "/home/dilshod/pyenv/somecode-node/local/lib/python2.7/site-packages/boto/connection.py", line 845, in _mexe
    status = retry_handler(response, i, next_sleep)
  File "/home/dilshod/pyenv/somecode-node/local/lib/python2.7/site-packages/boto/dynamodb/layer1.py", line 158, in _retry_handler
    data)
DynamoDBResponseError: DynamoDBResponseError: 400 Bad Request
{u'__type': u'com.amazonaws.dynamodb.v20111205#ResourceNotFoundException'}

SQS reads should remove the message from the queue

An SQS message read should move the message to a hidden 'invisible' queue. The message should then only be removed permanently when a delete is called.

It's not immediately clear if any sort of invisibility timeout should actually be implemented.
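
A minimal sketch of the requested behavior, with an optional visibility timeout (names are illustrative, not moto's actual model): a read hides the message until it is deleted or the timeout expires.

import time

class FakeQueue(object):
    def __init__(self, visibility_timeout=30):
        self.messages = []
        self.invisible = {}  # message -> time at which it becomes visible again
        self.visibility_timeout = visibility_timeout

    def read(self):
        now = time.time()
        # Return timed-out messages to the visible pool.
        for msg, until in list(self.invisible.items()):
            if until <= now:
                del self.invisible[msg]
                self.messages.append(msg)
        if not self.messages:
            return None
        msg = self.messages.pop(0)
        self.invisible[msg] = now + self.visibility_timeout
        return msg

    def delete(self, msg):
        # Only an explicit delete removes a read message for good.
        self.invisible.pop(msg, None)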

AttributeError: 'DynamoHandler' object has no attribute 'update_item'

Getting the following error when executing update_item on an item.

ERROR: test_item_attribute_operations (dynamodb-test.AWSDynamoDBTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/dilshod/source-code/somecode/star/site-packages/somelib/tests/dynamodb-test.py", line 222, in test_item_attribute_operations
    itemx.save()
  File "/home/dilshod/pyenv/somecode-star/local/lib/python2.7/site-packages/boto/dynamodb/item.py", line 141, in save
    return_values)
  File "/home/dilshod/source-code/somecode/star/site-packages/somelib/dynamodb.py", line 348, in update_item
    return_values)
  File "/home/dilshod/source-code/somecode/star/site-packages/somelib/dynamodb.py", line 333, in _item_action
    action(item, *args)
  File "/home/dilshod/pyenv/somecode-star/local/lib/python2.7/site-packages/boto/dynamodb/layer2.py", line 590, in update_item
    object_hook=self.dynamizer.decode)
  File "/home/dilshod/pyenv/somecode-star/local/lib/python2.7/site-packages/boto/dynamodb/layer1.py", line 424, in update_item
    object_hook=object_hook)
  File "/home/dilshod/pyenv/somecode-star/local/lib/python2.7/site-packages/boto/dynamodb/layer1.py", line 118, in make_request
    retry_handler=self._retry_handler)
  File "/home/dilshod/pyenv/somecode-star/local/lib/python2.7/site-packages/boto/connection.py", line 836, in _mexe
    response = connection.getresponse()
  File "/usr/lib/python2.7/httplib.py", line 1032, in getresponse
    response = self.response_class(*args, **kwds)
  File "/home/dilshod/pyenv/somecode-star/local/lib/python2.7/site-packages/boto/connection.py", line 389, in __init__
    httplib.HTTPResponse.__init__(self, *args, **kwargs)
  File "/usr/lib/python2.7/httplib.py", line 346, in __init__
    self.fp = sock.makefile('rb', 0)
  File "/home/dilshod/pyenv/somecode-star/local/lib/python2.7/site-packages/moto/packages/httpretty.py", line 262, in makefile
    self._entry.fill_filekind(self.fd, self._request)
  File "/home/dilshod/pyenv/somecode-star/local/lib/python2.7/site-packages/moto/packages/httpretty.py", line 558, in fill_filekind
    response = self.body(req_info, self.method, req_body, req_headers)
  File "/home/dilshod/pyenv/somecode-star/local/lib/python2.7/site-packages/moto/dynamodb/responses.py", line 300, in handler
    return DynamoHandler(uri, method, body, headers_to_dict(headers)).dispatch()
  File "/home/dilshod/pyenv/somecode-star/local/lib/python2.7/site-packages/moto/dynamodb/responses.py", line 59, in dispatch
    return getattr(self, endpoint)(self.uri, self.method, self.body, self.headers)
AttributeError: 'DynamoHandler' object has no attribute 'update_item'
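
Judging from the dispatch call at responses.py line 59 in the traceback, handlers are plain methods named after the API action, so the fix is presumably to add an update_item method. A hypothetical skeleton (the actual parsing and response rendering are omitted):

class DynamoHandler(object):
    # ... existing action handlers (get_item, put_item, ...) ...

    def update_item(self, uri, method, body, headers):
        # Hypothetical: parse AttributeUpdates from the JSON body, apply
        # them to the stored item, and render the UpdateItem response.
        raise NotImplementedError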

moto hangs if there is a dot in a bucket name

The following code demonstrates that moto will hang if there is a dot in the bucket name.

import boto
from boto.s3.key import Key
from moto import mock_s3

#BUG: hangs if there is a dot in the bucket name
BUCKET = 'firstname.lastname'

@mock_s3
def main():

    conn = boto.connect_s3()
    conn.create_bucket(BUCKET)
    bucket = conn.get_bucket(BUCKET)


    k = Key(bucket, 'somekey')
    k.set_contents_from_string('somedata')


if __name__ == '__main__':
    main()

KeyError is raised instead of receiving None when calling get_queue

This is a test script:

import boto.sqs
from moto import mock_sqs

def test_get_queue():
    region = 'us-west-2'
    conn = boto.sqs.connect_to_region(region)

    nonexisting = conn.get_queue('nonexisting')

    print nonexisting


print 'Boto prints this:'
test_get_queue()

print 'Moto throws this exception:'
with mock_sqs():
    test_get_queue()


Here is the result:

Boto prints this:
None
Moto throws this exception:
Traceback (most recent call last):
  File "/home/dilshod/Desktop/moto_get_queue.py", line 21, in <module>
    test_get_queue()
  File "/home/dilshod/Desktop/moto_get_queue.py", line 11, in test_get_queue
    nonexisting = conn.get_queue('nonexisting')
  File "/home/dilshod/pyenv/some-code/local/lib/python2.7/site-packages/boto/sqs/connection.py", line 351, in get_queue
    return self.get_object('GetQueueUrl', params, Queue)
  File "/home/dilshod/pyenv/some-code/local/lib/python2.7/site-packages/boto/connection.py", line 1053, in get_object
    response = self.make_request(action, params, path, verb)
  File "/home/dilshod/pyenv/some-code/local/lib/python2.7/site-packages/boto/connection.py", line 979, in make_request
    return self._mexe(http_request)
  File "/home/dilshod/pyenv/some-code/local/lib/python2.7/site-packages/boto/connection.py", line 841, in _mexe
    response = connection.getresponse()
  File "/usr/lib/python2.7/httplib.py", line 1032, in getresponse
    response = self.response_class(*args, **kwds)
  File "/home/dilshod/pyenv/some-code/local/lib/python2.7/site-packages/boto/connection.py", line 389, in __init__
    httplib.HTTPResponse.__init__(self, *args, **kwargs)
  File "/usr/lib/python2.7/httplib.py", line 346, in __init__
    self.fp = sock.makefile('rb', 0)
  File "/home/dilshod/pyenv/some-code/local/lib/python2.7/site-packages/moto-0.1.4-py2.7.egg/moto/packages/httpretty.py", line 262, in makefile
    self._entry.fill_filekind(self.fd, self._request)
  File "/home/dilshod/pyenv/some-code/local/lib/python2.7/site-packages/moto-0.1.4-py2.7.egg/moto/packages/httpretty.py", line 558, in fill_filekind
    response = self.body(req_info, self.method, req_body, req_headers)
  File "/home/dilshod/pyenv/some-code/local/lib/python2.7/site-packages/moto-0.1.4-py2.7.egg/moto/core/responses.py", line 25, in dispatch
    return method()
  File "/home/dilshod/pyenv/some-code/local/lib/python2.7/site-packages/moto-0.1.4-py2.7.egg/moto/sqs/responses.py", line 22, in get_queue_url
    queue = sqs_backend.get_queue(queue_name)
  File "/home/dilshod/pyenv/some-code/local/lib/python2.7/site-packages/moto-0.1.4-py2.7.egg/moto/sqs/models.py", line 80, in get_queue
    return self.queues[queue_name]
KeyError: 'nonexisting'
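
The last frame is moto/sqs/models.py returning self.queues[queue_name] directly. A sketch of a more forgiving lookup (the response layer would still have to translate None into the AWS error that makes boto return None):

def get_queue(self, queue_name):
    # dict.get() returns None for unknown queues instead of raising KeyError.
    return self.queues.get(queue_name)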

ValueError: Not a Request-Line

If you run the following code with size = 100000, you'll get a "ValueError: Not a Request-Line" error. With size = 100, however, there is no error.

import boto
import moto
import tempfile

mock = moto.mock_s3()
mock.start()
conn = boto.connect_s3()
BUCKET = 'test_bucket'
bucket = conn.create_bucket(BUCKET)
# 1MB is max size of object 
size = 100000
contents = 'abcdefghij' * size
f = tempfile.NamedTemporaryFile(delete=False)
f.write(contents)
f.close()
f = open(f.name, 'rb')
key = boto.s3.key.Key(bucket, f.name)
key.set_contents_from_file(f)
mock.stop()

The tail of the resulting traceback:

  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/boto/s3/key.py", line 1121, in set_contents_from_file
    chunked_transfer=chunked_transfer, size=size)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/boto/s3/key.py", line 827, in send_file
    query_args=query_args)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/boto/s3/connection.py", line 490, in make_request
    override_num_retries=override_num_retries)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/boto/connection.py", line 932, in make_request
    return self._mexe(http_request, sender, override_num_retries)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/boto/connection.py", line 832, in _mexe
    request.body, request.headers)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/boto/s3/key.py", line 736, in sender
    http_conn.send(chunk)
  File "/usr/lib/python2.7/httplib.py", line 794, in send
    self.sock.sendall(data)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/moto/packages/httpretty.py", line 300, in sendall
    method, path, version = parse_requestline(headers)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/moto/packages/httpretty.py", line 151, in parse_requestline
    raise ValueError('Not a Request-Line')
ValueError: Not a Request-Line
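
For what it's worth, the error seems to come from httpretty's fake socket trying to parse data sent over the connection as an HTTP request line; once boto starts streaming the larger file in chunks, the raw body chunks fail that parse. A minimal sketch of the failing call (the import path is taken from the traceback above):

from moto.packages.httpretty import parse_requestline

parse_requestline('PUT /test_bucket/somefile HTTP/1.1')  # parses fine
parse_requestline('abcdefghij' * 10)  # raises ValueError: Not a Request-Line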

Multiple region support

We'd like to support multiple regions. For example, an SNS instance could pass messages to SQS instances located in different regions.

Key.last_modified is None

On all keys I create, last_modified appears to be None, instead of the dates boto exposes.

Btw, from what I've seen:

For listings, the format is ISO 8601 without a timezone
For keys, the format is an HTTP date (RFC 822) with a GMT timezone
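
To make that concrete, here are hypothetical values illustrating the two formats (the timestamps themselves are made up):

last_modified_in_listing = '2013-10-25T15:59:22.000'     # ISO 8601, no timezone
last_modified_on_key = 'Fri, 25 Oct 2013 15:59:22 GMT'   # HTTP Date (RFC 822), GMT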

404 response on PUT (create_bucket) request to stand-alone S3 server

Upon launching a moto server as follows:

moto_server s3 --port 23222

I attempt to create a bucket:

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection('123', 'abc', is_secure=False, port=23222,
    host='localhost', calling_format=OrdinaryCallingFormat())
conn.create_bucket('test')

Only to get the following response:

boto.exception.S3ResponseError: S3ResponseError: 404 NOT FOUND
None

The server output:

127.0.0.1 - - [25/Oct/2013 15:59:22] "PUT /test/ HTTP/1.1" 404 -

As far as I know, this PUT request should not respond with a 404.

In moto/s3/responses.py, the section of the function _bucket_response that handles method == 'PUT' requests seems to not return a status code as all other blocks do in this function.

The PUT block (notice no status code):

elif method == 'PUT':
    ...
    return template.render(bucket=new_bucket)

For another block (DELETE; notice the 204 code):

elif method == 'DELETE':
    ...
    elif removed_bucket:
        ...
        return 204, headers, template.render(bucket=removed_bucket)

I was unable to test whether changing the return statement in PUT to the following works, but it's my best guess at a fix (note that boto's create_bucket expects a 200, and that the variable should be new_bucket, not the removed_bucket copied over from the DELETE block):

elif method == 'PUT':
    ...
    return 200, headers, template.render(bucket=new_bucket)

Perhaps someone more knowledgeable about this code can see if that change works or, if not, figure out why this is happening.

Python 2.6 support

I'm so happy that this project exists. Hooray for actually testing your cloud interfaces!

I finally got a chance to jump in and use it today, but after running make test I quickly realized Python 2.6 isn't supported.

The first problem I ran into was that Python 2.6 doesn't have collections.OrderedDict. That's pretty easy to fix, though, with the backported library.
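
A minimal sketch of the usual compatibility shim, assuming the backported ordereddict package from PyPI is added as a dependency for 2.6:

try:
    from collections import OrderedDict
except ImportError:  # Python 2.6
    from ordereddict import OrderedDict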

The next issue seems to be the prevalent use of automatic field numbering with str.format, for example '{}foo'.format('baz') instead of '{0}foo'.format('baz').
There does seem to be a 2.6 backport of the automatic field numbering functionality, but it might be easier to just add the numbers.

After that, I just see lots of test failures/errors that don't make a lot of sense to me without diving in much further.

Do you have any interest in supporting Python 2.6 for the project? If so, I might be able to spend a bit of time getting the obvious incompatibilities ironed out.

Thanks

-Wes

SQS not accepting messages in server

Here is the code I'm running to try and create a message.

import boto
import boto.sqs
import boto.connection

import json

boto.config.add_section('Boto')
boto.config.set('Boto', 'is_secure', 'false')
boto.sqs.regions = lambda: [boto.sqs.SQSRegionInfo(name='us-east-1', endpoint='127.0.0.1')]
boto.connection.PORTS_BY_SECURITY = {False: 5001}
conn = boto.sqs.connect_to_region('us-east-1')
queue = conn.create_queue('blah')  # 'blah' matches the queue name in the POST below
message = boto.sqs.message.Message()
message.set_body(json.dumps({
    'id': 'LD1234',
    'url': 'http://yipit.co',
    'business': 'http://business.url',
}))
queue.write(message)

moto stack trace:

127.0.0.1 - - [20/Dec/2013 16:30:33] "POST /123456789012/blah HTTP/1.1" 500 -
Error on request:
Traceback (most recent call last):
  File "/Users/zach/.virtualenvs/yipit_ph/lib/python2.7/site-packages/werkzeug/serving.py", line 177, in run_wsgi
    execute(self.server.app)
  File "/Users/zach/.virtualenvs/yipit_ph/lib/python2.7/site-packages/werkzeug/serving.py", line 165, in execute
    application_iter = app(environ, start_response)
  File "/Users/zach/.virtualenvs/yipit_ph/lib/python2.7/site-packages/flask/app.py", line 1836, in __call__
    return self.wsgi_app(environ, start_response)
  File "/Users/zach/.virtualenvs/yipit_ph/lib/python2.7/site-packages/flask/app.py", line 1820, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "/Users/zach/.virtualenvs/yipit_ph/lib/python2.7/site-packages/flask/app.py", line 1403, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/Users/zach/.virtualenvs/yipit_ph/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "/Users/zach/.virtualenvs/yipit_ph/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/Users/zach/.virtualenvs/yipit_ph/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/Users/zach/.virtualenvs/yipit_ph/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/Users/zach/.virtualenvs/yipit_ph/lib/python2.7/site-packages/flask/app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/Users/zach/.virtualenvs/yipit_ph/lib/python2.7/site-packages/moto/core/utils.py", line 68, in __call__
    result = self.callback(request, request.url, headers)
  File "/Users/zach/.virtualenvs/yipit_ph/lib/python2.7/site-packages/moto/core/responses.py", line 32, in dispatch
    return self.call_action()
  File "/Users/zach/.virtualenvs/yipit_ph/lib/python2.7/site-packages/moto/core/responses.py", line 49, in call_action
    raise NotImplementedError("The {0} action has not been implemented".format(action))
NotImplementedError: The  action has not been implemented

This assumes the moto server is running on port 5001.

moto==0.2.11
boto==2.19.0

key names in bucket listing are truncated

According to this documentation, the following code should print topdirectory/secondlevel/thirdlevel/leaf instead of topdirectory/secondlevel:

import boto
import moto

mock = moto.mock_s3()
mock.start()

conn = boto.connect_s3()

BUCKET = 'test_bucket_name'

bucket = conn.create_bucket(BUCKET)


key = boto.s3.key.Key(bucket, 'topdirectory/secondlevel/thirdlevel/leaf')
key.set_contents_from_string('value1')


keys = bucket.list('topdirectory/')

for key in keys:
    print key.name


mock.stop()

Implement remaining camelcase_attributes in Queue class

Currently only two attributes are implemented: 'VisibilityTimeout' and 'ApproximateNumberOfMessages'.

The remaining attributes with defaults:

'CreatedTimestamp': now,
'LastModifiedTimestamp': now,
'VisibilityTimeout': 30,
'DelaySeconds': 0,
'MaximumMessageSize': 64 << 10,
'MessageRetentionPeriod': 86400 * 4,  # four days
'ApproximateNumberOfMessages': 0,
'ApproximateNumberOfMessagesNotVisible': 0,
'ApproximateNumberOfMessagesDelayed': 0,
'ReceiveMessageWaitTimeSeconds': 0, 
'QueueArn': 'arn:aws:sqs:us-west-2:271670977810:testqueue1'
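
A minimal sketch of one way to back the rest of these, assuming the existing convention of underscored instance attributes mirrored by a camelcase_attributes list (the account id in the ARN is illustrative):

import time

class Queue(object):
    camelcase_attributes = [
        'ApproximateNumberOfMessages',
        'ApproximateNumberOfMessagesDelayed',
        'ApproximateNumberOfMessagesNotVisible',
        'CreatedTimestamp',
        'DelaySeconds',
        'LastModifiedTimestamp',
        'MaximumMessageSize',
        'MessageRetentionPeriod',
        'QueueArn',
        'ReceiveMessageWaitTimeSeconds',
        'VisibilityTimeout',
    ]

    def __init__(self, name, visibility_timeout=30):
        now = time.time()
        self.name = name
        self.visibility_timeout = visibility_timeout
        self.created_timestamp = now
        self.last_modified_timestamp = now
        self.delay_seconds = 0
        self.maximum_message_size = 64 << 10
        self.message_retention_period = 86400 * 4  # four days
        self.approximate_number_of_messages = 0
        self.approximate_number_of_messages_not_visible = 0
        self.approximate_number_of_messages_delayed = 0
        self.receive_message_wait_time_seconds = 0
        self.queue_arn = 'arn:aws:sqs:us-west-2:123456789012:%s' % name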

Tag Support for Subnets

Most of the sample AWS responses have empty tags; it would be nice to bolster these by adding some tags and updating the templates to include them. Normally moto returns <tagSet/>, as that is the default, but the tags can be properly mocked out with something similar to:

<tagSet>
    <item>
        <key>Tag1</key>
        <value>Foo</value>
    </item>
    <item>
        <key>MyName</key>
        <value>Bar</value>
    </item>
</tagSet>

I will see what I can do to add this to a PR.

S3: Key.size is None unless the key's contents have been gotten

I ran the following tests (they differ by a single line, the k2.get_contents_as_string() call):

@mock_s3
def test_key_size(self):
    conn = boto.connect_s3('the_key', 'the_secret')
    b = conn.create_bucket('test')
    k = Key(b, 'hi')
    k.set_contents_from_string('hello')
    k2 = Key(b, 'hi')
    self.assertEqual(k2.size, 5)

@mock_s3
def test_key_size_2(self):
    conn = boto.connect_s3('the_key', 'the_secret')
    b = conn.create_bucket('test')
    k = Key(b, 'hi')
    k.set_contents_from_string('hello')
    k2 = Key(b, 'hi')
    k2.get_contents_as_string()
    self.assertEqual(k2.size, 5)

The first test fails because k2.size is None. The second test passes. As this shows, the Key.size field is only filled out after getting the contents of the key.

I am unsure whether this is related somehow to #59.

key.size only correct after reading

When getting the size of a key that was just fetched from a bucket, the size reports 0 until, as far as I can tell, you read the key's data; after that, size reports correctly. Example below.

boto==2.14.0

    b = conn.get_bucket("mybucket")
    k = b.get_key("steve")

    assert k.size == 0
    k.get_contents_as_string()
    assert k.size == 10
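
For context, boto populates Key.size from the Content-Length header of the response, so a likely fix for both of the reports above is for the mock's GET/HEAD key responses to always include that header. A hedged, self-contained sketch (StoredKey is a hypothetical stand-in for the mock's key model):

class StoredKey(object):
    def __init__(self, value):
        self.value = value

def response_headers(key):
    # boto reads Key.size from Content-Length, so always send it
    return {'Content-Length': str(len(key.value))}

print response_headers(StoredKey('0123456789'))  # {'Content-Length': '10'}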

SQS doesn't round trip messages

I was playing around with the SQS fakes and ran into what I believe is a base64 encoding issue. I'm writing a message to SQS and then trying to read back the same message:

Real SQS:

>>> import boto
>>> sqs = boto.connect_sqs()
>>> q = sqs.create_queue('testqueue')
>>> q.write(q.new_message('foo bar baz'))
<boto.sqs.message.Message instance at 0x1023f2368>
>>> message = q.read(1)
>>> print(message.get_body())
foo bar baz

Using moto:

>>> import moto
>>> patch = moto.mock_sqs()
>>> patch.start()
>>> import boto
>>> sqs = boto.connect_sqs()
>>> q = sqs.create_queue('testqueue')
>>> q.write(q.new_message('foo bar baz'))
<boto.sqs.message.Message instance at 0x10f79c680>
>>> message = q.read(1)
>>> print(message.get_body())
Zm9vIGJhciBiYXo=
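
The returned body is exactly the base64 encoding of the original message, which points at a missing decode somewhere between the write and read paths:

import base64

print base64.b64encode('foo bar baz')       # Zm9vIGJhciBiYXo=
print base64.b64decode('Zm9vIGJhciBiYXo=')  # foo bar baz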

Awesome project btw!

EMR support

Any plan to support EMR in the near future?

dynamo query with hash key returns wrong results

When executing a query in DynamoDB, the wrong results are returned.

Steps to reproduce: add an extra insert to the default query_test in the unit tests:

@mock_dynamodb
def test_query():
    conn = boto.connect_dynamodb()
    table = create_table(conn)

    item_data = {
        'Body': 'http://url_to_lolcat.gif',
        'SentBy': 'User A',
        'ReceivedTime': '12/9/2011 11:36:03 PM',
    }
    # the extra item
    item = table.new_item(
        hash_key='the-key1',
        range_key='4561',
        attrs=item_data,
    )
    item.put()

    item = table.new_item(
        hash_key='the-key',
        range_key='456',
        attrs=item_data,
    )
    item.put()

    item = table.new_item(
        hash_key='the-key',
        range_key='123',
        attrs=item_data,
    )
    item.put()

    item = table.new_item(
        hash_key='the-key',
        range_key='789',
        attrs=item_data,
    )
    item.put()

    results = table.query(hash_key='the-key', range_key_condition=condition.GT('1'))
    results.response['Items'].should.have.length_of(3)

This new test will fail: the library returns 4 results instead of 3.
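
To spell out the expected semantics with a self-contained sketch: the hash key should be matched first, and the range condition applied only within that hash key's items:

items = [
    ('the-key1', '4561'),
    ('the-key', '456'),
    ('the-key', '123'),
    ('the-key', '789'),
]
hash_matches = [(h, r) for (h, r) in items if h == 'the-key']
results = [(h, r) for (h, r) in hash_matches if r > '1']
assert len(results) == 3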

boto create_table hangs if connected with boto.dynamodb.connect_to_region

The following hangs if the connection is created with boto.dynamodb.connect_to_region, but it is totally fine if connected with boto.connect_dynamodb.

import boto.dynamodb
from moto import mock_dynamodb

mock = mock_dynamodb()
mock.start()

# this will work if connected with boto.connect_dynamodb()
dynamodb = boto.dynamodb.connect_to_region('us-west-2')

schema = dynamodb.create_schema('column1', str(), 'column2', int())

dynamodb.create_table('table1', schema, 200, 200)        
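
A hedged guess at the cause: boto.connect_dynamodb() talks to the default DynamoDB endpoint, while connect_to_region('us-west-2') talks to dynamodb.us-west-2.amazonaws.com, so if the mock only registers the default host, region-specific requests are never intercepted and boto keeps retrying. A catch-all pattern along these lines might cover both (illustrative, not moto's actual registration code):

import re

url_pattern = r'https?://dynamodb\.(.+\.)?amazonaws\.com'
assert re.match(url_pattern, 'https://dynamodb.us-west-2.amazonaws.com')
assert re.match(url_pattern, 'https://dynamodb.us-east-1.amazonaws.com')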

requesting to implement get_queue_url

Getting this error when attempting to call get_queue: NotImplementedError: The get_queue_url action has not been implemented

Here is the error stack:

Traceback (most recent call last):
  File "/home/dilshod/source-code/Apollo/Node/site-packages/avalib/tests/sqs-test.py", line 44, in test_get_queue
    queue1 = self.sqs.get_queue('queue1')
  File "/home/dilshod/source-code/Apollo/Node/site-packages/avalib/sqs.py", line 152, in get_queue
    bqueue = self.connection.get_queue(bname)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/boto/sqs/connection.py", line 355, in get_queue
    return self.get_object('GetQueueUrl', params, Queue)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/boto/connection.py", line 1048, in get_object
    response = self.make_request(action, params, path, verb)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/boto/connection.py", line 974, in make_request
    return self._mexe(http_request)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/boto/connection.py", line 836, in _mexe
    response = connection.getresponse()
  File "/usr/lib/python2.7/httplib.py", line 1032, in getresponse
    response = self.response_class(*args, **kwds)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/boto/connection.py", line 389, in __init__
    httplib.HTTPResponse.__init__(self, *args, **kwargs)
  File "/usr/lib/python2.7/httplib.py", line 346, in __init__
    self.fp = sock.makefile('rb', 0)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/moto/packages/httpretty.py", line 262, in makefile
    self._entry.fill_filekind(self.fd, self._request)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/moto/packages/httpretty.py", line 558, in fill_filekind
    response = self.body(req_info, self.method, req_body, req_headers)
  File "/home/dilshod/pyenv/apollo-node/local/lib/python2.7/site-packages/moto/core/responses.py", line 26, in dispatch
    raise NotImplementedError("The {} action has not been implemented".format(action))
NotImplementedError: The get_queue_url action has not been implemented
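
A minimal sketch of what the handler might look like, following the dispatch style visible in the traceback (the querystring access and the response template are assumptions, not moto's actual code):

from jinja2 import Template

GET_QUEUE_URL_RESPONSE = """<GetQueueUrlResponse>
    <GetQueueUrlResult>
        <QueueUrl>http://sqs.us-east-1.amazonaws.com/123456789012/{{ queue.name }}</QueueUrl>
    </GetQueueUrlResult>
</GetQueueUrlResponse>"""

def get_queue_url(self):
    queue_name = self.querystring.get('QueueName')[0]
    queue = sqs_backend.get_queue(queue_name)  # sqs_backend: the module-level backend, as in the other handlers
    template = Template(GET_QUEUE_URL_RESPONSE)
    return template.render(queue=queue)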

moto_server stack traces when doing a put without a bucket name

This is using moto 0.2.9. I started moto like this:
moto_server -p 25001 s3

I then did this curl request:
curl -X PUT http://localhost:25001/mybucket

And watched the server have this stacktrace:

127.0.0.1 - - [25/Oct/2013 19:49:19] "PUT /mybucket HTTP/1.1" 500 -
Error on request:
Traceback (most recent call last):
  File "/aux0/brock/moto/python/lib/python2.7/site-packages/werkzeug/serving.py", line 177, in run_wsgi
    execute(self.server.app)
  File "/aux0/brock/moto/python/lib/python2.7/site-packages/werkzeug/serving.py", line 165, in execute
    application_iter = app(environ, start_response)
  File "/aux0/brock/moto/python/lib/python2.7/site-packages/flask/app.py", line 1836, in __call__
    return self.wsgi_app(environ, start_response)
  File "/aux0/brock/moto/python/lib/python2.7/site-packages/flask/app.py", line 1820, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "/aux0/brock/moto/python/lib/python2.7/site-packages/flask/app.py", line 1403, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/aux0/brock/moto/python/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "/aux0/brock/moto/python/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/aux0/brock/moto/python/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/aux0/brock/moto/python/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/aux0/brock/moto/python/lib/python2.7/site-packages/flask/app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/aux0/brock/moto/python/lib/python2.7/site-packages/moto/core/utils.py", line 68, in __call__
    result = self.callback(request, request.url, headers)
  File "/aux0/brock/moto/python/lib/python2.7/site-packages/moto/s3/responses.py", line 102, in key_response
    response = _key_response(request, full_url, headers)
  File "/aux0/brock/moto/python/lib/python2.7/site-packages/moto/s3/responses.py", line 147, in _key_response
    new_key = s3_backend.set_key(bucket_name, key_name, body)
  File "/aux0/brock/moto/python/lib/python2.7/site-packages/moto/s3/models.py", line 90, in set_key
    bucket = self.buckets[bucket_name]
KeyError: None

directory listing with `/` delimiter is not properly listed

I ran one additional test and it appears this case still fails (prefix + 'x'):

real s3 using boto:

[u'toplevel/x/key', u'toplevel/x/y/key', u'toplevel/x/y/z/key']
[u'toplevel/x/']

mocked s3 using moto

[u'toplevel/x/key', u'toplevel/x/y/key', u'toplevel/x/y/z/key']
[u'toplevel/x/key', u'toplevel/xy/']

The test script:

import boto
from boto.s3.key import Key
from moto import mock_s3

BUCKET = 'dilshodtest1bucket'
prefix = 'toplevel/'

def main():

    conn = boto.connect_s3()
    conn.create_bucket(BUCKET)
    bucket = conn.get_bucket(BUCKET)

    def store(name):
        k = Key(bucket, prefix + name)
        k.set_contents_from_string('somedata')

    names = ['x/key', 'y.key1', 'y.key2', 'y.key3', 'x/y/key', 'x/y/z/key']

    for name in names:
        store(name)

    delimiter = None

    keys = [x.name for x in bucket.list(prefix+'x', delimiter)]

    print keys

    delimiter = '/'
    keys = [x.name for x in bucket.list(prefix+'x', delimiter)]

    print keys


if __name__ == '__main__':

    print 'real s3'

    main()

    print 'mocked s3'

    mock = mock_s3()
    mock.start()
    main()
    mock.stop()
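
For reference, a self-contained sketch of the grouping real S3 appears to apply in the failing case (prefix 'toplevel/x' with delimiter '/'): everything after the prefix up to and including the first delimiter collapses into a single common prefix:

names = ['toplevel/x/key', 'toplevel/y.key1', 'toplevel/y.key2',
         'toplevel/y.key3', 'toplevel/x/y/key', 'toplevel/x/y/z/key']
prefix, delimiter = 'toplevel/x', '/'

keys, common_prefixes = [], set()
for name in names:
    if not name.startswith(prefix):
        continue
    rest = name[len(prefix):]
    if delimiter in rest:
        common_prefixes.add(prefix + rest.split(delimiter)[0] + delimiter)
    else:
        keys.append(name)

print keys, sorted(common_prefixes)  # [] ['toplevel/x/'] -- matches real s3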

Not able to retrieve the data after calling the save method

Test method:

class TestS3Storage(base.TestCase):

    @mock_s3
    def test_simple(self):
        print "\n create s3-moto conn test_simple"
        conn = boto.connect_s3(self._cfg.s3_access_key, self._cfg.s3_secret_key)
        conn.create_bucket(self._cfg.s3_bucket)

        filename = self.gen_random_string()
        content = self.gen_random_string()
        # test exists
        self.assertFalse(self._storage.exists(filename))
        self._storage.put_content(filename, content)
        print "----", conn.get_bucket(self._cfg.s3_bucket).get_key(filename), filename
        print "****", self._storage._s3_conn.get_bucket(self._cfg.s3_bucket).get_key(filename), filename
        self.assertTrue(self._storage.exists(filename))
        # test read / write
        ret = self._storage.get_content(filename)
        self.assertEqual(ret, content)
        # test size
        ret = self._storage.get_size(filename)
        self.assertEqual(ret, len(content))
        # test remove
        self._storage.remove(filename)
        self.assertFalse(self._storage.exists(filename))

Save function:

def put_content(self, path, content):
    path = self._init_path(path)
    key = boto.s3.key.Key(self._s3_bucket, path)
    key.set_contents_from_string(
        content, encrypt_key=(self._config.s3_encrypt is True))
    print "####", self._s3_conn.get_bucket(self._config.s3_bucket).get_key(path)
    return path

Output:
test_simple (test_s3.TestS3Storage) ...
 create s3-moto conn __init__

 create s3-moto conn test_simple

#### Key: foobar,tmp/test/85ebz930z6070ncl
---- None 85ebz930z6070ncl
**** None 85ebz930z6070ncl
FAIL

ec2.get_all_security_groups() does not return correct vpc_id

I'm creating a new security group with a vpc_id. The group that's returned has the correct vpc_id set, but if I call get_all_security_groups, vpc_id isn't set on the group I created.

ipdb> self.ec2.create_security_group('test', 'test', 'vpc_123')
SecurityGroup:test
ipdb> self.ec2.get_all_security_groups()
[SecurityGroup:test, SecurityGroup:test_vpc]
ipdb> self.ec2.get_all_security_groups()[0]
SecurityGroup:test
ipdb> self.ec2.get_all_security_groups()[0].vpc_id
''
ipdb> self.ec2.create_security_group('test1', 'test', 'vpc_123').vpc_id
'vpc_123'

keys with ? (question marks) encoded with %3F

Expecting the following result: test_list_keys_2/x?y, but getting test_list_keys_2/x%3Fy instead.

import boto
import moto

mock = moto.mock_s3()
mock.start()

conn = boto.connect_s3()

BUCKET = 'test_bucket_name'

bucket = conn.create_bucket(BUCKET)


key = boto.s3.key.Key(bucket, 'test_list_keys_2/x?y')

key.set_contents_from_string('value1')


keys = bucket.list('test_list_keys_2/', '/')

for key in keys:
    print key.name


mock.stop()
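
A hedged guess at the fix: the key name arrives URL-encoded in the request path, so it needs to be unquoted before being stored or listed:

import urllib

print urllib.unquote('test_list_keys_2/x%3Fy')  # test_list_keys_2/x?y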

S3 failure for buckets with dashes in their name

If I try to get a bucket with a name that includes a dash, this happens (instead of a 404).

tests/test_s3writer.py:10: in <module>
>   bucket = s3.get_bucket()
compactor/s3.py:19: in get_bucket
>       return connect_s3().get_bucket(config.AWS_BUCKET)
../../../../.virtualenvs/echo-compactor/lib/python2.7/site-packages/boto/s3/connection.py:409: in get_bucket
>           bucket.get_all_keys(headers, maxkeys=0)
../../../../.virtualenvs/echo-compactor/lib/python2.7/site-packages/boto/s3/bucket.py:371: in get_all_keys
>                            '', headers, **params)
../../../../.virtualenvs/echo-compactor/lib/python2.7/site-packages/boto/s3/bucket.py:334: in _get_all
>           xml.sax.parseString(body, h)
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/sax/__init__.py:49: in parseString
>       parser.parse(inpsrc)
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/sax/expatreader.py:107: in parse
>       xmlreader.IncrementalParser.parse(self, source)
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/sax/xmlreader.py:123: in parse
>           self.feed(buffer)
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/sax/expatreader.py:211: in feed
>           self._err_handler.fatalError(exc)
/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/sax/handler.py:38: in fatalError
>       raise exception
E       SAXParseException: <unknown>:1:0: not well-formed (invalid token)

Interference with other network code

I've found an interesting issue.

I've just replaced the S3 locks in my program with Redis locks to reduce contention, and they break under tests because moto intercepts all sockets.

Perhaps it should only intercept sockets to Amazon domains?

ELB support

I think it would be very useful to be able to mock ELB-mgmt API calls
