
aws-support-tools's Introduction

aws-support-tools

Tools and sample code provided by AWS Premium Support.

aws-support-tools's People

Contributors

aaalzand, abigan09, abuehaze, clydemachine, danieldacosta, defila-aws, dmsmds, emanuelem, gautagan, harniva14, harshdev93, john-aws, joshua-at-aws, koorukuroo, kushagra1504, lpierillas, lsida, mauricioharley, mdm373, nick-from-aws, quasi-mod, rsavordelli, scoreunder, snehrao, ssamed, starkshaw, tawha, thefuz, timhll-amz, toredash


aws-support-tools's Issues

Error while using this script with ses and lambda


10:09:52
START RequestId: fb6a0552-bb2a-11e8-b033-c3ce03582e7f Version: $LATEST

10:09:52
'NoneType' object has no attribute 'strip' Aborting...

10:09:52
'NoneType' object has no attribute 'strip': AttributeError
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 127, in lambda_handler
raise e
AttributeError: 'NoneType' object has no attribute 'strip'


10:09:52
END RequestId: fb6a0552-bb2a-11e8-b033-c3ce03582e7f

10:09:52
REPORT RequestId: fb6a0552-bb2a-11e8-b033-c3ce03582e7f Duration: 275.54 ms Billed Duration: 300 ms Memory Size: 128 MB Max Memory Used: 49 MB
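The AttributeError in the log above comes from calling .strip() on a value that can be None. A minimal, hypothetical guard (a sketch, not the script's actual fix) would coalesce to an empty string first:

```python
# Hypothetical guard: coalesce None to "" before stripping, so a missing
# value does not raise AttributeError inside the Lambda handler.
def safe_strip(value):
    return (value or "").strip()
```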

SES Reports - Error with lambda - TypeError: Cannot read property 'source' of undefined

Hello, I followed the Amazon SES reports dashboard creation guide on AWS, and I get an error when the Lambda tries to read a non-empty SQS queue. When the queue is empty, there are no errors.

Key and value configuration is the same as in https://docs.aws.amazon.com/ses/latest/DeveloperGuide/dashboardcreatelambdafunction.html:
QueueURl = correct one, no error in logs
Region = correct one
ToAddr = verified email
SrcAddr = verified email too
Bucketname = an existing bucket
BucketPrefix = folders exist in the S3 bucket

Does anyone have an idea? I just took sesreports.zip, imported it into Lambda, and set the keys/values as instructed.

START RequestId: f8b3ab50-f167-11e8-8265-6bd008a84f1b Version: $LATEST
2018-11-26T10:42:29.152Z f8b3ab50-f167-11e8-8265-6bd008a84f1b Reading from: https://sqs.somewhereonaws.amazonaws.com/......../nameofmyqueue
2018-11-26T10:42:29.212Z f8b3ab50-f167-11e8-8265-6bd008a84f1b Reading queue, size = 7
2018-11-26T10:42:29.284Z f8b3ab50-f167-11e8-8265-6bd008a84f1b TypeError: Cannot read property 'source' of undefined
at Response.sqs.receiveMessage (/var/task/index.js:128:43)
at Request. (/var/runtime/node_modules/aws-sdk/lib/request.js:364:18)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:683:14)
at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request. (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)
at Request. (/var/runtime/node_modules/aws-sdk/lib/request.js:685:12)
END RequestId: f8b3ab50-f167-11e8-8265-6bd008a84f1b
REPORT RequestId: f8b3ab50-f167-11e8-8265-6bd008a84f1b Duration: 183.58 ms Billed Duration: 200 ms Memory Size: 512 MB Max Memory Used: 35 MB
RequestId: f8b3ab50-f167-11e8-8265-6bd008a84f1b Process exited before completing request

c5_m5_checks_script.sh incorrectly results in error

In this script, check_NVMe_in_initrd implicitly depends on lsinitrd, but it can fail even when the nvme driver has been loaded into the initramfs. I verified this with "Amazon Linux AMI 2018.03.0" as follows.

$ curl http://169.254.169.254/latest/meta-data/instance-type
t3.micro

$ ./c5_m5_checks_script.sh 
------------------------------------------------

OK     NVMe Module is installed and available on your instance


ERROR  NVMe Module is not loaded in the initramfs image.
	- Please run the following command on your instance to recreate initramfs:
	# sudo dracut -f -v


OK     ENA Module with version 2.1.1g is installed and available on your instance


OK     fstab file looks fine and does not contain any device names. 

------------------------------------------------

$ lsinitrd /boot/initramfs-$(uname -r).img
Image: /boot/initramfs-4.14.138-89.102.amzn1.x86_64.img: 16M
========================================================================
========================================================================
drwxr-xr-x   3 root     root            0 Sep  8 10:56 .
-rw-r--r--   1 root     root            2 Sep  8 10:56 early_cpio
drwxr-xr-x   3 root     root            0 Sep  8 10:56 kernel
drwxr-xr-x   3 root     root            0 Sep  8 10:56 kernel/x86
drwxr-xr-x   2 root     root            0 Sep  8 10:56 kernel/x86/microcode
-rw-r--r--   1 root     root        63488 Sep  8 10:56 kernel/x86/microcode/GenuineIntel.bin
========================================================================

$ file /boot/initramfs-$(uname -r).img
/boot/initramfs-4.14.138-89.102.amzn1.x86_64.img: ASCII cpio archive (SVR4 with no CRC)

$ rpm -ql dracut | grep skipcpio
$ echo $?
1

$ cpio -i < /boot/initramfs-$(uname -r).img
126 blocks

$ dd if=/boot/initramfs-$(uname -r).img of=initrd.img bs=512 skip=126
31523+1 records in
31523+1 records out
16139978 bytes (16 MB) copied, 0.0853499 s, 189 MB/s

$ lsinitrd initrd.img | grep -i nvme
drwxr-xr-x   3 root     root            0 Sep  8 10:56 lib/modules/4.14.138-89.102.amzn1.x86_64/kernel/drivers/nvme
drwxr-xr-x   2 root     root            0 Sep  8 10:56 lib/modules/4.14.138-89.102.amzn1.x86_64/kernel/drivers/nvme/host
-rwxr--r--   1 root     root       109208 Sep  8 10:56 lib/modules/4.14.138-89.102.amzn1.x86_64/kernel/drivers/nvme/host/nvme-core.ko
-rwxr--r--   1 root     root        76248 Sep  8 10:56 lib/modules/4.14.138-89.102.amzn1.x86_64/kernel/drivers/nvme/host/nvme.ko

This probably happens because this lsinitrd does not use the skipcpio helper shipped with the dracut rpm, so it only sees the early-microcode cpio archive prepended to the image.

(Another instance in which c5_m5_checks_script.sh works as expected)

$ rpm -ql dracut | grep skipcpio
/usr/lib/dracut/skipcpio
$ grep skipcpio $(which lsinitrd)
            SKIP="$dracutbasedir/skipcpio"
skipcpio()
    CAT=skipcpio

SES Reports - Frequency has only one value allowed

The CloudFormation stack declares an input parameter Frequency, but it has only one value allowed and hence, cannot be changed.

  Frequency:
    Type: String
    Default: "cron(00 23 * * ? *)"
    AllowedValues: 
      - "cron(00 23 * * ? *)"
    Description: "cron(00 23 * * ? *) = Once a day at 23h00" 

Expected result: The input parameter has a default value of cron(00 23 * * ? *), but the default can be overridden to some other value.
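One way to get the expected behavior is to drop the AllowedValues restriction so the default can be overridden; a sketch of how the parameter might look (keeping the existing default, wording of the description is an assumption):

```yaml
Frequency:
  Type: String
  Default: "cron(00 23 * * ? *)"
  Description: "Schedule expression; the default runs once a day at 23:00 UTC"
```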

Go support?

Is there any chance of getting a Go library for decoding and verifying Amazon Cognito JWT tokens?

Cognito ID Token signature verification failed

Hi,

I am trying to verify a JWT signature using decode-verify-jwt.js. However, I get a verification failed error regardless of the expiration. I dug into the error and found that it constructs an empty key store.

my token is:

eyJraWQiOiJrYnhFYm9GalRyRjlOVWFjbzFVenNZOUxkUEVYZ2hlaTJ2aVlwQ0NhQUxVPSIsImFsZyI6IlJTMjU2In0.eyJhdF9oYXNoIjoiQmZQR1l6ZXROYkNKWGFlSVNTX2lpdyIsInN1YiI6IjlhMDk5ZDI5LTdhNDktNGYxNi1hZTY1LWFlNTkzY2YxNTljMSIsImF1ZCI6ImRuMGY1OWh1ZGF1cDdsODVvMHVxNm5yaDIiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwidG9rZW5fdXNlIjoiaWQiLCJhdXRoX3RpbWUiOjE1NDk4MTExMjMsImlzcyI6Imh0dHBzOlwvXC9jb2duaXRvLWlkcC51cy13ZXN0LTIuYW1hem9uYXdzLmNvbVwvdXMtd2VzdC0yX3IwazVyZkJhNSIsImNvZ25pdG86dXNlcm5hbWUiOiI5YTA5OWQyOS03YTQ5LTRmMTYtYWU2NS1hZTU5M2NmMTU5YzEiLCJleHAiOjE1NDk4MTQ3MjMsImlhdCI6MTU0OTgxMTEyMywiZW1haWwiOiIyMTQ3MDYyNTdAcXEuY29tIn0.eH4QFt_CmnGfDP8YQBnIm-rhcZHuPpLYvCpXxrpNbtNrVOF730n2al8Hh9YAMNlRohJxFP661nH-eVsDatAABaMF3GpA4VDDQivLvgbx5qjF1LhINPz2vU10COKqRL6KmIzrYQ-TQVzUSu-0-oaTr4PTIQKeOyyZ2mTzz-y_f5Z5LyespANO4Q85hMuQt0Gm_wn5Khl0mWYWwiYvxYzoFuw-15J3wmFMiD4ZO7oHNuNByNbohkeVEbc2WP_vNM3NQ3yhHitoMUbm30ZaQrk_hvWogzjl_F3M8yKBFYBzd8uU9esU9xXuhjgQ7H63_jF0mQQWgqIajE5v6_AA0BbtAQ

this is my key:

{
    "keys": [
        {
            "alg": "RS256",
            "e": "AQAB",
            "kid": "kbxEboFjTrF9NUaco1UzsY9LdPEXghei2viYpCCaALU=",
            "kty": "RSA",
            "n": "ob0a3vAuDKPRQOt099SlxOIgWYd29dq-ZPpQNYKvCdjrGo5lepgtmBb9YC4-sw6753Ld-razabl83V7un1BjLFUvcmSXMP5nrhUeSRglvormsYTOdfYpBwWPFwI0GUrENNjAUA6DB0eDti3ieROdq9fgDujoBzqmJJjq7lVJ1U1HccLJzHceCKtjYjpSzveS0K5Adc9nl-7QkyuYzcdJ831T2VEIfke5RbpBnr9W7EJ2HWBWo8Xymdna4T1v4l96WM68hqM1_hjsb_ktBSqkFCuEMq_jDcss8Sn8yaHgQ97UhEW9Z3Eqo88ViFjsI5vOqLmHyIeD7GZKFbGdCvNI2w",
            "use": "sig"
        },
        {
            "alg": "RS256",
            "e": "AQAB",
            "kid": "e5DLnBIwe2L2xW+4t1+2uzrMDhAv0JbiZxF6mTZx+So=",
            "kty": "RSA",
            "n": "qEpcFZardygddQVxJ0mQDUgbhjaNzQzHgTFlqAxjfmRPAwfrB4Yp5ahyZI7f6ulKVFZt4vs5BollEXM_NQGKRT3TqbxWHDkdoUZttN00yMkWNvzEZJOYQ6iT_M8NvniWcOnhtkaDXKvE8rLnQJYj4EczPQ9gD7c7OSQQBj5iO2Zs4Ecp8pqfYvOFU3dkV0DnSS0TM2sN4rEuszQ_Aj2rSD79wq-GJ23bx4VDZ6lhZQuCY6h3clOufRuXkUZNQOrd90mEsSPxKtRJEqJyFBgBEUeDuqXpUiNpgA9MfNzyGV6CdSmKBUsw5zgq7efS96aMcZuAMXB513iL-nGl1l0FIQ",
            "use": "sig"
        }
    ]
}

I also tried using the code mentioned in the AWS Forum, and I got the same error.
https://forums.aws.amazon.com/thread.jspa?messageID=849348&tstart=0

Best Regards,
Joe

c5_m5_checks_script.sh incorrectly reports "fstab file contains device names" when /dev/sdb is a symlink to /dev/nvme1n1

$ cat /etc/fstab
#
LABEL=/     /           ext4    defaults,noatime  1   1
tmpfs       /dev/shm    tmpfs   defaults        0   0
devpts      /dev/pts    devpts  gid=5,mode=620  0   0
sysfs       /sys        sysfs   defaults        0   0
proc        /proc       proc    defaults        0   0
/dev/sdb /media/ebs0 ext4 noatime,nofail 0 2
$ ls -l /dev/sdb
lrwxrwxrwx 1 root root 7 Jul 16 16:56 /dev/sdb -> nvme1n1
$ sudo bash c5_m5_checks_script.sh
------------------------------------------------

OK     NVMe Module is installed and available on your instance


OK     ENA Module with version 1.4.0U is installed and available on your instance


ERROR  Your fstab file contains device names. Mount the partitions using UUID's before changing an instance type to M5/C5.

Press y to replace device names with UUID in your fstab file? (y/n) n
Aborting: Not saving changes...
Printing correct fstab file below:
#
LABEL=/     /           ext4    defaults,noatime  1   1
tmpfs       /dev/shm    tmpfs   defaults        0   0
devpts      /dev/pts    devpts  gid=5,mode=620  0   0
sysfs       /sys        sysfs   defaults        0   0
proc        /proc       proc    defaults        0   0
/dev/sdb /media/ebs0 ext4 noatime,nofail 0 2

------------------------------------------------

Note that despite the error, no actual changes to /etc/fstab were suggested. Also, /dev/sdb is a symlink, just as the file in /dev/disk/by-uuid/ is, so I don't think there's any issue with our fstab. We also have not had any problem with the disk mounting after a reboot.

$ ls -l /dev/disk/by-uuid/0bd27688-f5d2-4fa6-89f4-801114884537
lrwxrwxrwx 1 root root 13 Jul 16 16:56 /dev/disk/by-uuid/0bd27688-f5d2-4fa6-89f4-801114884537 -> ../../nvme1n1
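A hedged sketch of a check the script could use (an assumption, not the script's current code): resolve the symlink before deciding whether an fstab entry is a "device name" that needs converting.

```python
import os

# Resolve the symlink chain and look at the final device name; a /dev/sdb
# that already points at an nvme device does not need an fstab rewrite.
def resolves_to_nvme(device_path):
    return os.path.basename(os.path.realpath(device_path)).startswith("nvme")
```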

Timeout when used on Lambda, fine locally though

When I call my Lambda function with Postman I just get an "Internal server error", and CloudWatch shows a 3-second timeout (Task timed out after 3.00 seconds). It only needs to retrieve 2 keys from that URL, and it loads almost instantly on my desktop. Are there any specific roles I need to run it as, or permissions I need to enable anywhere?

Unable to assign Private IP with assign_private_ip.py script with EMR version 5.30.0

I am unable to assign a private IP address with the assign_private_ip.py script on EMR version 5.30.0; I am running the script as a bootstrap action.
This is the error I am getting:

iptables v1.8.2 (legacy): option "--to-destination" requires an argument
Try `iptables -h' or 'iptables --help' for more information.
Traceback (most recent call last):
  File "/emr/instance-controller/lib/bootstrap-actions/4/assign_private_ip.py", line 38, in <module>
    subprocess.check_call(['sudo iptables -t nat -A PREROUTING -d %s -j DNAT --to-destination %s' % (private_ip, primary_ip)], shell=True)
  File "/usr/lib64/python2.7/subprocess.py", line 190, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo iptables -t nat -A PREROUTING -d 172.31.110.200 -j DNAT --to-destination ']' returned non-zero exit status 2

On debugging further I found that this command in code is returning an empty string:

# Configure iptables rules so that traffic is redirected from the secondary to the primary IP address:
    primary_ip = subprocess.check_output(['/sbin/ifconfig eth0 | grep \'inet addr:\' | cut -d: -f2 | awk \'{ print $1}\''], shell=True).strip()

Please help me resolve this.
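As a hedged alternative (an assumption, not the script's current approach), the primary private IP can be read from the EC2 instance metadata service instead of scraping ifconfig output, whose format changed between releases; validating the result before interpolating it into an iptables command would also have surfaced this failure earlier:

```python
import re
import urllib.request

# Sketch: the instance metadata service exposes the primary private IPv4
# address directly, independent of ifconfig's output format.
def primary_private_ip(timeout=2):
    url = "http://169.254.169.254/latest/meta-data/local-ipv4"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode().strip()

def looks_like_ipv4(value):
    # Guard before building the iptables command; an empty string here is
    # exactly the failure mode described in this issue.
    return bool(re.fullmatch(r"\d{1,3}(?:\.\d{1,3}){3}", value))
```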

c5_m5_checks_script.sh falsely complaining about NVMe on Ubuntu 18.04 (Bionic)

The -aws kernel in Ubuntu 18.04 no longer compiles NVMe support as a separate kernel module, so c5_m5_checks_script.sh now gives inaccurate advice.

$ sudo ./c5_m5_checks_script.sh
-e ------------------------------------------------
ERROR  NVMe Module is not available on your instance.
	- Please install NVMe module before changing your instance type to M5/C5. Look at the following link for further guidance:
-e 	> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html
-e

OK     ENA Module with version 2.0.3K is installed and available on your instance
-e

OK     fstab file looks fine and does not contain any device names.
-e
------------------------------------------------

$ uname -r
4.15.0-1039-aws

$ dmesg | grep -i nvme
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-1039-aws root=UUID=bbf64c6d-bc15-4ae0-aa4c-608fd9820d95 ro console=tty1 console=ttyS0 nvme.io_timeout=4294967295
[    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-1039-aws root=UUID=bbf64c6d-bc15-4ae0-aa4c-608fd9820d95 ro console=tty1 console=ttyS0 nvme.io_timeout=4294967295
[    0.931705] nvme nvme0: pci function 0000:00:04.0
[    2.260142]  nvme0n1: p1
[    7.315067] EXT4-fs (nvme0n1p1): mounted filesystem with ordered data mode. Opts: (null)
[    7.910372] EXT4-fs (nvme0n1p1): re-mounted. Opts: discard

$ sudo lsmod | grep -i nvme

$ grep -i NVME /boot/config-4.15.0-1039-aws
# NVME Support
CONFIG_NVME_CORE=y
...

Cognito decode-verify-jwt.js works but does not return email in claims

I have a user pool that uses email to sign in. decode-verify-jwt works great for verifying the JWT when I pass it to my server API, but the claims do not include the email. I need a way to verify the user's email in that API call. Is that information not stored in the JWT?

The username returned in the claims is the user attribute sub value in guid format.

Basically I need a secure way to confirm the user's email server side.

mysql_to_redshift_tablename_noquotes.py fails if COMMENT string has parentheses

For the following string in the CREATE TABLE definition, the Python code fails:

"currency" char(3) DEFAULT NULL COMMENT 'Currency (ISO 4217)',

On the first split, extra takes the value "DEFAULT NULL COMMENT 'Currency (ISO 4217)'"

Because extra contains a ")", the second split runs:
sql_type, extra = definition.strip().split(" ", 1)

But because this contains more than one ")", it raises ValueError: too many values to unpack, which is handled by the exception-handling code; sql_type then takes the value "char(3) default null comment 'currency (iso 4217)" and is incorrectly parsed further down the code.

A solution is to remove the COMMENT part right from the start. Instead of removing it from extra, remove it from definition:

    # Identify the COMMENT portion of the column definition and omit it,
    # since Redshift does not support COMMENT in CREATE TABLE statements
    if "COMMENT" in definition:
        definition = definition.strip().split("COMMENT")[0].strip()
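The proposed fix can be sketched end to end on the failing column from this issue (variable names here are illustrative, not the script's):

```python
# Drop the COMMENT clause from the whole definition before any splitting,
# since Redshift does not support COMMENT in CREATE TABLE.
def strip_comment(definition):
    if "COMMENT" in definition:
        definition = definition.strip().split("COMMENT")[0].strip()
    return definition

line = "\"currency\" char(3) DEFAULT NULL COMMENT 'Currency (ISO 4217)'"
cleaned = strip_comment(line)           # '"currency" char(3) DEFAULT NULL'
name, rest = cleaned.split(" ", 1)
sql_type, extra = rest.split(" ", 1)    # 'char(3)' and 'DEFAULT NULL'
```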

AWS Tools 3.15.33 fails to install on VS2015 Windows 10

I have both Visual Studio 2015 and Visual Studio 2017. I have AWS tools installed for each.

2017 works at the moment. But 2015 started telling me that AWS tools was not installed properly. So I uninstalled and tried to install again.

Now I get

"AWS Tools for Windows Setup Wizard ended prematurely because of an error. Your system has not been modified. To install this program at a later time, run Setup Wizard again."

Now I am stuck! How to debug this?

Fetch userData on EC2 machine boot up issue

I have a Lambda which triggers an EC2 instance to start from an AMI ID. When the Lambda executes, it also sends some user data to the EC2 instance that will be started.

Now, when I try to fetch the user data in the Node.js code running on the EC2 instance, it returns status 200 but the response body is empty.

I'm using the URL "http://169.254.169.254/latest/user-data" with the superagent npm module to make the request. Below is the code for the request:

request.get(config.awsUserDataUrl)
    .then((response) => {
        const schema = joi.object().keys({
            bucketname: joi.string().min(3).max(30).required(),
            // alphanumeric string of 24 characters in length
            jobid: joi.string().regex(/^[0-9a-fA-F]{24}$/),
            appid: joi.string().min(3).max(30).required(),
            snstopic: joi.string().min(3).max(60).required(),
            filepath: joi.string().min(3).max(60).required(),
        });
        logger.info('response', JSON.stringify(response.body));
        logger.info('response.body', response.body);
        logger.info('response.status', response.status);
        const result = joi.validate(response.body, schema);
        if (result.error === null) {
            logger.info('result', result);
            resolve(response.body);
        } else {
            logger.info('result.error', result.error);
            reject(result.error);
        }
    })
    .catch((err) => {
        logger.info('err.message', err.message);
        logger.info('err.response', err.response);
        reject(err);
    });
Note: I have not assigned any IAM role to the EC2 instance in the Lambda environment variable. Could the empty user-data response on EC2 be due to no role being assigned to the instance?

SESMailer: connection pool is full

Hi,

I followed the docs @ https://aws.amazon.com/premiumsupport/knowledge-center/mass-email-ses-lambda/

When I ran this using a CSV file with 22 rows, I got the following log warning 4 times:
"Connection pool is full, discarding connection: email.us-west-2.amazonaws.com"

Does a discarded connection mean no email was sent?

Also, when trying with 10,000 rows, I receive the warning above multiple times and eventually run into the following:

can't start new thread: error
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 137, in lambda_handler
raise e
error: can't start new thread

And sending is aborted

OS Release file

find_distro=`cat /etc/os-release |sed -n 's|^ID="\([a-z]\{4\}\).*|\1|p'` # Check if instance is using amazon AMI.

The /etc/os-release file doesn't exist on RHEL, so to avoid the following confusing message:

cat: /etc/os-release: No such file or directory

I'd suggest quieting the error output with cat /etc/os-release 2>/dev/null ...

claims.aud is undefined, so get error: Token was not issued for this audience

I added a dump of the claims and see this:

{ sub: 'f02e777c-489d-43c5-806d-2e6c06c9a355',
event_id: '7a5f716c-7c3a-4255-8b79-b11880ba9d9b',
token_use: 'access',
scope: 'aws.cognito.signin.user.admin',
auth_time: 1566219450,
iss:
'https://cognito-idp.us-west-2.amazonaws.com/us-west-2_xxxxxxxxxx',
exp: 1566223050,
iat: 1566219450,
jti: '752b158e-b066-4351-a786-e7bb51838f2b',
client_id: '5bsjlins3936h9gvpi0c24ah5b',
username: 'Test2' } 

Since you have this if statement:

   if claims['aud'] != app_client_id:
        print('Token was not issued for this audience')
        return False

I'm always failing with the "Token was not issued for this audience" error.
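The claims dumped above are from an access token, which carries the app client id in a 'client_id' claim rather than 'aud' (the 'aud' claim appears in id tokens). A hedged sketch of an audience check that handles both token types, branching on token_use:

```python
# Sketch, not the repo's code: access tokens carry 'client_id', id tokens
# carry 'aud'; branch on token_use before comparing against the app client id.
def audience_matches(claims, app_client_id):
    if claims.get('token_use') == 'access':
        return claims.get('client_id') == app_client_id
    return claims.get('aud') == app_client_id
```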

SES - Report - The runtime parameter of nodejs4.3 is no longer supported

When deploying, the nodejs4.3 runtime is no longer supported:

The runtime parameter of nodejs4.3 is no longer supported for creating or updating AWS Lambda functions. We recommend you use the new runtime (nodejs8.10) while creating or updating functions. (Service: AWSLambdaInternal; Status Code: 400; Error Code: InvalidParameterValueException; Request ID: a16f0820-02c2-11e9-89ad-e79c9bbbaebc)

decode-verify-jwt.ts does not validate audience or client Id

In the readme it states "To verify the signature of an Amazon Cognito JWT ... Be sure to also verify that ... The audience ("aud") specified in the payload matches the app client ID created in the Amazon Cognito user pool."

This is done in the python example here: https://github.com/awslabs/aws-support-tools/blob/master/Cognito/decode-verify-jwt/decode-verify-jwt.py#L63

In the typescript version, we return the value but do not verify it: https://github.com/awslabs/aws-support-tools/blob/master/Cognito/decode-verify-jwt/decode-verify-jwt.ts#L103

Is this intentional? Can some details about this be shared? Thank you.

MySQL Connection string parser requires colon

RDSHost=`echo $RDSJdbc | awk -F: '{print $3}' | sed 's/\///g'`

With the above a connecting string like

jdbc:mysql://abc123.eu-west-1.rds.amazonaws.com/mydb

gets parsed to

abc123.eu-west-1.rds.amazonaws.commydb

Adding a port is a workaround, but this is not made explicit in any documentation:

jdbc:mysql://abc123.eu-west-1.rds.amazonaws.com:3306/mydb

RDSHost=`echo $RDSJdbc | awk -F: '{print $3}' | sed 's/\///g'`
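A hedged Python sketch of host extraction that works with or without an explicit port (an illustration of the more robust parsing, not the tool's shell code, which assumes a ':' before the port):

```python
from urllib.parse import urlparse

# Drop the "jdbc:" prefix so urlparse sees an ordinary scheme://host/path
# URL; .hostname excludes any :port suffix automatically.
def rds_host(jdbc_url):
    return urlparse(jdbc_url[len("jdbc:"):]).hostname
```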

c5_m5_checks_script.sh - modifies fstab when it shouldn't

Line 74:

        sed -i "s|^/dev/${LINE}|UUID=${UUID}|" /etc/fstab               # Changes the entry in fstab to UUID form

Modifies /etc/fstab in place instead of making changes in a temp copy.

Line 80-106

        echo -e "\n\nERROR  Your fstab file contains device names. Mount the partitions using UUID's before changing an instance type to M5/C5."                                                         # Outputs the new fstab file
        printf "\nPress y to replace device names with UUID in your fstab file? (y/n) "
        read RESPONSE;
        case "$RESPONSE" in
            [yY]|[yY][eE][sS])                                              # If answer is yes, keep the changes to /etc/fstab
                    echo "Writing changes to /etc/fstab..."
                    echo -e "\n\n***********************"
                    cat /etc/fstab
                    echo -e "***********************"
                    echo -e "\nOriginal fstab file is stored as /etc/fstab.backup.$time_stamp"
                    ;;
            [nN]|[nN][oO]|"")                                               # If answer is no, or if the user just pressed Enter
                    echo -e "Aborting: Not saving changes...\nPrinting correct fstab file below:"                  # don't save the new fstab file
                    cat /etc/fstab
                    cp /etc/fstab.backup.$time_stamp /etc/fstab
                    rm /etc/fstab.backup.$time_stamp
                    ;;
            *)                                                              # If answer is anything else, exit and don't save changes
                    echo "Invalid Response"                                 # to fstab
                    echo "Exiting"
                    cp /etc/fstab.backup.$time_stamp /etc/fstab
                    rm /etc/fstab.backup.$time_stamp
                    exit 1
                    echo -e "------------------------------------------------"
                    ;;
    
        esac

Prompts the user to accept changes; however, if the user kills the script while at line 82, /etc/fstab will already have been modified and will not be rolled back. The script output is explicitly wrong and misleading: e.g. line 85 implies the /etc/fstab modification is taking place at that point, when in fact it has already been done without user consent.

Script should make changes in a temp file and only modify /etc/fstab when explicitly accepted by user.

token header part can't transform to utf8

An error occurred while I tried to convert base64 to utf8 with:
Buffer.from(tokenSections[0], 'base64').toString('utf8');

窫늷똦䊖ꈣ蘦垗撦鍔ٕꏇ獇숵ᛶ猓䝳著뒖꒔랦蔅읆卦ጵ⋒⛂옖⍲▢㌥匣❢ 2020-12-24T10:00:30.458Z bf03fc05-77d2-4d46-ac96-7a5017e07f12 INFO 窫늷똦䊖ꈣ蘦垗撦鍔ٕꏇ獇숵ᛶ猓䝳著뒖꒔랦蔅읆卦ጵ⋒⛂옖⍲▢㌥匣❢

error with python 2.7

{
"stackTrace": [
[
"/var/task/lambda_function.py",
127,
"lambda_handler",
"raise e"
]
],
"errorType": "KeyError",
"errorMessage": "'Records'"
}

ENA driver version check should be consistent with recommendation from docs

The script currently produces an output like:

"OK     ENA Module with version 1.1.2 is installed and available on your instance"

However, on the docs page, we see the mention that the minimum recommended version for the driver is actually 1.5.0g: https://github.com/awsdocs/amazon-ec2-user-guide/blob/master/doc_source/enhanced-networking-ena.md#testing-whether-enhanced-networking-is-enabled
I think we should update the script to make it consistent with the docs. Let me know if a PR would be desired.

Summary

In the older version, the report showed a summary of bounces and complaints at the top, which was very convenient for quickly seeing the daily stats.
Any chance of bringing that back?
Thanks,
Martin

SES Reports - filenames are not valid in windows

A colon is not a valid character in a Windows filename. So while these files are saved to S3 and work great there, I wouldn't be able to download them to a Windows 10 system without some form of name conversion. It would be better if they were named something more like this:

YYYYMMDDThhmmss.html (i.e. 20170308T135352.html)

Thanks!

Tiny improvement for S3 transfer acceleration script

According to bash best practice and some real-life experience, error handling should be done more carefully to avoid some possible gotchas. Though it's a very small script, I feel this would be a tiny improvement to its error handling.

One indentation issue might also need to be corrected.

Here is PR #123

Cache Cognito user pool public key

In the JWT verification implementation (https://github.com/awslabs/aws-support-tools/blob/master/Cognito/decode-verify-jwt/decode-verify-jwt.py), lines 21-27 and 32-42 fetch the Cognito public keys. If the keys remain the same from the creation of the Cognito pool until it is destroyed, can I store the public key in environment variables?
Avoiding the public-key fetch on each request saves an extra API call.

Code snippet

region = 'ap-southeast-2'
userpool_id = 'ap-southeast-2_xxxxxxxxx'
app_client_id = '<ENTER APP CLIENT ID HERE>'
keys_url = 'https://cognito-idp.{}.amazonaws.com/{}/.well-known/jwks.json'.format(region, userpool_id)
# instead of re-downloading the public keys every time
# we download them only on cold start
# https://aws.amazon.com/blogs/compute/container-reuse-in-lambda/
with urllib.request.urlopen(keys_url) as f:
  response = f.read()
keys = json.loads(response.decode('utf-8'))['keys']

def lambda_handler(event, context):
    token = event['token']
    # get the kid from the headers prior to verification
    headers = jwt.get_unverified_headers(token)
    kid = headers['kid']
    # search for the kid in the downloaded public keys
    key_index = -1
    for i in range(len(keys)):
        if kid == keys[i]['kid']:
            key_index = i
            break
    if key_index == -1:
        print('Public key not found in jwks.json')
        return False
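One caveat worth noting for the snippet above: module-scope caching is safe across warm invocations, but the cache can go stale after key rotation, so re-fetching once when a kid is not found is a common compromise. A hedged sketch, where `fetch_keys` is a hypothetical callable standing in for the urlopen download above:

```python
# Sketch: look up the kid in the cached key list; on a miss, refresh the
# cache in place once (covers key rotation) and retry before giving up.
def find_key(kid, keys, fetch_keys):
    for key in keys:
        if key.get('kid') == kid:
            return key
    keys[:] = fetch_keys()  # refresh the cached list in place on a miss
    for key in keys:
        if key.get('kid') == kid:
            return key
    return None
```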

SESMailer:- incorrect header check Error

Can anyone please help me resolve this error?

Error -3 while decompressing data: incorrect header check Aborting...
Error -3 while decompressing data: incorrect header check: error
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 136, in lambda_handler
raise e
error: Error -3 while decompressing data: incorrect header check

Question on URL to get private key and if Lambda required

Part 1:
Relating to "decode-verify-jwt".
Does this code only run within Lambda for some security reason?

Should I be able to go to the following URL and download or view the token?
var keys_url = 'https://cognito-idp.' + region + '.amazonaws.com/' + userpool_id + '/.well-known/jwks.json';

From an EC2 instance, I tried that URL in a browser and I get "We can't connect to the server at cognito-idp.amazonaws.com." Is it publicly available? I'm not supposed to create my own "Amazon Cognito Domain" (prefixed domain) and use that?

An hour later: looks like I resolved this. I guess I was not including the region in the URL.
Seems to work in browser now too.

Part 2 - How to test in Lambda? No sample how to call it from Node-JS?

I set up a test event within Lambda as follows:

{
  "event": 
     {"token": "copied my whole token here that I got from a console.log when I first got the token"},
}

I've been away from Lambda for about a year, finally tried this and got the token passed in:

{
     "token": "copied my whole token here that I got from a console.log when I first got the token"
}

It was dying in the token.split, so I added this:

    console.log("====== start log");
    console.dir(event, { depth: null });
    var token = event.token;
    console.log ("token=" + token);

I also added a console.log as follows:

    https.get(keys_url, function(response) {
        console.log("response.statusCode=" + response.statusCode + " keys_url: " + keys_url);
        if (response.statusCode == 200) {
Thanks,
Neal

ses_mailer.py can cause thread issues in high volume sends

When attempting to send over 1,000 emails using SESMailer, you'll run into the error below. This has been noted in issue #37.

can't start new thread: error
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 132, in lambda_handler
raise e
error: can't start new thread

This is because the ThreadPoolExecutor is not used with a with statement, as the Python docs for ThreadPoolExecutor suggest.
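Using the executor as a context manager bounds the pool at max_workers and joins all threads on exit, instead of spawning an unbounded number of threads. A minimal sketch, where send_one is a hypothetical stand-in for the real ses.send_raw_email call:

```python
from concurrent.futures import ThreadPoolExecutor

def send_one(recipient):
    # Placeholder for the actual SES send; returns a status string.
    return "queued: " + recipient

recipients = ["a@example.com", "b@example.com", "c@example.com"]

# The with block caps concurrency at 10 worker threads and waits for
# all submitted work to finish before exiting.
with ThreadPoolExecutor(max_workers=10) as executor:
    results = list(executor.map(send_one, recipients))

print(results)
```

With this pattern, even a 10,000-recipient batch reuses the same 10 threads rather than attempting to start a new thread per message.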

JWT token verify - confirms token sections is not less than 2 - shouldn't that be 3 (sections)?

The Node/TypeScript version of decode-verify throws an exception if the number of dot-separated token sections is less than 2.
Shouldn't that be 3, or have I misunderstood the advice? 🤔 For reference, the AWS page that links to this code example says:

Step 1: Confirm the Structure of the JWT
A JSON Web Token (JWT) includes three sections: ... If your JWT does not conform to this structure, consider it invalid and do not accept it.
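The structural check the quoted docs describe can be sketched as follows (a hypothetical helper, not the repo's actual code):

```python
def looks_like_jwt(token):
    # A JWT is header.payload.signature: exactly three non-empty
    # base64url-encoded sections separated by dots.
    sections = token.split(".")
    return len(sections) == 3 and all(sections)

print(looks_like_jwt("aGVhZGVy.cGF5bG9hZA.c2ln"))  # True
print(looks_like_jwt("aGVhZGVy.cGF5bG9hZA"))       # False: only 2 sections
```

A "less than 2" check would let a two-section (unsigned) token through the structural gate, though the later signature verification should still reject it.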

Delivery support

Feature request: allow reporting on delivery success messages as well.

assign_private_ip.py doesn't work on EMR 5.30.0

aws-support-tools/EMR/Assign_Private_IP/assign_private_ip.py is not working on EMR 5.30.0

I found that line 35:

primary_ip =  subprocess.check_output(['/sbin/ifconfig eth0 | grep \'inet addr:\' | cut -d: -f2 | awk \'{ print $1}\''], shell=True).strip()

is not working because the output of ifconfig no longer contains 'inet addr:'. ifconfig's output now looks like this:

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
        inet 172.31.49.132  netmask 255.255.240.0  broadcast 172.31.63.255
        inet6 fe80::898:afff:feb0:b86e  prefixlen 64  scopeid 0x20<link>
        ether 0a:98:af:b0:b8:6e  txqueuelen 1000  (Ethernet)
        RX packets 1613746  bytes 2108266712 (1.9 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 409253  bytes 34965408 (33.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 77045  bytes 23041269 (21.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 77045  bytes 23041269 (21.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I changed the code a little, like this:

primary_ip = subprocess.check_output(['/sbin/ifconfig eth0 | grep \'inet\' '], shell=True).strip().split()[1]

It worked ok.
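As a side note, anchoring on 'inet ' (with the trailing space) avoids ever matching the inet6 line, which a bare grep for 'inet' can also hit. A sketch that parses the new-format output shown above, using that sample text in place of a live ifconfig call:

```python
import re

# Sample `ifconfig eth0` output in the newer format (from the issue above).
ifconfig_output = """\
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
        inet 172.31.49.132  netmask 255.255.240.0  broadcast 172.31.63.255
        inet6 fe80::898:afff:feb0:b86e  prefixlen 64  scopeid 0x20<link>
"""

# 'inet ' followed by the address; the literal space after 'inet'
# excludes the 'inet6' line.
match = re.search(r"^\s*inet (\S+)", ifconfig_output, re.MULTILINE)
primary_ip = match.group(1)
print(primary_ip)  # 172.31.49.132
```

The one-liner in the issue happens to work because the inet line precedes the inet6 line in ifconfig's output, but an explicit anchor is less fragile.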

invalid syntax (jose.py, line 546)

I want to use the same functionality from AWS Lambda. The code works on my local machine, but from Cloud9 or Lambda I get the following error:
Syntax error in module 'customauthorizer3/lambda_function': invalid syntax (jose.py, line 546)
