Comments (16)
hi @mkadali99, can you show us the top few lines of your CSV file and the command and arguments that you used to run retro_tag.rb?
from auto-tag.
Sure, I can. I deployed the CloudFormation stack that was provided, and that stack created the Lambda function. For that Lambda, I configured S3 as the input so that whenever a .csv file is uploaded to that particular bucket, it triggers the Lambda. But when I uploaded the file, the Lambda wasn't triggered.
This error was returned in the AWS Lambda console when I tried to test my Lambda function.
Here are the first few lines of the CSV:
eventTime eventSource eventName awsRegion userIdentity.accountId recipientAccountId key requestParameters responseElements
2019-01-01T02:43:04Z ec2.amazonaws.com RunInstances us-east-1 6.9563E+11 6.9563E+11 s3://sxxxxxxxxxx1/AWSLogs/xxxxxxxxxx/CloudTrail/us-east-1/2019/01/01/xxxxxxxxx_CloudTrail_us-east-1_201901T0245Z_FThT02P.json.gz {"instancesSet":{"items":[{"imageId":"ami-xxxxxxx","minCount":1,"maxCount":1,"keyName":"FFHI-TAG"}]},"userData":"","instanceType":"t2.medium","blockDeviceMapping":{"items":[{"deviceName":"/dev/xvda","ebs":{"volumeSize":20,"deleteOnTermination":true,"volumeType":"gp2"}}]},"availabilityZone":"us-east-1b","monitoring":{"enabled":true},"disableApiTermination":false,"clientToken":"b275ad-2bfc-346d-114e-33a9dfa36_subnet-90ec2ecd_1","networkInterfaceSet":{"items":[{"deviceIndex":0,"subnetId":"subnet-90ec2ecd","associatePublicIpAddress":true,"groupSet":{"items":[{"groupId":"sg-054c62eae80a3d7"}]}}]},"iamInstanceProfile":{"name":"inxxxxxxxx-worker-nodes-NodestnxxxxxxxxceProfile-16MV9J0ZI37PS"}} {"requestId":"4fe2-6013-404a-a608-cd8cc75","reservationId":"r-06782dc58b61","ownerId":"xxxxxx","groupSet":{},"instancesSet":{"items":[{"instanceId":"i-025fxxxxxx8506","imageId":"ami-0b4eb1dxxxxfcxxxxea","instanceState":{"code":0,"name":"pending"},"privateDnsName":"ip-10-3xxxxx135.ec2.internal","keyName":"FFHI-xxG","amiLaunchIndex":0,"productCodes":{},"instanceType":"t2.medium","launchTime":1546310584000,"placement":{"availabilityZone":"us-east-1b","tenancy":"default"},"monitoring":{"state":"pending"},"subnetId":"subnet-90ec2ecd","vpcId":"vpc-94xxxxxxc","privateIpAddress":"10.35.xxxxxxxx35","stateReason":{"code":"pending","message":"pending"},"architecture":"x86_64","rootDeviceType":"ebs","rootDeviceName":"/dev/xvda","blockDeviceMapping":{},"virtualizationType":"hvm","hypervisor":"xen","clientToken":"b275axxfc-346dxx-1xxxxxe-33a9dc65fa36_subnet-90exxxxxxx1","groupSet":{"items":[{"groupId":"sg-054c62xxxxxxxx10a3d7","groupName":"ixxxxxxnes-wxxxxxxxx-node-Noecurixxxxxxoup-BTxxxxxKUD4J"}]},"sourceDestCheck":true,"networkInterfaceSet":{"items":[{"networkInterfaceId":"eni-0107c257xx
xxxxd9c4","subnetId":"subnet-90ec2ecd","vpcId":"vpc-94e06bec","ownerId":"6zxxxxxxxxxxxxx","status":"in-use","macAddress":"0e:7b:3xxxxxxxxxxx4","privateIpAddress":"10.35.xxxxxxxxxx"privateDnsName":"ip-10-xxxxxxxxxxxxx ec2.internal","sourceDestCheck":true,"interfaceType":"interface","groupSet":{"items":[{"groupId":"sgxxxxxxxxxxxxxxx0a3d7","groupName":"ixxxxxxxxxxxxxxxxxoup-BTKxxxxxxxxx"}]},"attachment":{"attachmentId":"eni-attach-002xxxxxxaxxxx,"deviceIndex":0,"status":"attaching","attachTime":1546310584000,"deleteOnTermination":true},"privateIpAddressesSet":{"item":[{"privateIpAddress":"10.xxxxxxx135","privateDnsName":"xxxxxxxxxxxxxec2.internal","primary":true}]},"ipv6AddressesSet":{},"tagSet":{}}]},"iamInstanceProfile":{"arn":"arn:aws:iam::69xxxxxxxxxxxx:instance-profile/is-woxxxxxxlxxxxxxxxxxe-16MV0xxxxxx37PS","id":"xxxxxxxxxxxxxxxxxxxxxS2"},"ebsOptimized":false,"cpuOptions":{"coreCount":2,"threadsPerCore":1}}]},"requesterId":"9xxxxxxxxxxxxx76"}
hmm, did you copy/paste that out of Excel or something? I don't see the commas...
it should look more like this...
"eventTime","eventSource","eventName","awsRegion","userIdentity.accountId","recipientAccountId","key","requestParameters","responseElements"
"2017-07-22T00:00:58Z","ec2.amazonaws.com","RunInstances","us-east-1","4755389xxxx","4755389xxxxx","s3://xxx-cloudtrail/dev/AWSLogs/475538xxxxx/CloudTrail/us-east-1/2017/07/22/475538xxxxx_CloudTrail_us-east-1_20170722T\ 0005Z_y3IkcoREAxMpipr1.json.gz","{""instancesSet"":{""items"":[{""imageId"":""ami-3a5xxxxx"",""minCount"":1,""maxCount"":1,""keyName"":""gxxx""}]}
the double-quotes in the "responseElements" JSON should all be doubled ("").
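A quick way to see the expected quoting is to let Ruby's stdlib CSV writer produce a row: it doubles any embedded double-quote automatically. This is only a minimal sketch with made-up field values, not output from retro_tag.rb:

```ruby
require 'csv'

# Made-up values for illustration: when a field is quoted, the CSV writer
# turns each embedded " into "", which is the form the JSON columns
# (requestParameters/responseElements) need to arrive in.
json_field = '{"instancesSet":{"items":[{"minCount":1}]}}'
row = CSV.generate(force_quotes: true) do |csv|
  csv << ['2019-01-01T02:43:04Z', 'RunInstances', json_field]
end
puts row
```

Saving from Excel as a proper CSV produces the same doubled-quote form; copy/pasting cells out of the spreadsheet grid does not.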
Yeah, I copied it from Excel.
Am I doing this the right way? I am giving the CSV directly as the input from S3.
The one that you pasted above, where did you copy that one from?
ok, can you show me how you are running the command?
I opened the CSV in a text editor.
I got a new error when I tried to run the code from an EC2 instance.
[root@ip-10-35-xxx-xxx ec2-user]# ./retro_tag.rb --csv=RetroTagging1.csv --bucket=securitycloudtraileast1
Importing from /home/ec2-user/RetroTagging1.csv (5.33 MiB)...Traceback (most recent call last):
./retro_tag.rb:76:in `
Ok, I see. You'll need to download all of the code from the "retro_tagging" folder, cd to that folder, and run bundle install, then try to run retro_tag.rb again.
Here is another error
[root@ip-10-35-134-163 retro_tagging]# ./retro_tag.rb --csv=RetroTagging1.csv --bucket=securitycloudtraileast1 --scan-access-key-id=xxxxxxxxxxxxxxxxxxxxx --scan-secret-access-key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Importing from /home/ec2-user/retrotag/auto-tag/retro_tagging/RetroTagging1.csv (5.33 MiB)...completed in 1 seconds.
The cache file is too old, building a new cache file...
Collecting AutoScaling Groups from: ap-east-1
Traceback (most recent call last):
13: from ./retro_tag.rb:122:in `<main>'
12: from ./retro_tag.rb:122:in `each'
11: from ./retro_tag.rb:122:in `block in <main>'
10: from /home/ec2-user/retrotag/auto-tag/retro_tagging/aws_resource/default.rb:40:in `get_existing_resources'
 9: from /home/ec2-user/retrotag/auto-tag/retro_tagging/aws_resource/default.rb:40:in `each'
 8: from /home/ec2-user/retrotag/auto-tag/retro_tagging/aws_resource/default.rb:44:in `block in get_existing_resources'
 7: from /usr/local/rvm/gems/ruby-2.6.0/gems/aws-sdk-autoscaling-1.20.0/lib/aws-sdk-autoscaling/client.rb:1782:in `describe_auto_scaling_groups'
 6: from /usr/local/rvm/gems/ruby-2.6.0/gems/aws-sdk-core-3.48.6/lib/seahorse/client/request.rb:70:in `send_request'
 5: from /usr/local/rvm/gems/ruby-2.6.0/gems/aws-sdk-core-3.48.6/lib/seahorse/client/plugins/response_target.rb:23:in `call'
 4: from /usr/local/rvm/gems/ruby-2.6.0/gems/aws-sdk-core-3.48.6/lib/aws-sdk-core/plugins/response_paging.rb:10:in `call'
 3: from /usr/local/rvm/gems/ruby-2.6.0/gems/aws-sdk-core-3.48.6/lib/aws-sdk-core/plugins/param_converter.rb:24:in `call'
 2: from /usr/local/rvm/gems/ruby-2.6.0/gems/aws-sdk-core-3.48.6/lib/aws-sdk-core/plugins/idempotency_token.rb:17:in `call'
 1: from /usr/local/rvm/gems/ruby-2.6.0/gems/aws-sdk-core-3.48.6/lib/aws-sdk-core/plugins/jsonvalue_converter.rb:20:in `call'
/usr/local/rvm/gems/ruby-2.6.0/gems/aws-sdk-core-3.48.6/lib/seahorse/client/plugins/raise_response_errors.rb:15:in `call': The security token included in the request is invalid. (Aws::AutoScaling::Errors::InvalidClientTokenId)
Closer... this just says that your security keys are not good:
The security token included in the request is invalid.
Are you sure your key is good? Also, you should probably remove/alter the key in your comment.
Yeah, I am sure I am using the proper keys. Is there any chance we can choose which regions it scans? We are only allowed to query two regions, us-east-1 and us-west-2, but this one starts with ap-east-1 and throws this error. If we could choose to start with us-east-1 we might avoid it.
Well, at the moment it is designed to scan all regions. I would think you would get a different error if that were the case.
Can you put your credentials into a profile and use that profile?
Then try to run...
(use spaces instead of equal signs between your arguments and values)
./retro_tag.rb --csv RetroTagging1.csv --bucket securitycloudtraileast1 --scan-profile myprofile
and also run
aws iam get-user --profile myprofile
which should return something like this
{
    "User": {
        "Path": "/",
        "UserName": "cgw-xxxx-sysops-ray.xxxxx-2222222",
        "UserId": "AIDATUN65GNT2222222",
        "Arn": "arn:aws:iam::250045311111:user/cgw-xxxxx-sysops-ray.xxxxx-2222222",
        "CreateDate": "2019-05-01T17:00:54Z"
    }
}
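On the region question above: limiting the scan to an allow-list would essentially be a filter over the full region list. A hypothetical sketch in plain Ruby (none of these names come from retro_tag.rb, and the region list is illustrative):

```ruby
# Hypothetical sketch, not actual retro_tag.rb code: intersecting the full
# region list with an allow-list yields only the regions you may query,
# so ap-east-1 would never be contacted.
ALL_REGIONS = %w[us-east-1 us-west-2 ap-east-1 eu-west-1]
allowed     = %w[us-east-1 us-west-2]

scan_regions = ALL_REGIONS & allowed
puts scan_regions.inspect
```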
Well, that error is gone now, but I have another one. When I run the code from my instance using this command,
./retro_tag.rb --csv=example.csv --bucket=s3://cloudtrailbucket --scan-access-key-id=xxxxxxx --scan-secret-access-key=xxxxxxxxxx --bucket-region=us-east-1 --ignore-cache --lambda=RetroAutoTagging --lambda-region=us-east-1
This was returned.
Total CloudTrail Events: 472
Unique CloudTrail S3 Objects: 364
Starting 3 Lambda Function threads...
✔ 0 S3 objects left to be processed by the RetroAutoTagging Lambda Function... completed in 18 seconds
And when I look at the CloudWatch events for the lambda function, it says invalid bucket name.
2019-05-09T18:20:53.990Z 2bf8994a80f81 { InvalidBucketName: The specified bucket is not valid.
at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/services/s3.js:577:35)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:683:14)
at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request. (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)
at Request. (/var/runtime/node_modules/aws-sdk/lib/request.js:685:12)
at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
message: 'The specified bucket is not valid.',
code: 'InvalidBucketName',
region: null,
time: 2019-05-09T18:20:53.990Z,
cool! The s3:// prefix isn't necessary for the --bucket argument.
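The reason the prefix breaks things is that S3 API calls expect the bare bucket name, so a URI-style value has to be normalized first. A hypothetical helper illustrating this (not the script's actual code):

```ruby
# Hypothetical helper, not part of retro_tag.rb: strips an optional s3://
# scheme and any trailing key path, leaving only the bare bucket name that
# S3 API calls expect.
def normalize_bucket(arg)
  arg.sub(%r{\As3://}, '').split('/').first
end

puts normalize_bucket('s3://cloudtrailbucket')  # bare name either way
puts normalize_bucket('cloudtrailbucket')
```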
It seems like this worked...