pipeline-aws-plugin's Introduction

Features

This plugin adds Jenkins pipeline steps to interact with the AWS API.

See the changelog for release information.

Primary/Agent setups

This plugin is not optimized for setups with a primary and multiple agents. Only steps that touch the workspace are executed on the agents; everything else is executed on the primary.

For the best experience, make sure that the primary and the agents have the same IAM permissions and networking capabilities.

Retrieve credentials from node

By default, credentials lookup is done on the master node for all steps. To enable credentials lookup on the current node, enable Retrieve credentials from node in Jenkins global configuration. This is globally applicable and restricts all access to the master's credentials.

Usage / Steps

withAWS

The withAWS step provides authorization for the nested steps. You can provide region and profile information or let Jenkins assume a role in another or the same AWS account. You can mix all parameters in one withAWS block.

Set region information (note that region and endpointUrl are mutually exclusive):

withAWS(region:'eu-west-1') {
    // do something
}

Use provided endpointUrl (endpointUrl is optional, however, region and endpointUrl are mutually exclusive):

withAWS(endpointUrl:'https://minio.mycompany.com',credentials:'nameOfSystemCredentials',federatedUserId:"${submitter}@${releaseVersion}") {
    // do something
}

Use Jenkins UsernamePassword credentials information (Username: AccessKeyId, Password: SecretAccessKey):

withAWS(credentials:'IDofSystemCredentials') {
    // do something
}

Use Jenkins AWS credentials information (AWS Access Key: AccessKeyId, AWS Secret Key: SecretAccessKey):

withAWS(credentials:'IDofAwsCredentials') {
    // do something
}

Use profile information from ~/.aws/config:

withAWS(profile:'myProfile') {
    // do something
}

Assume role information (roleAccount is optional and defaults to the current account; externalId, roleSessionName, and policy are optional; duration is optional and specifies the maximum amount of time in seconds the session may persist for, defaulting to 3600):

withAWS(role:'admin', roleAccount:'123456789012', externalId: 'my-external-id', policy: '{"Version":"2012-10-17","Statement":[{"Sid":"Stmt1","Effect":"Deny","Action":"s3:DeleteObject","Resource":"*"}]}', duration: 3600, roleSessionName: 'my-custom-session-name') {
    // do something
}

Assume federated user id information (federatedUserId is optional; if specified, it generates a set of temporary credentials and allows you to push a federated user id into CloudTrail for auditing. duration is optional; if specified, it is the maximum amount of time in seconds the session may persist for, defaulting to 3600):

withAWS(region:'eu-central-1',credentials:'nameOfSystemCredentials',federatedUserId:"${submitter}@${releaseVersion}", duration: 3600) {
    // do something
}

Authentication with a SAML assertion (fetched from your company IdP) by assuming a role

withAWS(role: 'myRole', roleAccount: '123456789', principalArn: 'arn:aws:iam::123456789:saml-provider/test', samlAssertion: 'base64SAML', region:'eu-west-1') {
  // do something
}

Authentication by retrieving credentials from the node in scope

node('myNode') { // Credentials will be fetched from this node
  withAWS(role: 'myRole', roleAccount: '123456789', region:'eu-west-1', useNode: true) {
    // do something
  }
}

When you use Jenkins Declarative Pipelines you can also use withAWS in an options block:

options {
	withAWS(profile:'myProfile')
}
stages {
	...
}
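
For reference, a minimal complete declarative pipeline using this options block might look like the following (a sketch; the profile name and the upload details are placeholders):

pipeline {
    agent any
    options {
        withAWS(profile:'myProfile')
    }
    stages {
        stage('Upload') {
            steps {
                s3Upload(file:'file.txt', bucket:'my-bucket', path:'path/to/target/file.txt')
            }
        }
    }
}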

awsIdentity

Print current AWS identity information to the log.

The step returns an object with the following fields:

  • account - The AWS account ID number of the account that owns or contains the calling entity
  • user - The unique identifier of the calling entity
  • arn - The AWS ARN associated with the calling entity

def identity = awsIdentity()
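
The returned fields can be used directly in later steps, for example (a small sketch):

def identity = awsIdentity()
echo "Running as ${identity.arn} in account ${identity.account}"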

cfInvalidate

Invalidate given paths in CloudFront distribution.

cfInvalidate(distribution:'someDistributionId', paths:['/*'])
cfInvalidate(distribution:'someDistributionId', paths:['/*'], waitForCompletion: true)

S3 Steps

All s3* steps take an optional pathStyleAccessEnabled and payloadSigningEnabled boolean parameter.

s3Upload(pathStyleAccessEnabled: true, payloadSigningEnabled: true, file:'file.txt', bucket:'my-bucket', path:'path/to/target/file.txt')
s3Copy(pathStyleAccessEnabled: true, fromBucket:'my-bucket', fromPath:'path/to/source/file.txt', toBucket:'other-bucket', toPath:'path/to/destination/file.txt')
s3Delete(pathStyleAccessEnabled: true, bucket:'my-bucket', path:'path/to/source/file.txt')
s3Download(pathStyleAccessEnabled: true, file:'file.txt', bucket:'my-bucket', path:'path/to/source/file.txt', force:true)
exists = s3DoesObjectExist(pathStyleAccessEnabled: true, bucket:'my-bucket', path:'path/to/source/file.txt')
files = s3FindFiles(pathStyleAccessEnabled: true, bucket:'my-bucket')

s3Upload

Upload a file/folder from the workspace (or a String) to an S3 bucket. If the file parameter denotes a directory, the complete directory including all subfolders will be uploaded.

s3Upload(file:'file.txt', bucket:'my-bucket', path:'path/to/target/file.txt')
s3Upload(file:'someFolder', bucket:'my-bucket', path:'path/to/targetFolder/')

Another way to use it is with include/exclude patterns which are applied in the specified subdirectory (workingDir). The option accepts a comma-separated list of patterns.

s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*', workingDir:'dist', excludePathPattern:'**/*.svg,**/*.jpg')

Specific user metadata can be added to uploaded files

s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*.svg', workingDir:'dist', metadatas:['Key:SomeValue','Another:Value'])

Specific cache-control settings can be added to uploaded files

s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*.svg', workingDir:'dist', cacheControl:'public,max-age=31536000')

Specific content encoding can be added to uploaded files

s3Upload(file:'file.txt', bucket:'my-bucket', contentEncoding: 'gzip')

Specific content type can be added to uploaded files

s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*.ttf', workingDir:'dist', contentType:'application/x-font-ttf', contentDisposition:'attachment')

Canned ACLs can be added to upload requests.

s3Upload(file:'file.txt', bucket:'my-bucket', path:'path/to/target/file.txt', acl:'PublicRead')
s3Upload(file:'someFolder', bucket:'my-bucket', path:'path/to/targetFolder/', acl:'BucketOwnerFullControl')

A Server Side Encryption Algorithm can be added to upload requests.

s3Upload(file:'file.txt', bucket:'my-bucket', path:'path/to/target/file.txt', sseAlgorithm:'AES256')

A KMS alias or KMS id can be used to encrypt the uploaded file or directory at rest.

s3Upload(file: 'foo.txt', bucket: 'my-bucket', path: 'path/to/target/file.txt', kmsId: 'alias/foo')
s3Upload(file: 'foo.txt', bucket: 'my-bucket', path: 'path/to/target/file.txt', kmsId: '8e1d420d-bf94-4a15-a07a-8ad965abb30f')
s3Upload(file: 'bar-dir', bucket: 'my-bucket', path: 'path/to/target', kmsId: 'alias/bar')

A redirect location can be added to uploaded files.

s3Upload(file: 'file.txt', bucket: 'my-bucket', redirectLocation: '/redirect')

You can also create an S3 object directly from text; the object's contents are taken from the provided text argument.

s3Upload(path: 'file.txt', bucket: 'my-bucket', text: 'Some Text Content')
s3Upload(path: 'path/to/targetFolder/file.txt', bucket: 'my-bucket', text: 'Some Text Content')

Tags can be added to uploaded files.

s3Upload(file: 'file.txt', bucket: 'my-bucket', tags: '[tag1:value1, tag2:value2]')

def tags=[:]
tags["tag1"]="value1"
tags["tag2"]="value2"

s3Upload(file: 'file.txt', bucket: 'my-bucket', tags: tags.toString())

Log messages can be made less verbose. Disable verbose logging when you feel the logs are excessive, at the cost of losing visibility into which files have been uploaded to S3.

s3Upload(path: 'source/path/', bucket: 'my-bucket', verbose: false)

s3Download

Download a file/folder from S3 to the local workspace. Set optional parameter force to true to overwrite existing file in workspace. If the path ends with a / the complete virtual directory will be downloaded.

s3Download(file:'file.txt', bucket:'my-bucket', path:'path/to/source/file.txt', force:true)
s3Download(file:'targetFolder/', bucket:'my-bucket', path:'path/to/sourceFolder/', force:true)

s3Copy

Copy file between S3 buckets.

s3Copy(fromBucket:'my-bucket', fromPath:'path/to/source/file.txt', toBucket:'other-bucket', toPath:'path/to/destination/file.txt')

s3Delete

Delete a file/folder from S3. If the path ends in a "/", then the path will be interpreted to be a folder, and all of its contents will be removed.

s3Delete(bucket:'my-bucket', path:'path/to/source/file.txt')
s3Delete(bucket:'my-bucket', path:'path/to/sourceFolder/')

s3DoesObjectExist

Check if object exists in S3 bucket.

exists = s3DoesObjectExist(bucket:'my-bucket', path:'path/to/source/file.txt')
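
The returned boolean can guard later steps, for example (a sketch reusing the object from the example above):

if (s3DoesObjectExist(bucket:'my-bucket', path:'path/to/source/file.txt')) {
    s3Download(file:'file.txt', bucket:'my-bucket', path:'path/to/source/file.txt', force:true)
}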

s3FindFiles

This provides a way to query the files/folders in the S3 bucket, analogous to the findFiles step provided by "pipeline-utility-steps-plugin". If specified, the path limits the scope of the operation to that folder only. The glob parameter tells s3FindFiles what to look for. This can be a file name, a full path to a file, or a standard glob ("*", "*.ext", "path/**/file.ext", etc.).

If you do not specify path, then it will default to the root of the bucket. The path is assumed to be a folder; you do not need to end it with a "/", but it is okay if you do. The path property of the results will be relative to this value.

This works by enumerating every file/folder in the S3 bucket under path and then performing glob matching. When possible, you should use path to limit the search space for efficiency purposes.

If you do not specify glob, then it will default to "*".

By default, this will return both files and folders. To only return files, set the onlyFiles parameter to true.

files = s3FindFiles(bucket:'my-bucket')
files = s3FindFiles(bucket:'my-bucket', glob:'path/to/targetFolder/file.ext')
files = s3FindFiles(bucket:'my-bucket', path:'path/to/targetFolder/', glob:'file.ext')
files = s3FindFiles(bucket:'my-bucket', path:'path/to/targetFolder/', glob:'*.ext')
files = s3FindFiles(bucket:'my-bucket', path:'path/', glob:'**/file.ext')

s3FindFiles returns an array of FileWrapper objects identical to those returned by findFiles.

Each FileWrapper object has the following properties:

  • name: the filename portion of the path (for "path/to/my/file.ext", this would be "file.ext")
  • path: the full path of the file, relative to the path specified (for path="path/to/", this property of the file "path/to/my/file.ext" would be "my/file.ext")
  • directory: true if this is a directory; false otherwise
  • length: the length of the file (this is always "0" for directories)
  • lastModified: the last modification timestamp, in milliseconds since the Unix epoch (this is always "0" for directories)

When used in a string context, a FileWrapper object returns the value of its path property.
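
The results can be iterated to act on each entry, for example (a sketch based on the properties listed above):

files = s3FindFiles(bucket:'my-bucket', path:'path/to/targetFolder/', glob:'**/*.ext')
for (file in files) {
    echo "${file.path} - ${file.length} bytes, directory: ${file.directory}"
}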

s3PresignURL

Presigns the given bucket/key and returns a URL. Defaults to a duration of 1 minute and the GET method.

def url = s3PresignURL(bucket: 'mybucket', key: 'mykey')

The duration can be overridden:

def url = s3PresignURL(bucket: 'mybucket', key: 'mykey', durationInSeconds: 300) //5 minutes

The method can also be overridden:

def url = s3PresignURL(bucket: 'mybucket', key: 'mykey', httpMethod: 'POST')
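
The returned URL can be handed to other steps or tools, for example downloading the object with curl (a sketch, assuming curl is available on the agent):

def url = s3PresignURL(bucket: 'mybucket', key: 'mykey', durationInSeconds: 300)
sh "curl -o artifact.bin '${url}'"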

cfnValidate

Validates the given CloudFormation template.

def response = cfnValidate(file:'template.yaml')
echo "template description: ${response.description}"

cfnUpdate

Create or update the given CloudFormation stack using the given template from the workspace. You can specify an optional list of parameters, either as key/value pairs or a map. You can also specify a list keepParams of parameters that keep their previous value on stack updates.

Using timeoutInMinutes you can specify the amount of time that can pass before the stack status becomes CREATE_FAILED and the stack gets rolled back. Due to limitations in the AWS API, this only applies to stack creation.

If you have many parameters you can specify a paramsFile containing the parameters, as shown below. The format is either a standard JSON file as used with the AWS CLI or a YAML file for the cfn-params command line utility.

Additionally you can specify a list of tags that are set on the stack and all resources created by CloudFormation.

The step returns the outputs of the stack as a map. It also contains special values prefixed with jenkins:

  • jenkinsStackUpdateStatus - "true"/"false" whether the stack was modified or not

When cfnUpdate creates a stack and the creation fails, the stack is deleted instead of being left in a broken state.

To prevent running into rate limiting on the AWS API, you can change the default polling interval of 1000 ms using the parameter pollInterval. Using the value 0 disables event printing.

def outputs = cfnUpdate(stack:'my-stack', file:'template.yaml', params:['InstanceType=t2.nano'], keepParams:['Version'], timeoutInMinutes:10, tags:['TagName=Value'], notificationARNs:['arn:aws:sns:us-east-1:993852309656:topic'], pollInterval:1000)

or the parameters can be specified as a map:

def outputs = cfnUpdate(stack:'my-stack', file:'template.yaml', params:['InstanceType': 't2.nano'], keepParams:['Version'], timeoutInMinutes:10, tags:['TagName=Value'], notificationARNs:['arn:aws:sns:us-east-1:993852309656:topic'], pollInterval:1000)
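
If the parameters live in a file, paramsFile can be passed instead of params (a sketch assuming a JSON parameter file at cfn/params.json in the workspace):

def outputs = cfnUpdate(stack:'my-stack', file:'template.yaml', paramsFile:'cfn/params.json', timeoutInMinutes:10)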

Alternatively, you can specify a URL to a template on S3 (you'll need this if you hit the 51,200 byte limit on the template body):

def outputs = cfnUpdate(stack:'my-stack', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml')

By default, the cfnUpdate step creates a new stack if the specified stack does not exist. This behaviour can be overridden by passing create: 'false' as a parameter:

def outputs = cfnUpdate(stack:'my-stack', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', create: 'false')

In the above example, if my-stack already exists it is updated; if it does not exist, no action is performed.

If CloudFormation needs to use a different IAM role for creating the stack than the one currently in effect, you can pass the complete role ARN as the roleArn parameter, e.g.:

def outputs = cfnUpdate(stack:'my-stack', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', roleArn: 'arn:aws:iam::123456789012:role/S3Access')

It's possible to override the behaviour when stack creation fails by using onFailure. Allowed values are DO_NOTHING, ROLLBACK, or DELETE. Because the normal default of ROLLBACK behaves strangely in a CI/CD environment, cfnUpdate uses DELETE as its default.

def outputs = cfnUpdate(stack:'my-stack', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', onFailure:'DELETE')

You can specify rollback triggers for the stack update:

def outputs = cfnUpdate(stack:'my-stack', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', rollbackTimeoutInMinutes: 10, rollbackTriggers: ['AWS::CloudWatch::Alarm=arn:of:cloudwatch:alarm'])

When creating a stack, you can activate termination protection by using the enableTerminationProtection field:

def outputs = cfnUpdate(stack:'my-stack', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', enableTerminationProtection: true)

Note: When creating a stack, either file or url is required. When updating it, omitting both parameters will keep the stack's current template.

cfnDelete

Remove the given stack from CloudFormation.

To prevent running into rate limiting on the AWS API, you can change the default polling interval of 1000 ms using the parameter pollInterval. Using the value 0 disables event printing.

Note: When deleting a stack, only the stack parameter is required.

cfnDelete(stack:'my-stack', pollInterval:1000, retainResources: ['mylogicalid'], roleArn: 'my-arn', clientRequestToken: 'my-request-token')

cfnDescribe

The step returns the outputs of the stack as a map.

def outputs = cfnDescribe(stack:'my-stack')

cfnExports

The step returns the global CloudFormation exports as a map.

def globalExports = cfnExports()
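
Individual exports can then be read from the returned map by their export name (a sketch with a hypothetical export name):

def globalExports = cfnExports()
echo "VPC id: ${globalExports['my-vpc-export']}"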

cfnCreateChangeSet

Create a change set to update the given CloudFormation stack using the given template from the workspace. You can specify an optional list of parameters, either as key/value pairs or a map. You can also specify a list keepParams of parameters that keep their previous value on stack updates.

If you have many parameters you can specify a paramsFile containing the parameters. The format is either a standard JSON file as used with the AWS CLI or a YAML file for the cfn-params command line utility.

Additionally you can specify a list of tags that are set on the stack and all resources created by CloudFormation.

The step returns the outputs of the stack as a map. It also contains special values prefixed with jenkins:

  • jenkinsStackUpdateStatus - "true"/"false" whether the stack was modified or not

To prevent running into rate limiting on the AWS API, you can change the default polling interval of 1000 ms using the parameter pollInterval. Using the value 0 disables event printing.

cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', file:'template.yaml', params:['InstanceType=t2.nano'], keepParams:['Version'], tags:['TagName=Value'], notificationARNs:['arn:aws:sns:us-east-1:993852309656:topic'], pollInterval:1000)

or the parameters can be specified as a map:

cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', file:'template.yaml', params:['InstanceType': 't2.nano'], keepParams:['Version'], tags:['TagName=Value'], notificationARNs:['arn:aws:sns:us-east-1:993852309656:topic'], pollInterval:1000)

Alternatively, you can specify a URL to a template on S3 (you'll need this if you hit the 51,200 byte limit on the template body):

cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml')

or specify a raw template:

cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', template: 'my template body')

By default, the cfnCreateChangeSet step creates a change set for creating a new stack if the specified stack does not exist. This behaviour can be overridden by passing create: 'false' as a parameter:

cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', create: 'false')

In the above example, if my-stack already exists a change set is created for it; if it does not exist, no action is performed.

If CloudFormation needs to use a different IAM role for creating or updating the stack than the one currently in effect, you can pass the complete role ARN as the roleArn parameter, e.g.:

cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', roleArn: 'arn:aws:iam::123456789012:role/S3Access')

You can specify rollback triggers for the stack update:

cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', rollbackTimeoutInMinutes: 10, rollbackTriggers: ['AWS::CloudWatch::Alarm=arn:of:cloudwatch:alarm'])

Note: When creating a change set for a non-existing stack, either file or url is required. When updating it, omitting both parameters will keep the stack's current template.

cfnExecuteChangeSet

Execute a previously created change set to create or update a CloudFormation stack. All the necessary information, like parameters and tags, was provided earlier when the change set was created.

To prevent running into rate limiting on the AWS API, you can change the default polling interval of 1000 ms using the parameter pollInterval. Using the value 0 disables event printing.

def outputs = cfnExecuteChangeSet(stack:'my-stack', changeSet:'my-change-set', pollInterval:1000)
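
A typical review workflow first creates the change set, optionally inspects it, and then executes it (a sketch reusing the stack and template from the examples above):

cfnCreateChangeSet(stack:'my-stack', changeSet:'my-change-set', file:'template.yaml')
// review the change set in the AWS console or with other tooling, then:
def outputs = cfnExecuteChangeSet(stack:'my-stack', changeSet:'my-change-set')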

cfnUpdateStackSet

Create or update a stack set, with options similar to cfnUpdate. The step monitors the resulting StackSet operation and fails the build step if the operation does not complete successfully.

To prevent running into rate limiting on the AWS API, you can change the default polling interval of 1000 ms using the parameter pollInterval. Using the value 0 disables event printing.

  cfnUpdateStackSet(stackSet:'myStackSet', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml')

To set a custom administrator role ARN:

  cfnUpdateStackSet(stackSet:'myStackSet', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', administratorRoleArn: 'mycustomarn')

To set operation preferences:

  cfnUpdateStackSet(stackSet:'myStackSet', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', operationPreferences: [failureToleranceCount: 5])

When the stack set gets really big, the recommendation from AWS is to batch the update requests. This option is not part of the AWS API but is provided by the plugin to facilitate updating a large stack set. To automatically batch by region (find all stack instances, group them by region, and submit each region separately):

  cfnUpdateStackSet(stackSet:'myStackSet', url:'https://s3.amazonaws.com/my-templates-bucket/template.yaml', batchingOptions: [regions: true])

cfnDeleteStackSet

Deletes a stack set.

To prevent running into rate limiting on the AWS API, you can change the default polling interval of 1000 ms using the parameter pollInterval. Using the value 0 disables event printing.

  cfnDeleteStackSet(stackSet:'myStackSet')

snsPublish

Publishes a message to SNS. Note that the optional parameter messageAttributes assumes string-only values.

snsPublish(topicArn:'arn:aws:sns:us-east-1:123456789012:MyNewTopic', subject:'my subject', message:'this is your message', messageAttributes: ['k1': 'v1', 'k2': 'v2'])

deployAPI

Deploys an API Gateway definition to a stage.

deployAPI(api:'myApiId', stage:'Prod')

Additionally you can specify a description and stage variables.

deployAPI(api:'myApiId', stage:'Prod', description:"Build: ${env.BUILD_ID}", variables:['key=value'])

createDeployment

Deploys an application revision through the specified deployment group (AWS CodeDeploy).

From S3 bucket:

createDeployment(
        s3Bucket: 'jenkins.bucket',
        s3Key: 'artifacts/SimpleWebApp.zip',
        s3BundleType: 'zip', // [Valid values: tar | tgz | zip | YAML | JSON]
        applicationName: 'SampleWebApp',
        deploymentGroupName: 'SampleDeploymentGroup',
        deploymentConfigName: 'CodeDeployDefault.AllAtOnce',
        description: 'Test deploy',
        waitForCompletion: 'true',
        //Optional values 
        ignoreApplicationStopFailures: 'false',
        fileExistsBehavior: 'OVERWRITE'// [Valid values: DISALLOW, OVERWRITE, RETAIN]
)

From GitHub:

createDeployment(
        gitHubRepository: 'MykhayloGnylorybov/AwsCodeDeployArtifact',
        gitHubCommitId: 'e9ee742f44c9a0f97ee3aa94593e7b6aad6e2d14',
        applicationName: 'SampleWebApp',
        deploymentGroupName: 'SampleDeploymentGroup',
        deploymentConfigName: 'CodeDeployDefault.AllAtOnce',
        description: 'Test deploy',
        waitForCompletion: 'true'
)

awaitDeploymentCompletion

Waits for a CodeDeploy deployment to complete.

The step runs within the withAWS block and requires only one parameter:

  • deploymentId (the AWS CodeDeploy deployment id: e.g. 'd-3GR0HQLDN')

Simple await:

awaitDeploymentCompletion('d-3GR0HQLDN')

Timed await:

timeout(time: 15, unit: 'MINUTES'){
    awaitDeploymentCompletion('d-3GR0HQLDN')
}
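
Since the step needs AWS credentials in scope, it is typically wrapped in withAWS (a sketch reusing the deployment id from the examples above):

withAWS(region:'eu-west-1', credentials:'nameOfSystemCredentials') {
    timeout(time: 15, unit: 'MINUTES') {
        awaitDeploymentCompletion('d-3GR0HQLDN')
    }
}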

listAWSAccounts

Retrieves the list of all AWS accounts of the organization. This step can only be run in the master account.

The step returns an array of Account objects with the following fields:

  • id - the account id
  • arn - the organizations ARN
  • name - the account name
  • safeName - the name converted to only contain lower-case, numbers and hyphens
  • status - the account status

def accounts = listAWSAccounts()

You can specify a parent id (root or organizational unit) with the optional parameter parent:

def accounts = listAWSAccounts('ou-1234-12345678')
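
The returned array can be iterated like any Groovy list, for example (a sketch using the fields listed above):

def accounts = listAWSAccounts()
for (account in accounts) {
    echo "${account.name} (${account.id}) - ${account.status}"
}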

updateIdP

Create or update a SAML identity provider with the given metadata document.

The step returns the ARN of the created identity provider.

def idp = updateIdP(name: 'nameToCreateOrUpdate', metadata: 'pathToMetadataFile')

updateTrustPolicy

Update the assume role trust policy of the given role using the provided file.

updateTrustPolicy(roleName: 'SomeRole', policyFile: 'path/to/somefile.json')

setAccountAlias

Create or update the AWS account alias.

setAccountAlias(name: 'awsAlias')

ecrDeleteImages

Delete images in a repository.

ecrDeleteImages(repositoryName: 'foo', imageIds: ['imageDigest': 'digest', 'imageTag': 'tag'])

ecrListImages

List images in a repository.

def images = ecrListImages(repositoryName: 'foo')

ecrLogin

Create a login string to authenticate Docker with ECR.

The step returns the shell command to perform the login.

def login = ecrLogin()

For older versions of docker that need the email parameter use:

def login = ecrLogin(email:true)

It's also possible to specify the AWS accounts to perform the ECR login against:

def login = ecrLogin(registryIds: ['123456789', '987654321'])
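
Because the step only returns the login command, it is usually executed via the sh step (a sketch, assuming Docker is installed on the agent):

def login = ecrLogin()
sh login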

ecrSetRepositoryPolicy

Sets the JSON policy document containing ECR permissions.

  • registryId - The AWS account ID associated with the registry that contains the repository.
  • repositoryName - The name of the repository to receive the policy.
  • policyText - The JSON repository policy text to apply to the repository. For more information, see Amazon ECR Repository Policy Examples in the Amazon Elastic Container Registry User Guide.

The step returns the object returned by the command.

  • Note - make sure you set the correct region in the credentials in order to find the repository

def result = ecrSetRepositoryPolicy(registryId: 'my-registryId',
                                     repositoryName: 'my-repositoryName',
                                     policyText: 'json-policyText'
)
def policyFile ="${env.WORKSPACE}/policyText.json"
def policyText = readFile file: policyFile
def result = ecrSetRepositoryPolicy(registryId: 'my-registryId',
                                     repositoryName: 'my-repositoryName',
                                     policyText: policyText
)

invokeLambda

Invoke a Lambda function.

The step returns the object returned by the Lambda.

def result = invokeLambda(
	functionName: 'myLambdaFunction',
	payload: [ "key": "value", "anotherkey" : [ "another", "value"] ]
)

Alternatively payload and return value can be Strings instead of Objects:

String result = invokeLambda(
	functionName: 'myLambdaFunction',
	payloadAsString: '{"key": "value"}',
	returnValueAsString: true
)

lambdaVersionCleanup

Cleans up Lambda function versions older than the daysAgo flag. The main use case is tooling like the AWS Serverless Application Model: it creates Lambda functions but marks them with DeletionPolicy: Retain, so old versions are never deleted. Over time, these unused versions accumulate and the account/region might hit the limit for maximum storage of Lambda functions.

lambdaVersionCleanup(
	functionName: 'myLambdaFunction',
	daysAgo: 14
)

To discover and delete all old versions of functions created by a AWS CloudFormation stack:

lambdaVersionCleanup(
	stackName: 'myStack',
	daysAgo: 14
)

ec2ShareAmi

Share an AMI with one or more accounts.

ec2ShareAmi(
    amiId: 'ami-23842',
    accountIds: [ "0123456789", "1234567890" ]
)

elbRegisterInstance

Registers a target to a Target Group.

elbRegisterInstance(
    targetGroupARN: 'arn:aws:elasticloadbalancing:us-west-2:123456789:targetgroup/my-load-balancer/123456789',
    instanceID: 'i-myid',
    port: 8080
)

elbDeregisterInstance

Deregisters a target from a Target Group.

elbDeregisterInstance(
    targetGroupARN: 'arn:aws:elasticloadbalancing:us-west-2:123456789:targetgroup/my-load-balancer/123456789',
    instanceID: 'i-myid',
    port: 8080
)

elbIsInstanceRegistered

Check if the target is registered and healthy.

The step returns true or false.

elbIsInstanceRegistered(
    targetGroupARN: 'arn:aws:elasticloadbalancing:us-west-2:123456789:targetgroup/my-load-balancer/123456789',
    instanceID: 'i-myid',
    port: 8080
)
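
Because the step returns true or false, it can be combined with the standard waitUntil step to block until the target is healthy (a sketch reusing the values from the example above):

waitUntil {
    elbIsInstanceRegistered(
        targetGroupARN: 'arn:aws:elasticloadbalancing:us-west-2:123456789:targetgroup/my-load-balancer/123456789',
        instanceID: 'i-myid',
        port: 8080
    )
}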

elbIsInstanceDeregistered

Check if the target has been completely removed from the Target Group.

The step returns true or false.

elbIsInstanceDeregistered(
    targetGroupARN: 'arn:aws:elasticloadbalancing:us-west-2:123456789:targetgroup/my-load-balancer/123456789',
    instanceID: 'i-myid',
    port: 8080
)

ebCreateApplication

Creates a new Elastic Beanstalk application.

Arguments:

  • applicationName (Required) - Name of the application to be created
  • description - Descriptive text to add to the application

AWS reference

ebCreateApplication(
    applicationName: "my-application",
    description: "My first application"
)

ebCreateApplicationVersion

Creates a new deployable version for an existing Elastic Beanstalk application. The version is created from files uploaded to an S3 bucket, which are used to build a deployable version of the application. The version label can then be used to deploy a new environment.

Arguments:

  • applicationName (Required) - Name of the application where the new version should be created
  • versionLabel (Required) - Name of the version to be created
  • s3Bucket (Required) - Name of the S3 Bucket where the source code / executable of this version exists
  • s3Key: (Required) - Path in the S3 Bucket where the source code / executable of this version exists
  • description - Descriptive text of the application version

AWS reference

ebCreateApplicationVersion(
    applicationName: "my-application",
    versionLabel: "my-application-1.0.0",
    s3Bucket: "my-bucket",
    s3Key: "my-application.jar",
    description: "My first application version"
)

ebCreateConfigurationTemplate

Creates a new configuration template for an existing Elastic Beanstalk application. The template can be based on an existing environment, a solution stack, or another configuration template, and can later be used when creating environments.

Arguments:

  • applicationName (Required) - Name of the application where the new configuration template should be created
  • templateName (Required) - Name of the configuration template to be created
  • environmentId - Id of the environment to use as a source for the new configuration template. Required if no solutionStackName or sourceConfiguration are provided
  • solutionStackName - Solution stack string for the new configuration template. List of supported platforms can be seen in AWS. Required if no environmentId or sourceConfiguration are provided
  • sourceConfigurationApplication - Name of the application that has the source configuration to copy over. Should be used in conjunction with sourceConfigurationTemplate. Required if no environmentId or solutionStackName are provided
  • sourceConfigurationTemplate - Name of the configuration to be used as a source for the new configuration template. Should be used in conjunction with sourceConfigurationApplication. Required if no environmentId or solutionStackName are provided
  • description - Descriptive text of the application configuration template

AWS reference

// Create configuration template based on existing environment
ebCreateConfigurationTemplate(
    applicationName: "my-application",
    templateName: "my-application-production-template",
    environmentId: "my-application-production",
    description: "Configuration template for the production environment of my application"
)

// Create configuration template based on a solution stack
ebCreateConfigurationTemplate(
    applicationName: "my-application",
    templateName: "my-application-production-template",
    solutionStackName: "64bit Amazon Linux 2018.03 v3.3.9 running Tomcat 8.5 Java 8",
    description: "Configuration template for the production environment of my application"
)

// Create configuration template based on an existing configuration template
ebCreateConfigurationTemplate(
    applicationName: "my-application",
    templateName: "my-application-production-template",
    sourceConfigurationApplication: "my-other-application",
    sourceConfigurationTemplate: "my-other-application-production-template",
    description: "Configuration template for the production environment of my application"
)

ebCreateEnvironment

Creates a new environment for an existing Elastic Beanstalk application. This environment can be created based on existing configuration templates and application versions for that application.

Arguments:

  • applicationName (Required) - Name of the application where the new environment should be created
  • environmentName (Required) - Name of the environment to be created
  • templateName - Name of the configuration template to use with the environment to be created. Mutually exclusive with solutionStackName
  • solutionStackName - Solution stack string for the new environment. List of supported platforms can be seen in AWS. Mutually exclusive with templateName
  • versionLabel - Name of the application version to be deployed in the new environment
  • updateOnExisting - If set to false the command will throw an exception if the environment already exists. Otherwise, in case the environment already exists, it will be updated. Defaults to true
  • description - Descriptive text of the environment

AWS reference

// Create environment from existing configuration template
ebCreateEnvironment(
    applicationName: "my-application",
    environmentName: "production",
    templateName: "my-application-production-template",
    versionLabel: "my-application-1.0.0",
    description: "Production environment of my application"
)

// Create environment with no configuration template, using a Supported Platform string
ebCreateEnvironment(
    applicationName: "my-application",
    environmentName: "production",
    solutionStackName: "64bit Amazon Linux 2018.03 v3.3.9 running Tomcat 8.5 Java 8",
    versionLabel: "my-application-1.0.0",
    description: "Production environment of my application"
)

ebSwapEnvironmentCNAMEs

Swaps the CNAMEs of the environments. This is useful for Blue-Green deployments.

Arguments:

  • sourceEnvironmentId - Id of the source environment. Should be used with destinationEnvironmentId
  • sourceEnvironmentName - Name of the source environment. Should be used with destinationEnvironmentName
  • sourceEnvironmentCNAME - CNAME of the source environment. If provided, it will be used to lookup the id and name of the source environment.
  • destinationEnvironmentId - Id of the destination environment. Should be used with sourceEnvironmentId
  • destinationEnvironmentName - Name of the destination environment. Should be used with sourceEnvironmentName
  • destinationEnvironmentCNAME - CNAME of the destination environment. If provided, it will be used to lookup the id and name of the destination environment.

AWS reference

// Swap CNAMEs using Ids
ebSwapEnvironmentCNAMEs(
    sourceEnvironmentId: "e-65abcdefgh",
    destinationEnvironmentId: "e-66zxcvbdg"
)

// Swap CNAMEs using the environment names
ebSwapEnvironmentCNAMEs(
    sourceEnvironmentName: "production",
    destinationEnvironmentName: "production-2"
)

// Swap CNAMEs using the source environment name and destination environment CNAME
ebSwapEnvironmentCNAMEs(
        sourceEnvironmentName: "green",
        destinationEnvironmentCNAME: "production.eu-west-1.elasticbeanstalk.com"
)

ebWaitOnEnvironmentStatus

Waits for environment to be in the specified status.

This can be used to ensure that the environment is ready to accept commands, like an update or a termination command. Be aware that this does not guarantee that the application has finished starting up. If an application has a long startup time, the environment will be ready for new commands before the application has finished booting.

Arguments:

  • applicationName - Name of the application of that environment
  • environmentName - Name of the environment
  • status - Status to wait for. Valid values: Launching | Updating | Ready | Terminating | Terminated. Defaults to Ready

// Wait for environment to be ready for new commands
ebWaitOnEnvironmentStatus(
    applicationName: "my-application",
    environmentName: "production"
)

// Wait for environment to be terminated
ebWaitOnEnvironmentStatus(
    applicationName: "my-application",
    environmentName: "temporary",
    status: "Terminated"
)

ebWaitOnEnvironmentHealth

Waits for environment to reach the desired health status, and remain there for a minimum amount of time.

This can be used to ensure that the environment has finished the startup process, and that the web application is ready and available.

Arguments:

  • applicationName (Required) - Name of the application of that environment
  • environmentName (Required) - Name of the environment
  • health - Health status to wait for. Valid values: Green | Yellow | Red | Grey. Defaults to Green
  • stabilityThreshold - Amount of time (in seconds) to wait before considering the status stable. Can be disabled by setting it to 0. Defaults to 60

// Wait for environment health to be green for at least 1 minute
ebWaitOnEnvironmentHealth(
    applicationName: "my-application",
    environmentName: "production"
)

// Detect immediately if environment becomes red
ebWaitOnEnvironmentHealth(
    applicationName: "my-application",
    environmentName: "temporary",
    health: "Red",
    stabilityThreshold: 0
)

Changelog

current master

1.44

  • Fix global configuration naming for JCasC. Please note that this is a breaking change if JCasC is defined. This can be fixed by renaming pluginImpl --> pipelineStepsAWS.
  • Fix Elastic Beanstalk client creation bug that ignored provided configurations in the withAWSStep
  • Fix upload tags if file is a directory
  • Add CNAME parameters to Elastic Beanstalk ebSwapEnvironmentCNAMEs command that lookup the required id and name params

1.43

  • Add Elastic Beanstalk steps (ebCreateApplication, ebCreateApplicationVersion, ebCreateConfigurationTemplate, ebCreateEnvironment, ebSwapEnvironmentCNAMEs, ebWaitOnEnvironmentStatus, ebWaitOnEnvironmentHealth)
  • Fix documentation for lambdaVersionCleanup
  • Fix wrong partition detection when assuming role
  • Fix resource listing for lambdaVersionCleanup when using a cloudformation stack with lots of resources
  • Fix issues around S3UploadFile with text string argument
  • Fix cfnExecuteChangeSet to correctly handle no resource change, but updates to outputs (#210)

1.42

1.41

  • Add batching support for cfnUpdateStackSet
  • Retry stack set deployments on LimitExceededException when there are too many StackSet operations occurring.

1.40

  • add registryIds argument to ecrLogin
  • fix CloudFormation CreateChangeSet for a stack with IN_REVIEW state
  • Add lambdaCleanupVersions
  • Add ecrSetRepositoryPolicy

1.39

  • add notificationARNs argument to cfnUpdate and cfnUpdateStackSet
  • Handle Stopped status for CodeDeployment deployments

1.38

  • Add ecrListImages
  • Add ecrDeleteImages
  • Fix instances of TransferManager from the AWS SDK never being closed properly
  • add s3DoesObjectExist step

1.37

  • add parent argument to listAWSAccounts
  • Add Xerces dependency to fix #117
  • Add ability to upload a String to an S3 object by adding text option to s3Upload
  • Add redirect location option to s3Upload
  • Add support for SessionToken when using iamMfaToken #170

1.36

  • add jenkinsStackUpdateStatus to stack outputs. Specifies if stack was modified
  • Increase AWS SDK retry count from default (3 retries) to 10 retries
  • Add CAPABILITY_AUTO_EXPAND to Cloudformation Stacks and Stacksets

1.35

  • fixing regression that region was now mandatory on withAWS
  • fix s3Upload step doesn't allow to set object metadata with values containing colon character (#141)

1.34 (use >= 1.35!!)

  • add support for assume role with SAML assertion authentication (#140)

1.33

  • fix timeout settings for cfnExecuteChangeSet (#132)
  • fixed tagsFile parameter being ignored when the tags parameter equals null.
  • fixed error string matching for empty change set in cloudformation
  • automatically derive partition from region (#137)

1.32

  • add paging for listAWSAccounts (#128)
  • retry stackset update on StaleRequestException
  • add support for OperationPreferences in cfnUpdateStackSet

1.31

  • handle throttles from cloudformation stackset operations
  • fixed regression in cfnUpdate

1.30 (use >= 1.31!!)

  • allow the customization of setting a roleSessionName
  • content encoding can be specified in s3Upload step
  • allow configuration of cloudformation stack timeout

1.29

  • fix issues with stack timeouts

1.28

  • use SynchronousNonBlockingStepExecution for long running AWS steps to allow the pipeline step to be aborted
  • use custom polling strategy for cloudformation waiters to speed up pipeline feedback from cloudformation changes
  • add support for tagsFile in cfnUpdate, cfnCreateChangeSet, cfnUpdateStackSet
  • add administratorRoleArn to cfnUpdateStackSet

1.27

  • add rollback configuration to cfnUpdate
  • add enableTerminationProtection to cfnUpdate
  • add retries around cfnUpdateStackSet when stack set is currently busy
  • add s3Copy step
  • allow upload of single files to bucket root

1.26

  • add duration to withAWS
  • add sseAlgorithm to s3Upload
  • add messageAttributes in snsPublish
  • add ability to utilize AWS Credentials Plugin
  • add iamMfaToken to withAWS step

1.25

  • Return ValidateTemplate response on cfnValidate
  • Add s3PresignURL
  • use SynchronousNonBlockingStepExecution for some steps for better error handling
  • allow s3Delete to empty bucket (#63)
  • set minimal Jenkins version to 2.60.3 and switch to Java 8
  • fix cfnExecuteChange step (#67)

1.24

  • Do not fail job on empty change set creation
  • Add support for maps with cloudformation parameters.
  • Allow cfnCreateStackSet, cfnUpdate, cfnCreateChangeSet to take a raw (string) template
  • add ec2ShareAmi step

1.23

  • add updateTrustPolicy step (#48)
  • fix NPE in ProxyConfiguration (#51)
  • fix strange upload behavior when uploading file to path (#53)
  • add support for Stacksets
  • return change set from step

1.22

  • Add kmsId parameter to s3Upload.
  • Fix more characters in RoleSessionName
  • Allow upload of multiple files to bucket root (#41)
  • Use DELETE method for failed stack creation. (Changed behavior)
  • Use Jenkins proxy config when available
  • retrieve all CloudFormation exports (#42)
  • s3Upload returns the S3 URL of the target

1.21

  • Fix: s3Upload did not work in Jenkins 2.102+ (#JENKINS-49025)
  • Fix: RoleSessionName (slashes in buildNumber) in withAWS step for assume role. (#JENKINS-45807)
  • Doc: Clarify usage of metadata

1.20

  • Fix: setAccountAlias broken during code cleanup

1.19 (use >= 1.20!!)

  • Fix: RoleSessionName (decoding job name HTML url encoding) in withAWS step for assume role.
  • Add onFailure option when creating a stack to allow changed behaviour.
  • Add the possibility to define specific content-type for s3Upload step.
  • Support roleArns with paths
  • add setAccountAlias step

1.18

  • Fixed regression added by #27 (#JENKINS-47912)

1.17 (use >= 1.18!!)

  • Add policy for withAWS support - allows an additional policy to be combined with the policy associated with the assumed role.
  • Add cfnCreateChangeSet step
  • Add cfnExecuteChangeSet step
  • Add endpoint-url for withAWS support - allows configuring a non-AWS endpoint for internally-hosted clouds.
  • Add support for String payload and return value in invokeLambda step
  • Support additional S3 options: pathStyleAccessEnabled and payloadSigningEnabled
  • Update AWS SDK to 1.11.221
  • Fix: return value of invokeLambda is now serializable

1.16

  • Add federatedUserId for withAWS support - generates temporary aws credentials for federated user which gets logged in CloudTrail
  • Add return value to awsIdentity step
  • Add ecrLogin step
  • Add invokeLambda step
  • Add cacheControl to s3Upload step

1.15

  • Add the following options to S3Upload : workingDir, includePathPattern, excludePathPattern, metadatas and acl

1.14

  • fixes JENKINS-45964: Assuming Role does not work in AWS-China
  • Allow opt out for by-default stack creation with cfnUpdate
  • roleArn parameter support for cfnUpdate
  • Fix: Rendering the paths for S3* steps manually (Windows)
  • fixes JENKINS-46247: Fix credentials scope in withAWS step and add a credentials dropdown
  • add safeName to listAWSAccounts step

1.13

  • Add s3FindFiles step
  • add updateIdP step
  • Fix creation of RoleSessionName
  • Fix bug when missing DescribeStacks permission

1.12

  • Make polling interval for CFN events configurable #JENKINS-45348
  • Add awaitDeploymentCompletion step
  • Add s3Delete step
  • Add listAWSAccounts step

1.11

  • Replace slash in RoleSessionName coming from Job folders

1.10

  • improve S3 download logging #JENKINS-44903
  • change RoleSessionName to include job name and build number
  • add the ability to use a URL in cfnValidate

1.9

  • add support for create stack timeout
  • add the ability to use a URL in cfnUpdate
  • add deployAPI step

1.8

  • add support for externalId for role changes
  • allow path to be null or empty in S3 steps

1.7

  • fix environment for withAWS step
  • add support for recursive S3 upload/download

1.6

  • fix #JENKINS-42415 causing S3 errors on slaves
  • add paramsFile support for cfnUpdate
  • allow the use of Jenkins credentials for AWS access #JENKINS-41261

1.5

  • add cfnExports step
  • add cfnValidate step
  • change how s3Upload works to use the aws client to guess the correct content type for the file.

1.4

  • add empty checks for mandatory strings
  • use latest AWS SDK
  • add support for CloudFormation stack tags

1.3

  • add support for publishing messages to SNS
  • fail step on errors during CloudFormation actions

1.2

  • add proxy support using standard environment variables
  • add cfnDescribe step to fetch stack outputs

1.1

  • fixing invalidation of CloudFront distributions
  • add output of stack creation, updates and deletes
  • Only fetch AWS environment once
  • make long-running steps async

1.0

  • first release containing multiple pipeline steps

pipeline-aws-plugin's Issues

Unable to pass variable in to folder name inside bucket

Hello,

I am trying to dynamically create a folder structure using s3upload by passing in an environment variable set earlier in my Jenkinsfile but it is not expanding the variable.

Code:

s3Upload (bucket: 'example', file: 'bundle-dist/folder/bundle.zip', path: 'Bundle/example/${env.example_variable}/bundle.zip')

I end up with (literal ${env.example_variable}):

s3://example/Bundle/example/${env.example_variable}/bundle.zip

withAWS step always uses master's instance profile security token even when running on slave.

jenkins version : 2.109
plugin version : 1.24

I found that the withAWS step always uses the master's instance profile security token when the build job is running on a slave.

here is my pipeline code

node('slave-type2') {
    stage('Post Build.') {
        sh('aws sts get-caller-identity')
        def identity = awsIdentity()
        withAWS(role:'test', roleAccount:'012345679091') {
            files = s3FindFiles(bucket:'my-bucket', glob:'test.html')
            for (file in files) {
                println file.name
            }
        }
    }
}

here is console log

[Pipeline] node
Running on ubuntu 16.04 slave-type2 (sir-xxxx) in /mnt/jenkins/workspace/media-test/_test_be_jenkins_func
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Post Build.)
[Pipeline] sh
[_test_be_jenkins_func] Running shell script
+ aws sts get-caller-identity
{
    "Account": "012345679091",
    "UserId": "XXXXXXXXXXXXXXXX:i-xxxxxxxxxxxx", 
    "Arn": "arn:aws:sts::012345679091:assumed-role/mm-jenkins-ci-slave/i-xxxxxxxxxxxx"
}
[Pipeline] awsIdentity
Current AWS identity: 012345679091 - XXXXXXXXXXXXXXXX:i-xxxxxxxxxxxx - arn:aws:sts::012345679091:assumed-role/mm-jenkins-ci-master/i-xxxxxxxxxxxx
[Pipeline] withAWS
[Pipeline] // withAWS
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
[BFA] Scanning build for known causes...
[BFA] No failure causes found
[BFA] Done. 0s
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException: User: arn:aws:sts::012345679091:assumed-role/mm-jenkins-ci-master/i-xxxxxxxxxxxxxxx is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::012345679091:role/test (Service: AWSSecurityTokenService; Status Code: 403; Error Code: AccessDenied; Request ID: xxxxxxxxxxxxxxxxxx)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
	at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.doInvoke(AWSSecurityTokenServiceClient.java:1271)
	at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1247)
	at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.executeAssumeRole(AWSSecurityTokenServiceClient.java:454)
	at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.assumeRole(AWSSecurityTokenServiceClient.java:431)
	at de.taimos.pipeline.aws.WithAWSStep$Execution.withRole(WithAWSStep.java:310)
	at de.taimos.pipeline.aws.WithAWSStep$Execution.start(WithAWSStep.java:235)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:229)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:153)
	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:108)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
	at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:157)
	at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:133)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
	at WorkflowScript.run(WorkflowScript:4)
	at ___cps.transform___(Native Method)
	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
	at com.cloudbees.groovy.cps.impl.ClosureBlock.eval(ClosureBlock.java:46)
	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:122)
	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:261)
	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:19)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:35)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:32)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:32)
	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:174)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:331)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:82)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:243)
	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:231)
	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131)
	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE

Can you support aws region "cn-north-1"?

When I choose the China AWS region in my config file, I get the following problem:
The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId; Request ID: 8BC96FABC523A70E; S3 Extended Request ID: rWbKZrKu5W1QHlAQkZMQN8H7XQ2M5R61YJu1KYLRsUohN48/jKvPMKkLhcE5cDST9neJiV1ABEQ=)
I'm sure that my IAM permissions are correct.

Request: Add Termination Protection to cfnUpdate

Hi,
I'd like to request an enhancement to add an optional "enableTerminationProtection" parameter to the cfnUpdate step. Right now we have to run an extra step after stack creation to enable termination protection, but since it's a valid parameter of the CreateStack API call it would be nice to provide it as an option for stack creation.

Documentation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html
Parameter: EnableTerminationProtection
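
A hedged sketch of how the requested option might look in a pipeline; enableTerminationProtection is the proposed parameter and does not exist in the plugin today:

// Hypothetical usage of the requested parameter (not currently supported by cfnUpdate):
cfnUpdate(stack: 'my-stack', file: 'template.yaml', enableTerminationProtection: true)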

Cannot upload on bucket root directory

I saw that there was an error like this and it was fixed in version 1.22. But I'm using version 1.26 and I'm still getting the message "Path must not be null or empty" when uploading a file. It seems that if includePathPattern matches only one file it causes the error.

Here is the line I'm using:
s3Upload acl: 'PublicRead', bucket: "$bucket", includePathPattern: "${backupBaseName}*.tar.gz"

s3Upload cannot upload files outside of Jenkins Workspace

We have a use-case where we want to upload files from a volume mounted on the Jenkins slave to S3.

Mount shows the volume successfully mounted:

[pipeline] Running shell script
mount
/dev/vdb on /foo type ext4 (rw,relatime,data=ordered)

and I can ls the directory in a pipeline from the slave:

ls /foo
file1
file2
file3

but when trying to upload the directory using the plugin I get the following:

[Pipeline] s3Upload
Uploading file:/foo to s3://<bucket>/bar
Upload failed due to missing source file

This seems not to be an issue if I make a directory in the Jenkins workspace:

[pipeline] Running shell script
mkdir foo
[Pipeline] s3Upload
Uploading file:/home/jenkins/workspace/<pipeline>/foo/ to s3://<bucket name>/bar
Upload complete

The pipeline:

                stage('Upload to S3') {
                    withAWS(endpointUrl:'<endpoint>', credentials:'<s3 credentials>') {
                        s3Upload acl: 'Private', bucket: '<bucket>', file: "/foo/", path: 'bar/'
                    }
                }

s3Upload() doesn't work if `includePathPattern` matches multiple files

Version 1.26

Assuming we have the following content in the build directory:

Houseparty-arm64-v8a.apk
Houseparty-armeabi-v7a.apk
Houseparty-x86.apk
mapping.txt

And we want to upload it to S3:

This works:

s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*-arm64-v8a.apk', workingDir: 'build')
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*-armeabi-v7a.apk', workingDir: 'build')
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*-x86.apk', workingDir: 'build')
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*.txt', workingDir: 'build')

This doesn't work and only uploads mapping.txt:

s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*.apk', workingDir: 'build')
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*.txt', workingDir: 'build')

This doesn't work either and doesn't upload anything:

s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*', workingDir: 'build')

SQS Publish Support

Are there any plans to implement a DSL method to publish events to SQS? It would be very useful for a lot of asynchronous tasks. The implementation seems to be quite straightforward. Unfortunately I'm not a Java programmer, otherwise I would be happy to provide a PR for it.
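
A hedged sketch of what such a step could look like; sqsSendMessage and its parameters are purely hypothetical and not part of the plugin today:

withAWS(region: 'eu-west-1') {
  // Hypothetical step name and parameters, for illustration only.
  sqsSendMessage(queueUrl: 'https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue', messageBody: '{"event": "build-finished"}')
}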

cfnExecuteChangeSet timing out when creating a new stack, but stack creates successfully

Hi,
We are running into an issue when creating new CFN stacks using cfnCreateChangeSet and cfnExecuteChangeSet. The change set for the new stack is created without issue, but when we execute that change set we run into problems, though only when the stack is new. The stack creates without issue in CFN, but the Jenkins plugin does not detect that the creation is complete and just hangs until it eventually times out with com.amazonaws.waiters.WaiterTimedOutException: Reached maximum attempts without transitioning to the desired state.

Looking at the logs in Jenkins I see the CREATE_COMPLETE event for the stack, but that's the last log entry before the hang and eventual timeout. After the stack is created we can rerun the build and there are zero issues while updating the existing stack. We have only seen this issue while creating brand new stacks.

Unfortunately we can't use cfnUpdate to create the initial stack because we are using AWS::Serverless resources that require the use of change sets to deploy.

ExecuteChangeSet logic fails when stack is created instead of updated

The logic at the line below fails when the stack is created instead of updated:

new EventPrinter(this.client, this.listener).waitAndPrintStackEvents(this.stack, this.client.waiters().stackUpdateComplete(), pollIntervallMillis);

There should be an additional branch for when the stack does not exist. Currently the Jenkins job keeps waiting and can't continue when a new stack is created instead of being updated.
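
A hedged sketch of the suggested branch, assuming the surrounding code can tell whether the change set targets an existing stack (the stackExists flag below is hypothetical); the AWS SDK waiters expose both stackCreateComplete() and stackUpdateComplete():

// stackExists is a hypothetical flag; the real plugin code may determine this differently.
def waiter = stackExists
        ? this.client.waiters().stackUpdateComplete()
        : this.client.waiters().stackCreateComplete()
new EventPrinter(this.client, this.listener).waitAndPrintStackEvents(this.stack, waiter, pollIntervallMillis)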

step to deploy lambda

Are there any plans to add a step to deploy Lambdas, similar to the invokeLambda step? My use case is very simple: deploy a Lambda and create any event sources. I came across https://github.com/XT-i/aws-lambda-jenkins-plugin and tried to use it with withAWS(role: roleName) { deployLambda() }, which sets the credentials as environment variables. But the aws-lambda-jenkins-plugin step cannot see variables that are set after the Jenkins slave is started (container agent), so I cannot use that plugin with AWS role-based authentication. I am thinking of writing my own step, but want to check whether it would make sense to add the step here in this plugin.

Question: How to use withAWS() in parallel

Hi,
I've noticed that when I execute statements similar to

parallel "us-east-1": {
  withAWS(region:"us-east-1") {
    //Do something in us-east-1
  }
}, "us-east-2: {
    withAWS(region:"us-east-2") {
    //Do something in us-east-2
  }
}

this doesn't seem to work. I'm guessing the command is setting an ENV var or something similar so the statements are interfering with each other when executed in parallel. Is that the case, and if so is there any way for me to structure this so that they can execute in parallel without impacting each other?

Can't upload a file on the bucket root directory

Every time I try to upload a single file (using the file or includePathPattern attribute) it tells me that I must define the path attribute and that it can't be empty or start with /. I'm using version 1.26.

Here is what I'm trying to do:
s3Upload acl: "PublicRead", bucket: bucket, file: "${backupFile}.tar.gz"

S3 upload fails to upload if only one file is matched by include.

Easy to reproduce with a local Minio S3 server:

withAWS(endpointUrl:'http://127.0.0.1:9000',credentials:'s3-repo-credentials'){   
   s3Upload bucket:'mybucket', includePathPattern: '*', path: 'anyfolder/', workingDir: 'anyfolder/'
}

If anyfolder/ contains only one file, the upload will fail saying there is a forbidden character.
Add any additional matching file in the folder, and the upload works fine.
There must be a difference between the single-file and multiple-file code paths somewhere down the line (I haven't found it yet, otherwise this would be a PR and not an issue ;) ).

I'll attach more details as soon as I have some time to monitor the issue at the level of the Minio server (which incoming request gets there).

Hiding output of ecrLogin() when running sh

When doing the following

withAWS() {
    def login = ecrLogin()
    sh login
}

Is there a way of hiding the docker login string? I've tried the usual

def docker_login = sh(returnStdout: false, script: login)

But it still prints it in the console log
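
One common workaround (a sketch, not a plugin feature) is to run the login string through a shell step that does not echo commands, for example by supplying a custom shebang and set +x:

withAWS() {
  def login = ecrLogin()
  // The custom shebang plus 'set +x' keeps the shell from echoing the docker login command,
  // so the credentials are not printed to the console log.
  sh('#!/bin/bash\nset +x\n' + login)
}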

Allow to empty bucket

I tried to use s3Delete(bucket: 'my-bucket', path: '') to empty a bucket and remove all the files. But then it throws an error

The bucket you tried to delete is not empty

I don't want to delete the bucket, I want to delete the files in the bucket. I tried setting path to . but still the same thing. With / it didn't delete anything.

Any ideas?

Cannot upload to root of S3 bucket

I am using an S3 bucket to host a small website (fronted by CloudFront). I have an Angular CLI project that needs to be uploaded to the S3 bucket's root, but I cannot seem to get it working.

s3Upload(bucket: 'com.zezke.portal', includePathPattern:'**/*', workingDir:'dist')

results in java.lang.IllegalArgumentException: Path must not be null or empty when uploading file. Adding a path parameter and setting it to '.' or '/' does not work either.

s3Upload(bucket: 'com.zezke.portal', path: '.', includePathPattern:'**/*', workingDir:'dist')

This creates a folder in the S3 bucket, not quite the goal I want either.

Any idea how to upload a bunch of files to the root of an S3 bucket?

AWS s3Upload cannot upload files with a directory name as the key

We're using the s3Upload functionality to upload html files to an S3 bucket.
That bucket is used by CloudFront to distribute that content.

So we upload public/index.html to the S3 key "public/index.html".
Additionally we also upload the public/index.html document to the S3 key "public/".
For customers browsing to http://somedns/public/ this allows them to see the page as well,
without having to go to http://somedns/public/index.html.

To do this we use the following call:

s3Upload bucket: 'public-assets',path:'public/',file:'index.html',contentType:"text/html"

But with the latest version of the plugin this has stopped working.
We expect this to happen:

Uploading file:/var/lib/jenkins/workspace/job/public/index.html to s3://public-assets/public/ 
Finished: Uploading to public-assets/public/

While this happens:

Uploading file:/var/lib/jenkins/workspace/job/public/index.html to s3://public-assets/public/ 
Finished: Uploading to public-assets/public/index.html

It worked with version 1.18.
With version 1.25 it is broken.

Unable to upload files from jenkins worker

Hi there!

We used this plugin to upload files from the Jenkins master to S3. Recently we tried the same Jenkinsfile on a worker node, but the file list seems to always be empty.

From reading the code it seems to me that the listing of files happens on the master (where there is nothing), but the upload actually tries to upload from a worker.

It looks to me like this is the relevant code:

final List<FilePath> children = new ArrayList<>();
final FilePath dir;
if (workingDir != null && !"".equals(workingDir.trim())) {
    dir = this.getContext().get(FilePath.class).child(workingDir);
} else {
    dir = this.getContext().get(FilePath.class);
}
if (file != null) {
    children.add(dir.child(file));
} else if (excludePathPattern != null && !excludePathPattern.trim().isEmpty()) {
    children.addAll(Arrays.asList(dir.list(includePathPattern, excludePathPattern, true)));
} else {
    children.addAll(Arrays.asList(dir.list(includePathPattern, null, true)));
}

If anyone knows how to fix this I would try to code it up :)

Respect timeoutInMinutes on cfnUpdate

The pipeline only waits for 10 minutes when a cfnUpdate is run, regardless of a higher value being set in the timeoutInMinutes parameter. Like this:

    cfnUpdate stack: stackName,
            file: "${CLOUDFORMATION_DIR_PATH}/application.json",
            params: cfnParameters,
            timeoutInMinutes: 30,
            pollInterval: 20000

... and then you get

Setting up a polling strategy to poll every 30 seconds for a maximum of 10 minutes

It turns out that 10 minutes is hardcoded. I fixed that in the attached patch. This causes us a bit of flakiness (our instances take 7 minutes to start).
I tried to open a PR, but when I try to push the branch I get a 403.

respect-timeout.patch.tar.gz

I was able to drop some really nasty bash scripts by using this plugin, and for that I'm very grateful. Keep up the good work!

lambda update-function-code support

I wonder if you have plans to support lambda update-function-code? We do a lot of Lambda development and it would be fantastic if the plugin supported lambda update-function-code.

I also wonder if you know an alternative way to run lambda update-function-code from the pipeline.
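
As a stopgap, a hedged sketch of calling the AWS CLI from a shell step inside withAWS; this assumes the agent has the AWS CLI installed and that withAWS exposes the temporary credentials to the shell environment:

withAWS(role: 'myRole', region: 'eu-west-1') {
  // Function name and zip path are placeholders.
  sh 'aws lambda update-function-code --function-name my-function --zip-file fileb://build/function.zip'
}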

Feature Request: Return full description of ChangeSet

After creating a change set, a List<Change> is returned.

Would it be possible to return the full result rather than just the changes?
The result provides a much more detailed output of the changeset including

  • Parameters
  • Error messages
  • Status

These are all very useful for our CI pipeline.

private List<Change> validateChangeSet(String changeSet) {

Add s3Copy step

As part of our process we have to copy files between S3 buckets, it would be nice to have a copy step.

I implemented this step, please see #93

NPE in ProxyConfiguration

For version 1.22:
While running 'withAWS' from a pipeline, I faced:

java.lang.NullPointerException
	at de.taimos.pipeline.aws.ProxyConfiguration.useJenkinsProxy(ProxyConfiguration.java:72)
	at de.taimos.pipeline.aws.ProxyConfiguration.configure(ProxyConfiguration.java:53)
	at de.taimos.pipeline.aws.AWSClientFactory.getClientConfiguration(AWSClientFactory.java:83)
	at de.taimos.pipeline.aws.AWSClientFactory.configureBuilder(AWSClientFactory.java:77)
	at de.taimos.pipeline.aws.AWSClientFactory.create(AWSClientFactory.java:64)
	at de.taimos.pipeline.aws.WithAWSStep$Execution.withRole(WithAWSStep.java:289)
	at de.taimos.pipeline.aws.WithAWSStep$Execution.start(WithAWSStep.java:235)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:229)
	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:153)
	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:108)
	at sun.reflect.GeneratedMethodAccessor644.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
	at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42)
	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:157)
	at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:133)
	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159)
	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
	at WorkflowScript.run(WorkflowScript:44)
	at org.jenkinsci.plugins.pipeline.modeldefinition.ModelInterpreter.delegateAndExecute(jar:file:/var/lib/jenkins/plugins/pipeline-model-definition/WEB-INF/lib/pipeline-model-definition.jar!/org/jenkinsci/plugins/pipeline/modeldefinition/ModelInterpreter.groovy:137)

For version 1.20 it works well.

Support the AWS Credentials plugin

The AWS Credentials plugin (https://wiki.jenkins.io/display/JENKINS/CloudBees+AWS+Credentials+Plugin) provides a specific AWS credential type that includes access key ID and secret access key so that we don't have to hack around "username" and "password" in standard password credentials. The intent of that plugin is that all other AWS plugins should use it.

I currently have to duplicate my AWS users; I have a set that work with this plugin (pipeline-aws-plugin) and a set that work with everything else (using the AWS credentials plugin).

I'd like to add the AWS credentials plugin as a dependency of this one and support both the username/password method (for backward compatibility) as well as the AWS credentials (to be more in line with the AWS design).

Would that be okay with you guys?

s3Upload fails to upload file to the s3 bucket with Content-MD5 metadata

What is the expected result?

String md5Base64 = sh(
      returnStdout: true,
      script: 'openssl md5 -binary ./foo.zip | base64'
).trim()

s3Upload(
    file: './foo.zip',
    bucket: 'bar',
    path: 'foo/bar/foo.zip',
    metadatas: [
       "Content-MD5:${md5Base64}"
    ]                
)

The s3Upload function uploads the file to the S3 bucket.

What happens instead?
Getting:

java.io.IOException: Bad file descriptor
at sun.nio.ch.FileChannelImpl.position0(Native Method)
at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:264)
at com.amazonaws.internal.ResettableInputStream.<init>(ResettableInputStream.java:113)
at com.amazonaws.internal.ResettableInputStream.<init>(ResettableInputStream.java:101)
at com.amazonaws.internal.ResettableInputStream.newResettableInputStream(ResettableInputStream.java:300)
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from ip-10-114-129-97.ec2.internal/10.114.129.97:41483
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1693)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:310)
at hudson.remoting.Channel.call(Channel.java:908)
at hudson.FilePath.act(FilePath.java:986)
at hudson.FilePath.act(FilePath.java:975)
at de.taimos.pipeline.aws.S3UploadStep$Execution$1.run(S3UploadStep.java:250)
Caused: com.amazonaws.SdkClientException
at com.amazonaws.internal.ResettableInputStream.newResettableInputStream(ResettableInputStream.java:302)
at com.amazonaws.internal.ResettableInputStream.newResettableInputStream(ResettableInputStream.java:277)
at com.amazonaws.internal.ReleasableInputStream.wrap(ReleasableInputStream.java:129)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1606)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:133)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:125)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:143)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:48)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Thank you in advance.

s3Download doesn't resolve `~` to home directory

So I'm trying to download to the ~/.dbt directory, but the download ends up in <agent_home>/workspace/<job>/~/.dbt.

s3Download(file:'~/.dbt/profiles.yml', bucket:'mybucket', path: 'myconf.yml')

Is there a way to resolve the ~ to user home dir?

I'm running this as the jenkins user in a Docker container, with remoting 3.14, Jenkins 2.89.3 and plugin 1.21.

Am I doing something wrong? Or is there some way to resolve it through Groovy?
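
A hedged workaround sketch: expand the home directory explicitly via the HOME environment variable instead of relying on ~ (this assumes HOME is set on the agent and that an absolute file path is acceptable to s3Download):

s3Download(file: "${env.HOME}/.dbt/profiles.yml", bucket: 'mybucket', path: 'myconf.yml')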

cfnCreateChangeSet and other Documentation

Issue: not enough documentation for arguments to pass into cfnCreateChangeSet

I'm trying to create a wrapper around cfnCreateChangeSet, but have had to spend quite a bit of time experimenting with "what-if" scenarios to figure out whether I need to validate input parameters myself or whether the step will do all of the work for me.

The biggest problem I've found is the absence of documentation on the different template inputs.

AWS API Documentation

The AWS API Documentation for creating a change set has the following:

(The main focus is the conditionals under TemplateBody and TemplateURL)

TemplateBody
A structure that contains the body of the revised template, with a minimum length of 1 byte and a maximum length of 51,200 bytes. AWS CloudFormation generates the change set by comparing this template with the template of the stack that you specified.
Conditional: You must specify only TemplateBody or TemplateURL.
Type: String
Length Constraints: Minimum length of 1.
Required: No

TemplateURL
The location of the file that contains the revised template. The URL must point to a template (max size: 460,800 bytes) that is located in an S3 bucket. AWS CloudFormation generates the change set by comparing this template with the stack that you specified.
Conditional: You must specify only TemplateBody or TemplateURL.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 1024.
Required: No

UsePreviousTemplate
Whether to reuse the template that is associated with the stack to create the change set.
Type: Boolean
Required: No

Findings

My concern was with specifying more than one template source at a time from among the cfnCreateChangeSet options file, template, and url (a hedged usage sketch follows the findings lists below).

Precedence

From what I've experimented with, the precedence goes:

  1. template
  2. url
  3. file

Using empty strings in higher precedence items:

This is where stuff becomes a little interesting.

From what I think I found:

  1. Setting url to '' and defining a value for file will use the file
  2. Setting template to '' and defining a value for file will use the previous template
  3. Setting template to '' and defining a value for url will use the url
  4. Setting template and url to '' and defining a value for file will use the previous template
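
A hedged usage sketch of the three template sources named above; the stack and changeSet values are placeholders, and it seems safer to omit unused sources entirely rather than pass empty strings:

// Inline template body (highest precedence in the observed behavior).
cfnCreateChangeSet(stack: 'my-stack', changeSet: 'my-change-set', template: readFile('template.yaml'))
// S3 URL to the template.
cfnCreateChangeSet(stack: 'my-stack', changeSet: 'my-change-set', url: 'https://s3.amazonaws.com/my-bucket/template.yaml')
// Workspace-relative file path.
cfnCreateChangeSet(stack: 'my-stack', changeSet: 'my-change-set', file: 'template.yaml')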

Possible Solution:

Can someone confirm the behavior I've observed and add documentation to the repository describing it?

(I wanted to have it in writing somewhere so others don't have to do what I've done)

s3Upload cannot set content type

Hi,
I am attempting to upload a JSON file and set the content type to "application/json" but using the following syntax uploads the file with the default content type "application/octet-stream":

s3Upload(file:"$file_name", bucket:'bucketname', path:"$file_name", acl:'PublicRead', contentType:'application/json')

I also attempted to do it with the following syntax but I got an exception "java.io.IOException: Bad file descriptor":
s3Upload(file:"$file_name", bucket:'bucketname', path:"$file_name", acl:'PublicRead', metadatas:['Content-Type:application/json'])
s3Upload(file:"$file_name", bucket:'bucketname', path:"$file_name", acl:'PublicRead', metadatas:['ContentType:application/json'])

Am I doing something wrong?
Thanks for the help!

MissingPropertyException using groovy variables for credential id

I'm trying to use a variable to store my credentials ID in, and I'm getting groovy.lang.MissingPropertyException exceptions:

// An AWS type credential. This is used to manipulate AWS through terraform and the AWS CLI
def SECRET_AWS_CREDENTIALS_ID = "aws-credentials"

...

withAWS(region:'us-east-1', credentials: SECRET_AWS_CREDENTIALS_ID) {
    s3Upload(
        bucket: "xxxxxxx",
        path: "/",
        includePathPattern: '*'
    )
}
Also:   groovy.lang.MissingPropertyException: No such property: SECRET_AWS_CREDENTIALS_ID for class: WorkflowScript
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:53)
        at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.getProperty(ScriptBytecodeAdapter.java:458)
        at org.kohsuke.groovy.sandbox.impl.Checker$6.call(Checker.java:290)
        at org.kohsuke.groovy.sandbox.GroovyInterceptor.onGetProperty(GroovyInterceptor.java:68)
        at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:326)
        at org.kohsuke.groovy.sandbox.impl.Checker$6.call(Checker.java:288)
        at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:292)
        at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getProperty(SandboxInvoker.java:29)
        at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
        at WorkflowScript.archiveArtifactsS3(WorkflowScript:246)
        at WorkflowScript.run(WorkflowScript:76)
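
A hedged sketch of one common fix, assuming the exception is caused by the def variable being declared at script level and then referenced inside another method (archiveArtifactsS3 in the trace above), where it is out of scope; annotating it with @Field makes it visible to all methods in the script:

import groovy.transform.Field

// Script-level field, visible inside pipeline methods as well.
@Field def SECRET_AWS_CREDENTIALS_ID = "aws-credentials"

def archiveArtifactsS3() {
    withAWS(region: 'us-east-1', credentials: SECRET_AWS_CREDENTIALS_ID) {
        s3Upload(bucket: "xxxxxxx", path: "/", includePathPattern: '*')
    }
}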

Cannot connect to Ceph object storage

We use Ceph as our internal S3 storage. When using this plugin, this error occurs:

Caused: com.amazonaws.SdkClientException: Unable to execute HTTP request: The target server failed to respond
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1116)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1066)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4365)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4312)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4306)
	at com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:840)
	at de.taimos.pipeline.aws.S3FindFilesStep$Execution.run(S3FindFilesStep.java:216)
	at de.taimos.pipeline.aws.S3FindFilesStep$Execution.run(S3FindFilesStep.java:148)
	at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution$1$1.call(SynchronousNonBlockingStepExecution.java:49)
	at hudson.security.ACL.impersonate(ACL.java:260)
	at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution$1.run(SynchronousNonBlockingStepExecution.java:46)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

After some searching, this seems to be the most likely cause. Is it possible to add a signer type argument to withAWS?

pass OperationPreferences parameters to the cfnUpdateStackSet

Hi,
Could you please clarify how I can provide OperationPreferences inside the cfnUpdateStackSet method?
I didn't find it in the documentation.
Also, the documentation contains incorrect information in the cfnCreateStackSet section: this method is absent from the plugin, and the example references the cfnCreateChangeSet method.

Thanks.

how to simply use withAWS with the role attached to the instance

My Jenkins is deployed on an EC2 instance with an IAM role attached to it, so I just want to run commands without specifying a role.

If I just do

withAWS(region:'eu-west-1') {
    // do something
}

without specifying any authentication or role, will that work? It seems weird that I'd have to specify role:'the_instance_role .....is that required? Can instance roles even be assumed in that fashion when the role is already attached to the instance?

Thanks!

s3Copy doesn't exist?

Hi folks, I'm trying to run a multipart pipeline deploy that uploads to one bucket at one point in the process (which works fine) and then copies from that bucket to another one later in the process.

Using v1.26 of this plugin, I'm getting the following error (snipped out steps that don't start with 's3'):

java.lang.NoSuchMethodError: No such DSL method 's3Copy' found among steps [
...
s3Delete, s3Download, s3FindFiles, s3PresignURL, s3Upload
...
]

Is this something wrong with the package or my installation? How can I tell?

I tried to look at differences between s3Copy & s3Upload but didn't see anything significant (at least to my untrained eye with this codebase).

While I can work around this by re-uploading or any number of other things, it is annoying to not be able to copy. Any help would be appreciated.

cfnUpdate throws WaiterUnrecoverableException

Hello,

When trying to run cfnUpdate I get a WaiterUnrecoverableException, but when creating the stack via the AWS console it is created without problems.

Details:
Plugin version: Pipeline: AWS Steps 1.27

I'm trying to execute:
cfnUpdate(stack:"${stack}", url:"${urlTemplate}", params: ['roleName':"${roleName}", 'bucket':"${bucket}", 'pathS3':"${pathS3}", 'handler':"${handler}"], timeoutInMinutes: 10)

Where
${stack} is the name of the stack
${urlTemplate} is the link to the template stored in S3

and it throws the following in the Jenkins log:

com.amazonaws.waiters.WaiterUnrecoverableException: Resource never entered the desired state as it failed.
at com.amazonaws.waiters.WaiterExecution.pollResource(WaiterExecution.java:78)
at com.amazonaws.waiters.WaiterImpl.run(WaiterImpl.java:88)
at com.amazonaws.waiters.WaiterImpl$1.call(WaiterImpl.java:110)
at com.amazonaws.waiters.WaiterImpl$1.call(WaiterImpl.java:106)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused: java.util.concurrent.ExecutionException
at org.apache.http.concurrent.BasicFuture.getResult(BasicFuture.java:71)
at org.apache.http.concurrent.BasicFuture.get(BasicFuture.java:84)
at de.taimos.pipeline.aws.cloudformation.EventPrinter.waitAndPrintEvents(EventPrinter.java:135)
at de.taimos.pipeline.aws.cloudformation.EventPrinter.waitAndPrintStackEvents(EventPrinter.java:92)
at de.taimos.pipeline.aws.cloudformation.CloudFormationStack.create(CloudFormationStack.java:119)
at de.taimos.pipeline.aws.cloudformation.CFNUpdateStep$Execution.whenStackMissing(CFNUpdateStep.java:125)
at de.taimos.pipeline.aws.cloudformation.AbstractCFNCreateStep$Execution$1.run(AbstractCFNCreateStep.java:137)

As a reference, my template is similar to the one in the screenshot attached to the original issue (image not reproduced here).

Could someone help me with this or recommend some adjustment?

regards

Add option for Client Side Encryption with KMSId

Hi

I am trying to upload encrypted files using client-side encryption.

Using the AWS CLI this would be achieved by:
sh '/usr/local/bin/aws kms encrypt --key-id 0000-0000-0000-0000-0000000 --plaintext "fileb://file.key" --output text --query CiphertextBlob | base64 --decode > file.key.encrypted'

However, doing the above requires shelling out to the local file system, which is not ideal.

It would be awesome if this plugin could provide this functionality (I am aware that it provides server side encryption but that is not what I want).
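
A hedged sketch of what the requested option could look like on s3Upload; cseKmsId is purely hypothetical and named here only to illustrate the client-side encryption request:

// Hypothetical parameter for client-side encryption with a KMS key (not currently supported):
s3Upload(file: 'file.key', bucket: 'my-bucket', path: 'secrets/file.key.encrypted', cseKmsId: '0000-0000-0000-0000-0000000')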

Uploading 10 byte file took four minutes

I'm getting started with this plugin and it seems like there's something odd happening with performance. Here's my entire test pipeline:

pipeline {
    agent any

    options {
        timestamps()
        disableConcurrentBuilds()
        timeout(time: 1, unit: 'HOURS')
    }
    
    stages {
        stage('doit') {
            steps {
                bat returnStatus: true, script: 'echo "Hello" > test.txt'
                s3Upload acl: 'BucketOwnerFullControl', bucket: 'my-cool-bucket', file: 'test.txt', path: 'test.txt'
            }
        }
    }
}

When I ran this it took nearly four minutes to upload the test file:

[Pipeline] bat
00:00:05.357 [UploadToS3Test] Running batch script
00:00:09.359 
00:00:09.359 c:\jenkins\workspace\UploadToS3Test>echo "Hello"  1>test.txt 
[Pipeline] s3Upload
00:00:12.101 Uploading file:/c:/jenkins/workspace/UploadToS3Test/test.txt to s3://my-cool-bucket/test.txt 
00:04:06.186 Finished: Uploading to my-cool-bucket/test.txt
00:04:07.845 Upload complete

Is that really expected? Is there something else that could be causing the job to be so slow? There's no other logging information during that nearly four minute gap so I don't know where to begin looking. I am using Jenkins 2.113 and Pipeline: AWS Steps plugin 1.24, which are the latest as of today.

SSE-Required Bucket Policy Block S3Upload

We've got a bucket with a policy that enforces SSE on uploaded objects.

When we remove that policy, we can upload objects using S3Upload properly.

When we add the policy, we get a permissions error despite the fact that we're passing in the sseAlgorithm parameter to S3Upload.

We're using the following value: sseAlgorithm:'AES256'
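
For reference, a minimal sketch of the call as described; bucket, file and path are placeholders:

s3Upload(bucket: 'my-sse-bucket', file: 'artifact.zip', path: 'artifact.zip', sseAlgorithm: 'AES256')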

When we use the shell step to call the AWS command line interface, passing the --sse AES256 parameter on the command line, the upload succeeds.

This feels like a problem in the S3Upload code, but I can't say that for certain. Can you confirm that S3Upload works properly when you provide an sseAlgorithm parameter with value AES256 when uploading to a bucket that has a bucket policy requiring AES256 encryption of all objects?
