bbrun's Introduction

Bitbucket Pipelines Runner

bbrun is a command line tool to execute Bitbucket Pipelines locally.

Install

Install bbrun with npm:

$ npm install -g bbrun

Usage

bbrun can execute any step defined in your bitbucket-pipelines.yml template:

pipelines:
  default:
    - step:
          name: hello
          image: ubuntu
          script:
            - echo "hello world!"

Run bbrun straight from your project path:

$ bbrun hello
running "hello" in "ubuntu" image...
hello world!

Check the examples and their tests to learn about different use cases.

Options

  Usage
    $ bbrun <step> <options>

  Options
      --template (-t), pipeline template, defaults to "bitbucket-pipelines.yml"
      --pipeline, pipeline to execute (e.g. "default" or "branches:master")
      --env (-e), defines environment variables for the execution
      --dry-run (-d), performs a dry run, printing the docker command
      --interactive (-i), starts an interactive bash session in the container
      --ignore-folder (-f), adds the folder as an empty volume (useful for forcing the pipeline to install packages, etc.)
      --help, prints this very guide

  Examples:
    Execute all steps in the default pipeline from bitbucket-pipelines.yml
      $ bbrun
      $ bbrun --template bitbucket-template.yml
      $ bbrun --pipeline default
    Execute a single step by its name
      $ bbrun test
      $ bbrun "Integration Tests"
    Execute steps from different pipelines
      $ bbrun test --pipeline branches:master
    Define an environment variable
      $ bbrun test --env EDITOR=vim
      $ bbrun test --env "EDITOR=vim, USER=root"
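
  For the remaining flags, the invocations below are illustrative sketches based on the
  descriptions above; in particular, whether --ignore-folder takes the folder name as an
  argument (as shown) is an assumption:

    Print the docker command instead of executing it
      $ bbrun test --dry-run
    Open an interactive bash session in the step's container
      $ bbrun test --interactive
    Mount a folder as an empty volume so the step reinstalls its packages
      $ bbrun test --ignore-folder node_modules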

Caveats

  • Not all Bitbucket features are covered; check the open issues for an overview of the roadmap.
  • Private images are supported, but the user has to log in to the Docker registry before executing bbrun (credentials in the pipeline file are therefore ignored).

Build and Test

npm install && npm test

To execute the tests under examples (which are not run by CI yet):

npm run test-examples

Install locally

$ npm install && npm link

bbrun's People

Contributors

dependabot[bot], lovato, lucianosantana, mserranom, peterdremstrup, skalt


bbrun's Issues

Add Examples

It would be helpful to have examples available in the documentation.

Unable to handle parallel steps

Bitbucket can run steps in parallel (docs). Currently bbrun expects each item in a pipeline to contain a script member, which causes parallel: [...steps] to fail.
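
For reference, a sketch of the Bitbucket parallel syntax that currently trips bbrun (the step names and commands are placeholders, not taken from any real pipeline):

pipelines:
  default:
    - parallel:
        - step:
            name: hello-one
            script:
              - echo "hello one"
        - step:
            name: hello-two
            script:
              - echo "hello two"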

bbrun should not start steps which are used for deployment

I ran bbrun without parameters and the first default step was successful.
But then another default step, which has deployment: staging and trigger: manual set, was also run.
This should be prevented!

Attached you will find a test bitbucket-pipelines.yml:

pipelines:
  default:
    - step:
        name: Build & Test
        script:
          - /bin/echo "Build & Test"
    - step:
        name: Deploy to STAGING
        deployment: staging
        trigger: manual
        script:
          - /bin/echo "Deploy to STAGING - this should not be run!"
    - step:
        name: Deploy to LIVE
        deployment: production
        trigger: manual
        script:
          - /bin/echo "Deploy to LIVE - this should not be run!"
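
Until such steps are skipped, a workaround consistent with the README above is to run only the step you want by its name instead of the whole default pipeline:

$ bbrun "Build & Test"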

Using atlassian/pipelines-awscli generates an error

That particular container, which I use on Bitbucket, works perfectly fine there.
But when running locally, I got an error.

Why? Because there is a mandatory entrypoint which runs and fails, perhaps because it is not running inside Pipelines.

Solution: remove the entrypoint in bbrun. I am not sure whether this has side effects, but by doing that, my problem was gone.

docker.js:36
run --rm -P --entrypoint="" -v ${pwd()}:${workDir} -w ${workDir} ${image} bash ${BUILD_SCRIPT};

Add an example of how to run the build script of a different branch

At the moment I am on a feature branch, say "feature-1", but I want to test the scripts for a different branch, say "dev".

I tried
bbrun --pipline branches:dev
and
bbrun --pipline dev

but only the default scripts will be executed.

The bitbucket-pipelines.yml looks like:

image: node:6.10.3
clone: 
  depth: 5
pipelines: 
     
  branches:
    dev:
     -step:
        scripts:
          - echo dev
    master:
     -step:
        scripts:
          - echo master
    default:
     -step:
        scripts:
          - echo default

Can you explain how I can tell bbrun to execute the dev scripts?
Thank you for your help!
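
For reference, a hedged sketch of how the dev pipeline would be selected once the template uses the documented keys (script: rather than scripts:, and a space after the dash in - step:) together with the --pipeline flag as spelled in the README examples:

pipelines:
  branches:
    dev:
      - step:
          script:
            - echo dev

$ bbrun --pipeline branches:dev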

add --keepContainerRunning option

Currently docker commands are run with the --rm option, which removes the container after execution.

A --keep-container option would not remove the containers, which is useful for debugging purposes.
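
For context, a hedged illustration of the difference at the Docker level (image and container names are arbitrary):

docker run --rm ubuntu echo done                 # container is deleted on exit (current behaviour)
docker run --name bbrun-debug ubuntu echo done   # container is kept for inspection
docker logs bbrun-debug                          # review its output afterwards
docker rm bbrun-debug                            # remove it manually when finished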

Debug docker

Hello, this is not related to your add-on, more of a Docker thing, but maybe you know the answer: is it possible to debug a pipeline script like in Vagrant? In Vagrant I can run "vagrant ssh" and get into the machine. Currently it is very frustrating to wait 20 minutes for the execution and then have everything break because of an error in the build script. Thanks to your module it is already much better than testing for real, but a command for getting into the machine would be awesome.
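
The --interactive flag listed in the Options section appears to cover this; a minimal sketch (the step name is illustrative):

$ bbrun test --interactive    # starts an interactive bash session in the step's container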

New release for GH and NPM

Hi @mserranom. I think I now have access to everything.
On npm, the organization looks empty to me.

There is the release.sh script, which I assume is meant to be run locally to get something released on GitHub.
You also have a Travis config file.

Could we have GitHub Actions, from master, run all the steps AND release the package to npmjs? One thing I ended up doing in a few Node projects was to set the package.json version ONLY when releasing (e.g. it was 1.0.0 and at release time became 1.0.20200120) and never push this back to the repo; instead, just tag the repo with v1.0.20200120. Everything becomes automatic then, and releases can be fully automatic from GitHub itself. Travis would not be needed anymore (or keep Travis and forget GitHub Actions).

What do you think? I would like to have the latest master released to npmjs.

Best
Marco

Engines dependency too strict

Problem: the engines dependency in package.json is too strict, requiring Node 8.5.0 exactly.

Recommended solution: be more forgiving, e.g. "node": ">= 8.5.0".

Specify custom WORKDIR ?

First off, thank you for your work creating this tool; it has made debugging Bitbucket Pipelines locally so much easier.

I noticed that the working directory always seems to be /ws, even if a different one is specified in the Dockerfile's WORKDIR. Is there a command line flag or other option that can be set to specify this?

I appreciate any help you can give me.

Unguarded TypeError: `Cannot read property 'image' of null`

I managed to create a null step in an otherwise valid bitbucket-pipelines.yml and got the uninformative error above. I'm proposing (1) guarding each of the config[attr] || inScopeAttr expressions in bbrun.js and (2) providing a traceback to the null/undefined part of the pipeline.

Argument --env doesn't work with advanced assignments

Many pipelines need a custom auth.json that is passed in as an environment variable to keep it secure, e.g.:

        name: Initial Setup
        caches:
          - composer
        script:
          - echo "$AUTH" > auth.json
          - php -d memory_limit=-1 $(which composer) install --prefer-dist
        artifacts:
          - vendor/**

The parseVars function is very simple and needs to handle these more complex scenarios.
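
For illustration, a hedged example of the kind of assignment that breaks a simple comma/equals split, because the value itself contains commas and quotes (the credentials are made up):

$ bbrun "Initial Setup" --env 'AUTH={"http-basic":{"repo.example.com":{"username":"user","password":"secret"}}}'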

"--network" option to specify a Docker network to connect the containers to

Often a Bitbucket pipeline has some sort of deployment step that'll execute some SSH commands on a remote server, rsync some files etc.

At the moment, debugging this scenario is quite difficult if you have a local Docker container acting as the "remote server": the containers running each step can't reach it because they run on different Docker networks.

One way of tackling this is by giving the ability to specify the Docker network to run the containers in, an example would be:

bbrun --network example-network --env "SSH_HOST=example, SSH_USER=bitbucket"

Notice the --network example-network: it would allow each step to reach your container running outside of bbrun via the "remote server" container's name (assuming that container is named "example" and is connected to "example-network").
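
For reference, a hedged sketch of the underlying docker invocation such a flag would imply, based on the command quoted in the pipelines-awscli issue above (the image, mount path and script name are illustrative):

docker run --rm -P --network example-network --entrypoint="" -v "$PWD":/ws -w /ws atlassian/default-image bash /ws/.bbrun.sh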

Docker containers aren't removed after run

Current Behavior:
Each time a pipeline is run, one Docker container per step is created. The steps may install packages that use up disk space. When the pipeline completes, the containers aren't removed.

Expected Behavior:
After the pipeline run completes, the containers should be removed from Docker, freeing up any space they used.
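
Until that happens, leftover step containers can be removed manually with standard Docker commands, for example:

docker ps -a              # list the stopped containers left behind by previous runs
docker container prune    # remove all stopped containers (asks for confirmation)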

Homebrew binary

  • Create standalone binary
  • Create brew formula
  • Upload in every deployment

Use private images hosted in AWS

Hi, I tried to use private images hosted in AWS ECR (EC2 Container Registry), as described in the documentation, and I get an error.
The context: I have a custom pipelines image hosted in a private AWS repository, and I want to use it to build code in a controlled environment.

executing step in "[object Object]" docker: invalid reference format. See 'docker run --help'.
If I use a public image, everything works ok.
Thanks
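
For context, the Bitbucket documentation describes private ECR images as an object rather than a plain string, which would explain the "[object Object]" in the error above; a hedged sketch of that form (account, region and variable names are placeholders):

image:
  name: <account-id>.dkr.ecr.<region>.amazonaws.com/my-image:latest
  aws:
    access-key: $AWS_ACCESS_KEY
    secret-key: $AWS_SECRET_KEY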

steps.forEach is not a function

bbrun returns the error steps.forEach is not a function when running the following

command:

bbrun --template test-pipelines.yml

test-pipelines.yml

image: node:10.14.1

definitions:
  caches:
    node: node_modules
  steps:
    - step: &Test
        name: Run Tests
        caches:
          - node
        script:
          - npm run test:pipeline-ob
          - npm run test:pipeline-webauth
pipelines:
  default:
    "**":
      - step: &Test

Fails with alpine node:12 images

Using node:12-alpine as my base image, after running bbrun --pipeline branches:development I get the following error:

  throw err;
  ^

Error: Cannot find module '/ws/bash'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:982:15)
    at Function.Module._load (internal/modules/cjs/loader.js:864:27)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:74:12)
    at internal/main/run_main_module.js:18:47 {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}

deploy on tag

Currently, deployment to npm is performed after a push to master, via Travis.

After this change:

  • Deployment should be performed after a tag is created (via github release)
  • Package version is taken from tag name

Fails with Alpine Linux images (i.e. Docker in Docker)

I used the hello-world bitbucket-pipelines.yml and changed the Docker image to docker:stable.
See more here.

So the file ends up like this:

pipelines:
  default:
    - step:
          image: docker:stable
          script:
            - echo "hello world!"

After running bbrun I get the following error:

executing step in "docker:stable"
/usr/local/bin/docker-entrypoint.sh: exec: line 35: bash: not found

If you check line 35 of the docker-entrypoint.sh file in the docker image repo, you'll see it's just an "exec" command.

Checking the docker.js file, I see it uses bash as the shell, and Alpine Linux doesn't use bash; it uses sh instead.

I've tested replacing the bash references with sh in docker.js, and it works for both Ubuntu and Alpine images. Ideally we should pick the shell dynamically, maybe using the $SHELL or $0 environment variables. I tried that quickly but couldn't make it work, so I'll submit a pull request with what I have shortly.
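
For reference, a hedged sketch of the substitution described above, based on the docker run command quoted in the pipelines-awscli issue (the actual docker.js code may differ):

docker run --rm -P --entrypoint="" -v "$PWD":/ws -w /ws docker:stable bash /ws/.bbrun.sh   # current: fails, Alpine has no bash
docker run --rm -P --entrypoint="" -v "$PWD":/ws -w /ws docker:stable sh /ws/.bbrun.sh     # proposed: sh exists in both Ubuntu and Alpine images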

Using BBRUN inside a VM behind Proxy

How can I pass --net=host to "docker run"?
Additionally, I think that if bbrun passed this all the time, there would be no harm at all, even with a direct internet connection.
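
One hedged workaround with the existing flags: use --dry-run to print the docker command, then re-run it by hand with --net=host added (the printed command is abbreviated here):

$ bbrun test --dry-run
docker run --rm -P --entrypoint="" -v ...                # printed, not executed
$ docker run --net=host --rm -P --entrypoint="" -v ...   # re-run manually with the extra flag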

SSH Keys

It would be beneficial to add a flag for ssh keys, because a pipeline could be installing a private package.

Execution fails, but it works interactively

If I run it interactively, it works.

My script, called deploy.sh:
bbrun $1 $2 $3 --pipeline branches:$BRANCH --env "AWS_SECRET_ACCESS_KEY=xxx, AWS_ACCESS_KEY_ID=yyy"

I made a script to basically copy everything to a /tmp area and only then run bbrun in it. Otherwise it ends up changing my project files, since I do npm install and I change a few file contents depending on the deployment environment (dev/prod strings, things like that). But when running on Bitbucket, everything happens in a Docker container and gets thrown away.

Anyway, the problem is that it fails right away. I also commented out my aws commands in the yml file and it keeps saying "my aws command" is faulty. Interactively, it works like a charm.

Docker run uses sh to run the .bbrun.sh script, causing source commands to fail

Current Behaviour:
When the --interactive flag is passed, the container is run with /bin/bash. However, during a normal pipeline run, sh is used, even though the first line of the .bbrun.sh file points to bash via #!/usr/bin/env bash.
This causes bash-specific features to fail, such as the source command used to run a script that doesn't have executable permission (for example source venv/bin/activate, which is common in Python projects).

Expected Behaviour:
Since we already use /bin/bash for the interactive shell, we could use /bin/bash for non-interactive script execution as well.
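
A self-contained illustration of the difference (on images where sh is dash or ash, as in Debian or Alpine; where sh is actually bash, both forms succeed):

printf 'export FOO=bar\n' > env.sh
sh   -c 'source ./env.sh && echo "$FOO"'   # fails on dash/ash: "source: not found"
bash -c 'source ./env.sh && echo "$FOO"'   # works: source is a bash builtin
sh   -c '. ./env.sh && echo "$FOO"'        # the POSIX "." form works everywhere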
