
Source-To-Image (S2I)

Overview

Source-to-Image (S2I) is a toolkit and workflow for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image and letting the container prepare that source code for execution. By creating self-assembling builder images, you can version and control your build environments exactly like you use container images to version your runtime environments.

For a deep dive on S2I you can view this presentation.

Want to try it right now? Download the latest release and run:

$ s2i build https://github.com/sclorg/django-ex centos/python-35-centos7 hello-python
$ docker run -p 8080:8080 hello-python

Now browse to http://localhost:8080 to see the running application.

You've just built and run a new container image from source code in a git repository, with no Dockerfile necessary.

How Source-to-Image works

For a dynamic language like Ruby, the build-time and run-time environments are typically the same. Starting with a builder image that describes this environment - with Ruby, Bundler, Rake, Apache, GCC, and other packages needed to set up and run a Ruby application installed - source-to-image performs the following steps:

  1. Start a container from the builder image with the application source injected into a known directory
  2. Have the container transform that source code into the appropriate runnable setup - in this case, by installing dependencies with Bundler and moving the source code into a directory where Apache has been preconfigured to look for the Ruby config.ru file (a minimal sketch of such an assemble script follows this list)
  3. Commit the new container and set the image entrypoint to be a script (provided by the builder image) that will start Apache to host the Ruby application
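
As an illustration only, a minimal assemble script for such a Ruby builder might look like the sketch below; the /tmp/src source location and the Bundler flags are assumptions for this sketch, not part of any official image:

#!/bin/bash
# Hypothetical assemble script for a Ruby builder image (sketch).
set -e

# s2i injects the application source into a known directory;
# /tmp/src is an assumed location for this sketch.
cp -Rf /tmp/src/. ./

# Install dependencies with Bundler.
bundle install --deployment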

For compiled languages like C, C++, Go, or Java, the dependencies necessary for compilation might dramatically outweigh the size of the actual runtime artifacts. To keep runtime images slim, S2I enables a multiple-step build process, where a binary artifact such as an executable or Java WAR file is created in the first builder image, extracted, and injected into a second runtime image that simply places the executable in the correct location for execution.

For example, to create a reproducible build pipeline for Tomcat (the popular Java webserver) and Maven:

  1. Create a builder image containing OpenJDK and Tomcat that expects to have a WAR file injected
  2. Create a second image that layers on top of the first image Maven and any other standard dependencies, and expects to have a Maven project injected
  3. Invoke source-to-image using the Java application source and the Maven image to create the desired application WAR
  4. Invoke source-to-image a second time using the WAR file from the previous step and the initial Tomcat image to create the runtime image (both invocations are sketched below)
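
Hypothetically, with made-up repository and image names, the two invocations might look like this; how the WAR is extracted between the steps depends on your builder image and is elided here:

$ s2i build https://github.com/example/myapp maven-builder myapp-build
$ # ...extract myapp.war from myapp-build into ./artifact (builder-specific)...
$ s2i build ./artifact tomcat-runtime myapp-runtime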

By placing our build logic inside of images, and by combining the images into multiple steps, we can keep our runtime environment close to our build environment (same JDK, same Tomcat JARs) without requiring build tools to be deployed to production.

Goals

Reproducibility

Allow build environments to be tightly versioned by encapsulating them within a container image and defining a simple interface (injected source code) for callers. Reproducible builds are a key requirement to enabling security updates and continuous integration in containerized infrastructure, and builder images help ensure repeatability as well as the ability to swap runtimes.

Flexibility

Any existing build system that can run on Linux can be run inside of a container, and each individual builder can also be part of a larger pipeline. In addition, the scripts that process the application source code can be injected into the builder image, allowing authors to adapt existing images to enable source handling.

Speed

Instead of building multiple layers in a single Dockerfile, S2I encourages authors to represent an application in a single image layer. This saves time during creation and deployment, and allows for better control over the output of the final image.

Security

Dockerfiles are run without many of the normal operational controls of containers, usually running as root and having access to the container network. S2I can be used to control what permissions and privileges are available to the builder image since the build is launched in a single container. In concert with platforms like OpenShift, source-to-image can enable admins to tightly control what privileges developers have at build time.

Anatomy of a builder image

Creating builder images is easy. s2i expects you to supply the following scripts for use with an image:

  1. assemble - builds and/or deploys the source
  2. run - runs the assembled artifacts
  3. save-artifacts (optional) - captures the artifacts from a previous build into the next incremental build
  4. usage (optional) - displays builder image usage information

Additionally, for the best user experience and optimized s2i operation, we suggest that images have the /bin/sh and tar commands available.
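
For example, a run script for the Ruby/Apache builder described earlier could be as small as the following sketch; the httpd invocation is an assumption about how that image serves the application:

#!/bin/bash
# Hypothetical run script (sketch): start the web server in the foreground
# so the container keeps running; exec ensures signals reach the process.
exec httpd -D FOREGROUND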

See a practical tutorial on how to create a builder image and read a detailed description of the requirements and scripts along with examples of builder images.

Build workflow

The s2i build workflow is:

  1. s2i creates a container based on the build image and passes it a tar file that contains:
    1. The application source in src, excluding any files selected by .s2iignore
    2. The build artifacts in artifacts (if applicable - see incremental builds)
  2. s2i sets the environment variables from .s2i/environment (optional)
  3. s2i starts the container and runs its assemble script
  4. s2i waits for the container to finish
  5. s2i commits the container, setting the CMD for the output image to be the run script and tagging the image with the name provided (see the verification snippet below)
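
You can check the result of that last step on any image s2i produced. For example, with the hello-python image from the quick start above, docker inspect shows the command the output image will run:

$ docker inspect --format '{{.Config.Cmd}}' hello-python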

Filtering the contents of the source tree is possible if the user supplies a .s2iignore file in the root directory of the source repository. The .s2iignore file contains regular expressions that capture the set of files and directories you want filtered from the image s2i produces.

Specifically:

  1. Specify one rule per line, with each line terminating in \n.
  2. Filepaths are appended to the absolute path of the root of the source tree (either the local directory supplied, or the target destination of the clone of the remote source repository s2i creates).
  3. Wildcards and globbing (file name expansion) leverage Go's filepath.Match and filepath.Glob functions.
  4. Search is not recursive. Subdirectory paths must be specified (though wildcards and regular expressions can be used in the subdirectory specifications).
  5. If the first character is the # character, the line is treated as a comment.
  6. If the first character is !, the rule is an exception rule that can undo candidates selected for filtering by prior rules (but only prior rules).

Here are some examples to help illustrate:

When specifying subdirectories, the */temp* rule filters any files starting with temp that are in any subdirectory immediately (or one level) below the root directory. Similarly, the */*/temp* rule filters any files starting with temp that are in any subdirectory two levels below the root directory. The snippet below shows both rules together.
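
A .s2iignore containing exactly these two rules would read:

*/temp*
*/*/temp*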

Next, to illustrate exception rules, first consider the following example snippet of a .s2iignore file:

*.md
!README.md

With this exception rule example, README.md will not be filtered, and remain in the image s2i produces. However, with this snippet:

!README.md
*.md

Here README.md would be filtered, and would not be part of the resulting image s2i produces: even if a prior rule filtered it and !README.md put it back, the later *.md rule filters it again. Since *.md follows !README.md, *.md takes precedence - exception rules only undo rules that precede them.

Users can also set extra environment variables in the application source code. They are passed to the build, and the assemble script consumes them. All environment variables are also present in the output application image. These variables are defined in the .s2i/environment file inside the application sources. The format of this file is simple key=value pairs, for example:

FOO=bar

In this case, the FOO environment variable will be set to bar.
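
Inside the builder, these entries behave like ordinary environment variables. For instance, a hypothetical fragment of an assemble script could read the value directly:

# Hypothetical assemble fragment: values from .s2i/environment are
# available as plain environment variables during the build.
echo "---> building with FOO=${FOO}"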

Using ONBUILD images

In case you want to use one of the official Dockerfile language stack images for your build, you don't have to do anything extra. S2I is capable of recognizing container images with ONBUILD instructions and choosing the OnBuild strategy. This strategy will trigger all ONBUILD instructions and execute the assemble script (if it exists) as the last instruction.

Since ONBUILD images usually don't provide an entrypoint, you will have to provide one in order to use this build strategy. You can either include a 'run', 'start', or 'execute' script in your application source root folder, or you can specify a valid S2I script URL, in which case the 'run' script will be fetched and set as the entrypoint.
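
As a sketch, for a Node.js ONBUILD image (a common case among the official language stack images), a run script placed at the application source root might be no more than the following; npm start is an assumption about the project:

#!/bin/sh
# Hypothetical run script shipped in the application source root; with the
# OnBuild strategy it becomes the entrypoint of the output image.
exec npm start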

Incremental builds

s2i automatically detects:

  • Whether a builder image is compatible with incremental building
  • Whether a previous image exists, with the same name as the output name for this build

If a save-artifacts script exists, a prior image already exists, and the --incremental=true option is used, the workflow is as follows:

  1. s2i creates a new container image from the prior build image
  2. s2i runs save-artifacts in this container - this script is responsible for streaming out a tar of the artifacts to stdout
  3. s2i builds the new output image:
    1. The artifacts from the previous build will be in the artifacts directory of the tar passed to the build
    2. The build image's assemble script is responsible for detecting and using the build artifacts

NOTE: The save-artifacts script is responsible for streaming out dependencies in a tar file.
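
A minimal sketch for a Ruby builder, assuming its dependencies live in .bundle and vendor/bundle, could be:

#!/bin/sh
# Hypothetical save-artifacts script (sketch): write a tar of the dependency
# directories to stdout and nothing else - any other output corrupts the stream.
tar cf - .bundle vendor/bundle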

Dependencies

  1. docker >= 1.6
  2. Go >= 1.7.1
  3. (optional) Git

Installation

Using go install

You can install the s2i binary using go install which will download the source-to-image code to your Go module cache, build the s2i binary, and install it into your $GOBIN, or $GOPATH/bin if $GOBIN is not set, or $HOME/go/bin if the GOPATH environment variable is also not set.

$ go install github.com/openshift/source-to-image/cmd/s2i@latest

For Mac

You can either follow the installation instructions for Linux (and use the darwin-amd64 link) or you can just install source-to-image with Homebrew:

$ brew install source-to-image

For Linux

Go to the releases page and download the correct distribution for your machine. Choose either the linux-386 or the linux-amd64 links for 32 and 64-bit, respectively.

Unpack the downloaded tar with

$ tar -xvzf release.tar.gz

You should now see an executable called s2i. Either add the location of s2i to your PATH environment variable, or move it to a pre-existing directory in your PATH. For example,

# cp /path/to/s2i /usr/local/bin

will work with most setups.

For Windows

Download the latest 64-bit Windows release. Extract the zip file through a file browser. Add the extracted directory to your PATH. You can now use s2i from the command line.

Note: We have had some reports of Windows Defender falsely reporting that the Windows binaries contain "Trojan:Win32/Azden.A!cl". This appears to be a common false alert for other applications as well.

From source

Assuming Go, Git, and Docker are installed and configured, execute the following commands:

$ git clone https://github.com/openshift/source-to-image
$ cd source-to-image
$ export PATH=${PATH}:`pwd`/_output/local/bin/`go env GOOS`/`go env GOHOSTARCH`/
$ ./hack/build-go.sh

Security

Since the s2i command uses the Docker client library, it has to run in the same security context as the docker command. For some systems, it is enough to add yourself to the 'docker' group to be able to work with Docker as a non-root user. In the latest versions of Fedora/RHEL, it is recommended to use the sudo command instead, as it is more auditable and secure.
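
On most Linux distributions, adding yourself to that group looks like the following (you must log out and back in for it to take effect):

$ sudo usermod -aG docker "$USER"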

If you are using the sudo docker command already, then you will have to also use sudo s2i to give S2I permission to work with Docker directly.

Be aware that being a member of the 'docker' group effectively grants root access, as described here.

Getting Started

You can start using s2i right away (see releases) with the following test sources and publicly available images:

$ s2i build https://github.com/openshift/ruby-hello-world registry.redhat.io/ubi8/ruby-27 test-ruby-app
$ docker run --rm -i -p :8080 -t test-ruby-app
$ s2i build --ref=10.x --context-dir=helloworld https://github.com/wildfly/quickstart openshift/wildfly-101-centos7 test-jee-app
$ docker run --rm -i -p 8080:8080 -t test-jee-app

Want to know more? Read the following resources:


source-to-image's Issues

STI is not extendable

Right now the STI source code is very 'sti-centric'. If I want to add my custom build type (heroku build-pack, generic Docker build, etc...) I can't easily plug it into the existing STI codebase. We should have better interfaces and factories for this:

type Builder interface {
  Build()
}

type Downloader interface {
  Download()
}

What we have right now is something like:

type buildHandlerInterface interface {
    cleanup()
    setup(required []api.Script, optional []api.Script) error
    determineIncremental() error
    Request() *api.Request
    Result() *api.Result
    saveArtifacts() error
    fetchSource() error
    execute(command api.Script) error
    wasExpectedError(text string) bool
    build() error
}

and so on. STI will still be the 'default' build type, but we can also support different build models in the future.

This issue requires massive refactoring, so I don't think we can make it happen before GA, but I want to keep this issue open so that we don't forget about it.

Remove the 'sti' from the package namespace

Is it really necessary to have 'sti' in the package name?

What we do now:

import "github.com/openshift/source-to-image/pkg/sti/...."

What we should do:

import "github.com/openshift/source-to-image/pkg/...."

Include STI scripts in image

I would like to include STI scripts inside the image. Seeing as the image and sti scripts will likely be developed together, this seems like a reasonable requirement.

Need to show output of assemble script run in default loglevel

As the output of assemble is most likely the most important thing end-users will want to see, we should display it by default (or have a special option for that: --output-assemble?).

In Origin we should show this output; nothing else is important for the end user (unless the end user is debugging an issue in STI itself).

Does sti support the building from a war file?

Hi, I want to know if sti supports building an image from a war file. I have only found examples about building from source code into an image. But I found this page: https://github.com/openshift/openshift-pep/blob/master/openshift-pep-013-openshift-3.md
In the "Build" section, it says:
"Example: post a WAR file to a source-to-images build that results in that WAR being deployed"
Could you give me some suggestions on how to build an image from a war file with sti build? Thank you!

Build and run as different users

I would like the built image to run with a more restricted user than the one which performed the build. This could be supported by running assemble as a non-default user, or by changing the image's default user after assembly.

It can be done now with docker run -u, but it would be nice to support this automatically.

`ONBUILD ENTRYPOINT` in base images

Since STI recently started allowing arbitrary images for builds, a problem arises when one of the base images includes an ONBUILD ENTRYPOINT instruction. This results in an image where the .sti/bin/run command will be appended after the ENTRYPOINT, leading to some weird behavior depending on what was specified as the ENTRYPOINT.

Docker caching and incremental build

Hi,
I'm wondering am I holding it wrong or have I found a problem with sti. I have the following images:
Base image -> ruby x version and all other deps like imagemagick, etc. Based on a Dockerfile. Built by docker build in seconds.
Test image -> built by STI based on the base image. Contains all gems from the gemfile.
Staging image -> built by STI based on the base image. Excludes test and development gems from the gemfile, precompiles assets for staging.
Production image -> built by STI based on the base image. Excludes test and development gems from the gemfile, precompiles assets for production.

STI allowed me to make the process of building the staging and production images significantly faster, but at the end of the day I need to push those images to a registry (Docker Hub). And that is where I found a problem.

If I look at our staging/production image I have a bunch of layers from base image and one giant layer (442MB) from sti:

IMAGE               CREATED             CREATED BY                                      SIZE
6b2883b78afc        20 minutes ago      /bin/sh -c tar -C /tmp -xf - && /tmp/scripts/   442.5 MB

I'm ok with pushing this layer once, BUT it gets regenerated EVERY time I run it. So even if building the image is quite fast, pushing it to the registry and then pulling it takes more time, especially as the number of servers grows.
Is there a way to cache this layer somehow and build on top of it?

Send output to stdout, not stderr

Currently everything is sent to stderr. The console output should be sent to stdout instead. Only errors should be sent to stderr.

Ignoring files

During an investigation of the very big image sizes produced by STI, I have found that STI completely ignores .gitignore and .dockerignore and puts every folder into the container it is building.
We have some temporary files, ignored in both git and dockerignore, with significant weight (about 125MB), adding quite a lot of unnecessary data to the production image. I cannot find a way to make STI ignore those files. It would be good if STI could read .dockerignore or .gitignore and not put those files inside the container.

STI should return non-zero exit code upon failure

STI's exit code is always 0. This makes it difficult to determine if the build failed, for example, unless you scrape the invocation's output. I'd recommend that we exit non-zero when appropriate.

git clone, sti create, make results in failure

$ git clone https://github.com/openshift/source-to-image
$ cd source-to-image/
$ make
docker build -t test .
Sending build context to Docker daemon 14.34 kB
Sending build context to Docker daemon 
Step 0 : FROM openshift/base-centos7
# Executing 4 build triggers
Trigger 0, COPY ./.sti/bin/ /usr/local/sti
Step 0 : COPY ./.sti/bin/ /usr/local/sti
 ---> Using cache
Trigger 1, COPY ./contrib/ /opt/openshift
Step 0 : COPY ./contrib/ /opt/openshift
INFO[0000] contrib/: no such file or directory          
Makefile:5: recipe for target 'build' failed
make: *** [build] Error 1

Adding a contrib folder works. If adding the folder as part of create is the solution, I can code this.

Before I do this, have I misunderstood something though?

STI swallows error messages from assemble script

I'm currently building the sti-python image and found that STI swallows error messages from the assemble script.

$ sti build 3.3/test/klein-test-app/ openshift/python-33-centos7 python-sample-app --forcePull=false
I0330 17:58:31.082271 19058 sti.go:371] ---> Installing application source
I0330 17:58:31.083148 19058 sti.go:371] ---> Building your Python application from source
I0330 17:58:31.083265 19058 sti.go:371] python setup.py install #develop

vs.

$ sti build 3.3/test/klein-test-app/ openshift/python-33-centos7 python-sample-app --forcePull=false --loglevel=1
I0330 17:58:46.910338 19213 sti.go:111] Building python-sample-app
I0330 17:58:46.916831 19213 sti.go:181] Using assemble from image:///usr/local/sti
I0330 17:58:46.916850 19213 sti.go:181] Using run from image:///usr/local/sti
I0330 17:58:46.916856 19213 sti.go:181] Using save-artifacts from image:///usr/local/sti
I0330 17:58:46.916862 19213 sti.go:119] Clean build will be performed
I0330 17:58:46.916867 19213 sti.go:130] Building python-sample-app
I0330 17:58:46.916877 19213 sti.go:313] No .sti/environment provided (no evironment file found in application sources)
I0330 17:58:47.090252 19213 sti.go:371] ---> Installing application source
I0330 17:58:47.091183 19213 sti.go:371] ---> Building your Python application from source
I0330 17:58:47.091352 19213 sti.go:371] python setup.py install #develop
E0330 17:58:47.193323 19213 sti.go:389] error in klein-test-app setup command: ('Invalid module name', 'klein-test-app')
I0330 17:58:47.351302 19213 main.go:202] An error occurred: non-zero (13) exit code from openshift/python-33-centos7

The swallowed line is:

E0330 17:58:47.193323 19213 sti.go:389] error in klein-test-app setup command: ('Invalid module name', 'klein-test-app')

Cannot build

Hi,

I was trying to play a little bit with STI to test some stuff with geard. I didn't manage to make it work on OS X or Fedora 20. When I run sti build test_sources/applications/html pmorie/fedora-mock sti_app this is what I get:

INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): localhost
INFO:sti.cmd.builder:Building new docker image
Traceback (most recent call last):
  File "/usr/local/bin/sti", line 10, in <module>
    sys.exit(main())
  File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sti/cmd/builder.py", line 362, in main
    builder.main()
  File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sti/cmd/builder.py", line 348, in main
    self.build(working_dir, build_image, source, is_incremental, user, app_image, env_str)
  File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sti/cmd/builder.py", line 245, in build
    img = self.build_deployable_image(image_name, build_dir, tag, env_str, incremental_build)
  File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sti/cmd/builder.py", line 229, in build_deployable_image
    img, logs = self.docker_client.build(tag=tag, path=context_dir, rm=True)
ValueError: too many values to unpack

I don't really know Python so digging into the code to find out myself isn't that easy. Any help is appreciated, thanks.

repeat build ignores image:// save-artifacts script

From the build log (a previously built test image exists):

$ sti build server accursoft/ghc-network test --loglevel=3
download.go:112] Using image internal scripts from: image://opt/sti/save-artifacts
build.go:80] Clean build will be performed

If I specify the same scripts with -s, it re-uses the build artifacts.

Hide private `Request` fields

Since the extraction of types.go into its own api package, there are a couple of fields that are unnecessarily public, including:

  • WorkingDir
  • Incremental
  • ExternalRequiredScripts
  • ExternalOptionalScripts

Convert STI_SCRIPTS_URL environment variable into LABEL

Currently, we use ENV STI_SCRIPTS_URL image://... in our Docker images. This variable points to the default location for the STI scripts used for the STI build. I think it would be nice to switch from the ENV variable to a LABEL instruction. There are several benefits of doing that:

  • We can namespace it (openshift.io/sti-scripts-url)
  • The UI would be able to import this and offer users a text box where they can change the default location to a custom one
  • We will have one less ENV variable in Docker images

@bparees @soltysh thoughts?
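
For illustration, the proposal amounts to replacing the ENV line in a builder image's Dockerfile with a namespaced label; the label name below follows the suggestion above, and the exact naming was still TBD:

# Current usage (from the description above):
ENV STI_SCRIPTS_URL image://...
# Proposed replacement (hypothetical name; key quoted because of the slash):
LABEL "openshift.io/sti-scripts-url"="image://..."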

Make run script optional

Currently the run script is required. I would like it to be optional. This would let us reuse the CMD command specified in the image that the builder image extends.

Imagine the jboss/wildfly image. This upstream image already defines a CMD command. I don't see a reason to duplicate it just to satisfy the requirement of having the run command. In my case I would just repeat what I have already defined, which is unnecessary, adds another file, and could possibly confuse people.

Without a run script, the CMD or ENTRYPOINT from the extended image should be used as-is. In case the run script exists, it should override those.

Add better mechanism for `api.Request` validation

Currently the api.Request object is validated only here and in all functions creating those objects. As long as STI is used standalone this is OK, but when you incorporate STI as part of origin the validation does not happen, though it should.

support "additive" scripts for STI process

Currently, when customizing the STI scripts, one must take the original script and then add to it.

It would be nice if we supported some kind of additive mechanism, either by indicating it in the script, or by having a script of the same name with a number (e.g. ##-assemble [happens before assemble] and assemble-## [happens after assemble]).

Set the WORKDIR to the path where the application code lives

It would be useful to have the WORKDIR set to the path where your application code lives in images created with STI.

That way, it would be possible to call scripts you might have in your code base passing its relative path to docker exec or osc exec.

docker exec -it <container> bash would start in the "right" directory.

If there is no opposition to this idea, I could implement this myself given a pointer to where to start 😄
pkg/create/templates/docker.go?

sti hangs in docker pull if tag is invalid

Run STI and provide no tag on the image name, for an image repo where there is no "latest" tag; the result is that STI hangs forever in "pulling image"... it seems like it must not be handling an error correctly:

$ docker pull ce-registry.usersys.redhat.com/jboss-webserver3/tomcat8-openshift
FATA[0005] Tag latest not found in repository ce-registry.usersys.redhat.com/jboss-webserver3/tomcat8-openshift

$ sti build https://github.com/bparees/session-app ce-registry.usersys.redhat.com/jboss-webserver3/tomcat8-openshift badout --loglevel=5
I0506 18:54:38.631805 07628 docker.go:173] Pulling image ce-registry.usersys.redhat.com/jboss-webserver3/tomcat8-openshift

(never errors/returns)

Note that if you provide an actual invalid tag, it does immediately error out. Not sure it's actually valid to have a repo with no "latest" tag in it?

$ sti build https://github.com/bparees/session-app ce-registry.usersys.redhat.com/jboss-webserver3/tomcat8-openshift:badtag badout --loglevel=5
I0506 18:55:08.986159 08143 docker.go:173] Pulling image ce-registry.usersys.redhat.com/jboss-webserver3/tomcat8-openshift:badtag
I0506 18:55:12.403179 08143 docker.go:177] An error was received from the PullImage call: Tag badtag not found in repository ce-registry.usersys.redhat.com/jboss-webserver3/tomcat8-openshift

Allow seeing STI logs when doing integration tests

Currently, adding the -v flag to hack/test-integration.sh shows only the logs from the integration tests themselves. When searching for a test error, I'd also like to see STI internals, which you turn on with --loglevel=3.

Make output image optional

This comes from here: openshift/origin#1119 (diff). Generally speaking, the idea is to stress test the building process itself without actually producing/committing an image. STI currently requires that image, and it should not, especially when that PR gets into origin.

run image as part of running source-to-image

It came up in conversation that it might be nice if the sti "program" accepted a flag that would cause the resulting build to stay running as a container instead of actually being committed as an image.

There may be cases where I want to test the result of my STI process locally without having to push the Docker image somewhere.

I guess this flag would essentially cause "run" to be executed immediately after "assemble" inside the instance of the builder image.

It's not a huge time savings, but it could potentially make a few other automation tasks easier... maybe?

Unable to perform Incremental build

When trying to perform an incremental build, I get the following result:

......
E0415 20:49:30.409228 17401 tar.go:158] Error reading next tar header: io: read/write on closed pipe
W0415 20:49:31.287679 17401 sti.go:125] Error saving previous build artifacts: timeout waiting for tar stream
......

The whole log of the incremental build is available at http://pastebin.test.redhat.com/276603

environment variables can't contain commas

Since you split the env command-line argument on ',', the environment variable values themselves can't contain commas. I hit this trying to pass "-P some-repo,some-other-repo" to maven via MAVEN_ARGS.

sti build hangs

I am using sti from the command line and in most cases I notice that it "hangs".

When I use log level 3, I see that the build output always hangs at exactly the same line.

What is strange, though, is that when I check the docker logs, I can see that the assemble script has successfully finished and that the container has exited gracefully. The target image, of course, is never created.

Over the last couple of days I've made many attempts, and only two of them were successful (I haven't done anything different).

Cannot redirect output

It seems that it's not possible to redirect the output of the sti command:

$ sti build --loglevel=5 --forcePull=false https://github.com/goldmann/openshift-eap-examples --contextDir=custom-module 4e42b3029b98 test-jee-app > ~/sti/test.log
-bash: /home/goldmann/sti/test.log: No such file or directory

Add support for STI config file to exist in GIT repo

I think this will help provide a much cleaner user experience with the STI command line, where the user doesn't need to know all possible options/docker image names/etc. just to trigger the STI build.

I'm proposing that this command:

sti build https://github.com/user/repo

will clone the GIT repo, then check if there is an 'sti.json' (naming TBD) file and prefill the STI build options automatically (reusing the --use-config logic).

install to _output?

As a go newbie, it took me about half an hour to work out where the sti command was being installed. I eventually found it in _output by reading the hack sources.

Is this normal, or some kind of go convention? Should it be added to the installation instructions?

An error occurred: non-zero (13) exit code from openshift/wildfly-8-centos

When I tested the sti build command according to the readme.md, I got the following error:
sti build git://github.com/bparees/openshift-jee-sample openshift/wildfly-8-centos test-jee-app
I0317 14:39:07.592323 26821 sti.go:111] Building test-jee-app
Cloning into '/tmp/sti424670445/upload/src'...
remote: Counting objects: 29, done.
remote: Compressing objects: 100% (19/19), done.
remote: Total 29 (delta 1), reused 29 (delta 1), pack-reused 0
Receiving objects: 100% (29/29), 22.84 KiB | 12.00 KiB/s, done.
Resolving deltas: 100% (1/1), done.
Checking connectivity... done.
F0317 14:39:55.761126 26821 main.go:201] An error occurred: non-zero (13) exit code from openshift/wildfly-8-centos

Could anyone give me some suggestions? Thank you!
