source-to-image's Issues

STI is not extensible

Right now the STI source code is very 'sti-centric'. If I want to add my own build type (Heroku buildpack, generic Docker build, etc.), I can't easily plug it into the existing STI codebase. We should have better interfaces and factories for this:

type Builder interface {
  Build()
}

type Downloader interface {
  Download()
}

What we have right now is something like:

type buildHandlerInterface interface {
    cleanup()
    setup(required []api.Script, optional []api.Script) error
    determineIncremental() error
    Request() *api.Request
    Result() *api.Result
    saveArtifacts() error
    fetchSource() error
    execute(command api.Script) error
    wasExpectedError(text string) bool
    build() error
}

and so on. STI will still be the 'default' build type, but we can also support different build models in the future.

This requires massive refactoring, so I don't think we can land it before GA, but I want to keep this issue open so that we don't forget about it.
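
A minimal sketch of what the factory side could look like (the error return on Build, the Register/NewBuilder names, and the strategy names are all illustrative, not the current API):

package build

import "fmt"

// Builder is the high-level interface every build strategy implements.
type Builder interface {
    Build() error
}

// factories maps a strategy name ("sti", "docker", "buildpack", ...) to a
// constructor. Strategies would register themselves, e.g. from an init().
var factories = map[string]func() Builder{}

// Register makes a build strategy available under the given name.
func Register(name string, factory func() Builder) {
    factories[name] = factory
}

// NewBuilder returns the strategy registered under the given name.
func NewBuilder(name string) (Builder, error) {
    factory, ok := factories[name]
    if !ok {
        return nil, fmt.Errorf("unknown build strategy: %q", name)
    }
    return factory(), nil
}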

Does sti support building from a WAR file?

Hi, I want to know whether sti supports building an image from a WAR file. I have only found examples of building an image from source code. But on this page: https://github.com/openshift/openshift-pep/blob/master/openshift-pep-013-openshift-3.md
in the "Build" section, it says:
"Example: post a WAR file to a source-to-images build that results in that WAR being deployed"
Could you give me some suggestions on how to build an image from a WAR file with sti build? Thank you!

Convert STI_SCRIPTS_URL environment variable into LABEL

Currently, we use ENV STI_SCRIPTS_URL image://... in our Docker images. This variable points to the default location of the STI scripts used for the STI build. I think it would be nice to switch from the ENV variable to a LABEL instruction. There are several benefits of doing that:

  • We can namespace the key (openshift.io/sti-scripts-url)
  • The UI would be able to import this and offer the user a text box where they can change the default location to a custom one
  • We will have one less ENV variable in Docker images

@bparees @soltysh thoughts?
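
For backward compatibility, the lookup could prefer the LABEL and fall back to the old ENV variable. A sketch (the label key is the suggestion above; the use of go-dockerclient and its InspectImage call are assumptions about how we'd wire it in):

package main

import (
    "fmt"
    "strings"

    docker "github.com/fsouza/go-dockerclient"
)

// scriptsURL returns the STI scripts location recorded on an image,
// preferring the proposed namespaced LABEL over the legacy ENV variable.
func scriptsURL(client *docker.Client, imageName string) (string, error) {
    image, err := client.InspectImage(imageName)
    if err != nil {
        return "", err
    }
    if image.Config != nil {
        if url, ok := image.Config.Labels["openshift.io/sti-scripts-url"]; ok {
            return url, nil
        }
        for _, env := range image.Config.Env {
            if strings.HasPrefix(env, "STI_SCRIPTS_URL=") {
                return strings.TrimPrefix(env, "STI_SCRIPTS_URL="), nil
            }
        }
    }
    return "", fmt.Errorf("no scripts URL found on image %s", imageName)
}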

sti build hangs

I am using sti from the command line and in most cases I notice that it hangs.

When I use log level 3, I see that the build output always hangs at exactly the same line.

What is strange, though, is that when I check the docker logs, I can see that the assemble script has finished successfully and that the container has exited gracefully. The target image, of course, is never created.

Over the last couple of days I've made many attempts and only two of them were successful (I haven't done anything differently).

sti hangs in docker pull if tag is invalid

Run STI and provide no tag on the image name, for an image repo that has no "latest" tag; the result is that STI hangs forever in "pulling image". It seems it must not be handling an error correctly:

$ docker pull ce-registry.usersys.redhat.com/jboss-webserver3/tomcat8-openshift
FATA[0005] Tag latest not found in repository ce-registry.usersys.redhat.com/jboss-webserver3/tomcat8-openshift

$ sti build https://github.com/bparees/session-app ce-registry.usersys.redhat.com/jboss-webserver3/tomcat8-openshift badout --loglevel=5
I0506 18:54:38.631805 07628 docker.go:173] Pulling image ce-registry.usersys.redhat.com/jboss-webserver3/tomcat8-openshift

(never errors/returns)

Note that if you provide an explicitly invalid tag, it does error out immediately. I'm not sure it's actually valid to have a repo with no "latest" tag in it?

$ sti build https://github.com/bparees/session-app ce-registry.usersys.redhat.com/jboss-webserver3/tomcat8-openshift:badtag badout --loglevel=5
I0506 18:55:08.986159 08143 docker.go:173] Pulling image ce-registry.usersys.redhat.com/jboss-webserver3/tomcat8-openshift:badtag
I0506 18:55:12.403179 08143 docker.go:177] An error was received from the PullImage call: Tag badtag not found in repository ce-registry.usersys.redhat.com/jboss-webserver3/tomcat8-openshift
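
One plausible fix, sketched here rather than taken from the actual STI code: normalize the image name before pulling so a missing tag becomes an explicit :latest, which the daemon then rejects with an error (as in the second run above) instead of leaving the pull waiting.

package main

import "strings"

// withDefaultTag appends ":latest" when the image name carries no tag, so
// pulling a repository that has no "latest" tag fails fast instead of
// hanging. A ':' may also appear in a registry port (host:5000/repo), so
// only the segment after the last '/' is inspected.
func withDefaultTag(name string) string {
    lastSlash := strings.LastIndex(name, "/")
    if strings.Contains(name[lastSlash+1:], ":") {
        return name
    }
    return name + ":latest"
}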

repeat build ignores image:// save-artifacts script

From the build log (a previously built test image exists):

$ sti build server accursoft/ghc-network test --loglevel=3
download.go:112] Using image internal scripts from: image://opt/sti/save-artifacts
build.go:80] Clean build will be performed

If I specify the same scripts with -s, it re-uses the build artifacts.

STI should return non-zero exit code upon failure

STI's exit code is always 0. This makes it difficult to determine whether the build failed, for example, without scraping the invocation's output. I'd recommend that we exit non-zero when appropriate.
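
A minimal sketch of the shape of the fix (runBuild is a stand-in for the real entry point): propagate any build failure into the process exit code.

package main

import (
    "fmt"
    "os"
)

func main() {
    if err := runBuild(); err != nil {
        fmt.Fprintln(os.Stderr, "error:", err)
        os.Exit(1) // non-zero so scripts and CI can detect the failure
    }
}

// runBuild stands in for the real build invocation.
func runBuild() error {
    return nil
}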

Add support for an STI config file in the Git repo

I think this would make for a much cleaner user experience with the STI command line, where the user doesn't need to know all the possible options, Docker image names, etc. just to trigger an STI build.

I'm proposing that this command:

sti build https://github.com/user/repo

will clone the Git repo, check whether an 'sti.json' file (naming TBD) exists, and prefill the STI build options automatically (reusing the --use-config logic).
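
A sketch of the prefill step (the sti.json name and its fields are placeholders pending the naming decision):

package main

import (
    "encoding/json"
    "os"
    "path/filepath"
)

// RepoConfig mirrors a hypothetical sti.json checked into the repository.
type RepoConfig struct {
    BuilderImage string `json:"builderImage"`
    Tag          string `json:"tag"`
}

// loadRepoConfig reads sti.json from a cloned repo. It returns ok=false when
// the file is absent, so explicit command-line options stay authoritative.
func loadRepoConfig(cloneDir string) (*RepoConfig, bool, error) {
    data, err := os.ReadFile(filepath.Join(cloneDir, "sti.json"))
    if os.IsNotExist(err) {
        return nil, false, nil
    }
    if err != nil {
        return nil, false, err
    }
    cfg := &RepoConfig{}
    if err := json.Unmarshal(data, cfg); err != nil {
        return nil, false, err
    }
    return cfg, true, nil
}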

git clone, sti create, make results in failure

$ git clone https://github.com/openshift/source-to-image
$ cd source-to-image/
$ make
docker build -t test .
Sending build context to Docker daemon 14.34 kB
Sending build context to Docker daemon 
Step 0 : FROM openshift/base-centos7
# Executing 4 build triggers
Trigger 0, COPY ./.sti/bin/ /usr/local/sti
Step 0 : COPY ./.sti/bin/ /usr/local/sti
 ---> Using cache
Trigger 1, COPY ./contrib/ /opt/openshift
Step 0 : COPY ./contrib/ /opt/openshift
INFO[0000] contrib/: no such file or directory          
Makefile:5: recipe for target 'build' failed
make: *** [build] Error 1

Adding a contrib folder works. If adding the folder as part of sti create is the solution, I can code this.

Before I do that, though, have I misunderstood something?

Cannot redirect output

It seems that it's not possible to redirect the output of the sti command:

$ sti build --loglevel=5 --forcePull=false https://github.com/goldmann/openshift-eap-examples --contextDir=custom-module 4e42b3029b98 test-jee-app > ~/sti/test.log
-bash: /home/goldmann/sti/test.log: No such file or directory

Ignoring files

While investigating the very big image sizes produced by STI, I found that STI completely ignores .gitignore and .dockerignore and puts every folder into the container it is building.
We have some temporary files, ignored in both .gitignore and .dockerignore, with significant weight (about 125 MB), adding quite a lot of unnecessary data to the production image. I cannot find a way to make STI ignore those files. It would be good if STI could read .dockerignore or .gitignore and not put those files inside the container.
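
A rough sketch of the kind of filter STI could apply while collecting sources (the matching here is simple filepath.Match against the path and its base name; real .dockerignore semantics are richer):

package main

import (
    "bufio"
    "os"
    "path/filepath"
    "strings"
)

// readIgnorePatterns loads one-pattern-per-line files such as .dockerignore,
// skipping blank lines and comments.
func readIgnorePatterns(path string) ([]string, error) {
    f, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    defer f.Close()
    var patterns []string
    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        line := strings.TrimSpace(scanner.Text())
        if line != "" && !strings.HasPrefix(line, "#") {
            patterns = append(patterns, line)
        }
    }
    return patterns, scanner.Err()
}

// ignored reports whether a path (relative to the source root) matches any
// ignore pattern, so it can be skipped when building the upload tar.
func ignored(relPath string, patterns []string) bool {
    for _, p := range patterns {
        if ok, _ := filepath.Match(p, relPath); ok {
            return true
        }
        if ok, _ := filepath.Match(p, filepath.Base(relPath)); ok {
            return true
        }
    }
    return false
}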

Cannot build

Hi,

I was trying to play a little bit with STI to test some stuff with geard. I didn't manage to make it work on OS X or Fedora 20. When I run sti build test_sources/applications/html pmorie/fedora-mock sti_app, this is what I get:

INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): localhost
INFO:sti.cmd.builder:Building new docker image
Traceback (most recent call last):
  File "/usr/local/bin/sti", line 10, in <module>
    sys.exit(main())
  File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sti/cmd/builder.py", line 362, in main
    builder.main()
  File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sti/cmd/builder.py", line 348, in main
    self.build(working_dir, build_image, source, is_incremental, user, app_image, env_str)
  File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sti/cmd/builder.py", line 245, in build
    img = self.build_deployable_image(image_name, build_dir, tag, env_str, incremental_build)
  File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sti/cmd/builder.py", line 229, in build_deployable_image
    img, logs = self.docker_client.build(tag=tag, path=context_dir, rm=True)
ValueError: too many values to unpack

I don't really know Python, so digging into the code to find out myself isn't that easy. Any help is appreciated, thanks.

Remove the 'sti' from the package namespace

Is it really necessary to have 'sti' in the package name?

What we do now:

import "github.com/openshift/source-to-image/pkg/sti/...."

What we should do:

import "github.com/openshift/source-to-image/pkg/...."

Make output image optional

This comes from openshift/origin#1119 (diff). Generally speaking, the idea is to stress-test the building process itself without actually producing/committing an image. STI currently requires that image, and it shouldn't, especially once that PR gets into origin.

support "additive" scripts for STI process

Currently, when customizing the STI scripts, one must take the original script and then add to it.

It would be nice if we supported some kind of additive mechanism, either by indicating it in the script, or by having a script of the same name with a number (e.g. ##-assemble [runs before assemble] and assemble-## [runs after assemble]); see the sketch below.
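
A sketch of how the numbered-script discovery could work (the ##-assemble / assemble-## naming is the proposal above; everything else is illustrative):

package main

import "path/filepath"

// assembleChain returns the ordered list of scripts to run for the assemble
// phase: numbered pre-scripts (##-assemble), the main assemble script, then
// numbered post-scripts (assemble-##). filepath.Glob returns names in sorted
// order, so zero-padded prefixes like 10-assemble, 20-assemble run in order.
func assembleChain(scriptsDir string) ([]string, error) {
    pre, err := filepath.Glob(filepath.Join(scriptsDir, "[0-9][0-9]-assemble"))
    if err != nil {
        return nil, err
    }
    post, err := filepath.Glob(filepath.Join(scriptsDir, "assemble-[0-9][0-9]"))
    if err != nil {
        return nil, err
    }
    chain := append(pre, filepath.Join(scriptsDir, "assemble"))
    return append(chain, post...), nil
}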

install to _output?

As a Go newbie, it took me about half an hour to work out where the sti command was being installed. I eventually found it in _output by reading the hack sources.

Is this normal, or some kind of Go convention? Should it be added to the installation instructions?

environment variables can't contain commas

Since the env command-line argument is split on ',', the environment variable values themselves can't contain commas. I hit this trying to pass "-P some-repo,some-other-repo" to Maven via MAVEN_ARGS.
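
Today the parsing is effectively strings.Split on ','. A sketch of an escape-aware alternative (the \, escape is a suggested convention, not current sti syntax):

package main

import "strings"

// splitEnv splits NAME=VALUE,NAME=VALUE on unescaped commas, so a value like
// MAVEN_ARGS=-P some-repo\,some-other-repo survives intact.
func splitEnv(s string) []string {
    var parts []string
    var current strings.Builder
    escaped := false
    for _, r := range s {
        switch {
        case escaped:
            current.WriteRune(r)
            escaped = false
        case r == '\\':
            escaped = true
        case r == ',':
            parts = append(parts, current.String())
            current.Reset()
        default:
            current.WriteRune(r)
        }
    }
    parts = append(parts, current.String())
    return parts
}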

Set the WORKDIR to the path where the application code lives

It would be useful to have the WORKDIR set to the path where your application code lives in images created with STI.

That way, it would be possible to call scripts you might have in your code base by passing their relative paths to docker exec or osc exec.

docker exec -it <container> bash would start in the "right" directory.

If there is no opposition to this idea, I could implement this myself given a pointer to where to start 😄
pkg/create/templates/docker.go?

Send output to stdout, not stderr

Currently everything is sent to stderr. Console output should be sent to stdout instead; only errors should go to stderr.
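
A minimal sketch of the split (helper names are illustrative): console output goes to stdout so it can be piped or redirected, and stderr is reserved for errors.

package main

import (
    "fmt"
    "os"
)

// report prints normal console output on stdout.
func report(format string, args ...interface{}) {
    fmt.Fprintf(os.Stdout, format+"\n", args...)
}

// reportError prints failures on stderr, keeping stdout clean for pipelines.
func reportError(err error) {
    fmt.Fprintln(os.Stderr, "error:", err)
}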

Hide private `Request` fields

Since the extraction of types.go into its own api package, there are a couple of fields that are unnecessarily public, including:

  • WorkingDir
  • Incremental
  • ExternalRequiredScripts
  • ExternalOptionalScripts

run image as part of running source-to-image

It came up in conversation that it might be nice if the sti "program" accepted a flag that would cause the resulting built Docker image to be left running as a container instead of only being committed as an image.

There may be cases where I want to test the result of my STI process locally without having to push the Docker image somewhere.

I guess this flag would essentially cause "run" to be executed immediately after "assemble" inside the instance of the builder image.

It's not a huge time savings, but it could potentially make a few other automation tasks easier... maybe?

STI swallows error messages from assemble script

I'm currently building the sti-python image and found that STI swallows error messages from the assemble script.

$ sti build 3.3/test/klein-test-app/ openshift/python-33-centos7 python-sample-app --forcePull=false
I0330 17:58:31.082271 19058 sti.go:371] ---> Installing application source
I0330 17:58:31.083148 19058 sti.go:371] ---> Building your Python application from source
I0330 17:58:31.083265 19058 sti.go:371] python setup.py install #develop

vs.

$ sti build 3.3/test/klein-test-app/ openshift/python-33-centos7 python-sample-app --forcePull=false --loglevel=1
I0330 17:58:46.910338 19213 sti.go:111] Building python-sample-app
I0330 17:58:46.916831 19213 sti.go:181] Using assemble from image:///usr/local/sti
I0330 17:58:46.916850 19213 sti.go:181] Using run from image:///usr/local/sti
I0330 17:58:46.916856 19213 sti.go:181] Using save-artifacts from image:///usr/local/sti
I0330 17:58:46.916862 19213 sti.go:119] Clean build will be performed
I0330 17:58:46.916867 19213 sti.go:130] Building python-sample-app
I0330 17:58:46.916877 19213 sti.go:313] No .sti/environment provided (no evironment file found in application sources)
I0330 17:58:47.090252 19213 sti.go:371] ---> Installing application source
I0330 17:58:47.091183 19213 sti.go:371] ---> Building your Python application from source
I0330 17:58:47.091352 19213 sti.go:371] python setup.py install #develop
E0330 17:58:47.193323 19213 sti.go:389] error in klein-test-app setup command: ('Invalid module name', 'klein-test-app')
I0330 17:58:47.351302 19213 main.go:202] An error occurred: non-zero (13) exit code from openshift/python-33-centos7

The swallowed line is:

E0330 17:58:47.193323 19213 sti.go:389] error in klein-test-app setup command: ('Invalid module name', 'klein-test-app')

An error occurred: non-zero (13) exit code from openshift/wildfly-8-centos

When testing the sti build command according to the README.md, I got the following error:
sti build git://github.com/bparees/openshift-jee-sample openshift/wildfly-8-centos test-jee-app
I0317 14:39:07.592323 26821 sti.go:111] Building test-jee-app
Cloning into '/tmp/sti424670445/upload/src'...
remote: Counting objects: 29, done.
remote: Compressing objects: 100% (19/19), done.
remote: Total 29 (delta 1), reused 29 (delta 1), pack-reused 0
Receiving objects: 100% (29/29), 22.84 KiB | 12.00 KiB/s, done.
Resolving deltas: 100% (1/1), done.
Checking connectivity... done.
F0317 14:39:55.761126 26821 main.go:201] An error occurred: non-zero (13) exit code from openshift/wildfly-8-centos

Could anyone give me some suggestions? Thank you!

Build and run as different users

I would like the built image to run as a more restricted user than the one that performed the build. This could be supported by running assemble as a non-default user, or by changing the image's default user after assembly.

It can be done now with docker run -u, but it would be nice to support this automatically.

Unable to perform Incremental build

When trying to perform an incremental build, I get the following result:

......
E0415 20:49:30.409228 17401 tar.go:158] Error reading next tar header: io: read/write on closed pipe
W0415 20:49:31.287679 17401 sti.go:125] Error saving previous build artifacts: timeout waiting for tar stream
......

The whole log of the incremental build is available at http://pastebin.test.redhat.com/276603

Make run script optional

Currently the run script is required. I would like it to be optional. This would let STI reuse the CMD that was already specified in the image we extend when creating the builder image.

Imagine the jboss/wildfly image. This upstream image already defines a CMD. I don't see a reason to duplicate it just to satisfy the requirement of having a run command. In my case I would just repeat what I have already defined, which is unnecessary, adds another file, and could confuse people.

Without a run script, the CMD or ENTRYPOINT from the extended image should be used as-is. If a run script exists, it should override those.
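
A sketch of the fallback decision (names are hypothetical): if no run script is present, leave the extended image's CMD/ENTRYPOINT alone.

package main

import (
    "os"
    "path/filepath"
)

// runScriptPath returns the path of a user-provided run script, or ok=false
// when none exists, in which case the builder would keep the CMD/ENTRYPOINT
// inherited from the extended image instead of requiring a duplicate.
func runScriptPath(scriptsDir string) (string, bool) {
    path := filepath.Join(scriptsDir, "run")
    if info, err := os.Stat(path); err == nil && !info.IsDir() {
        return path, true
    }
    return "", false
}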

Allow seeing STI logs when doing integration tests

Currently, adding the -v flag to hack/test-integration.sh shows only the logs from the integration tests themselves, whereas when hunting for a test error I'd also like to see STI internals, which you turn on with --loglevel=3.

Docker caching and incremental build

Hi,
I'm wondering whether I'm holding it wrong or whether I've found a problem with sti. I have the following images:

  • Base image: Ruby at version x and all the other deps like ImageMagick etc. Based on a Dockerfile; built by docker build in seconds.
  • Test image: built by STI on top of the base image. Contains all gems from the Gemfile.
  • Staging image: built by STI on top of the base image. Excludes test and development gems from the Gemfile, precompiles assets for staging.
  • Production image: built by STI on top of the base image. Excludes test and development gems from the Gemfile, precompiles assets for production.

STI allowed me to make the process of building the staging and production images significantly faster, but at the end of the day I need to push those images to a registry (Docker Hub). And that is where I found a problem.

If I look at our staging/production image, I have a bunch of layers from the base image and one giant layer (442 MB) from sti:

IMAGE               CREATED             CREATED BY                                      SIZE
6b2883b78afc        20 minutes ago      /bin/sh -c tar -C /tmp -xf - && /tmp/scripts/   442.5 MB

I'm OK with pushing this layer once, BUT it gets regenerated EVERY time I run it. So even if building the image is quite fast, pushing it to the registry and then pulling it takes more time, especially as the number of servers grows.
Is there a way to cache this layer somehow and build on top of it?

Add better mechanism for `api.Request` validation

Currently the api.Request object is validated only here and in all the functions creating those objects. As long as STI is used standalone this is OK, but when you incorporate STI as part of origin the validation does not happen, though it should.
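
A sketch of a centralized validator that both the CLI and origin could call (the fields shown are a trimmed-down stand-in for the real api.Request):

package api

import "errors"

// Request is a trimmed-down stand-in for the real api.Request.
type Request struct {
    Source    string // git URL or local path
    BaseImage string // builder image name
    Tag       string // output image tag
}

// Validate gathers the checks that are currently scattered across the
// constructors, so embedders such as origin get them in one call.
func (r *Request) Validate() error {
    if r.Source == "" {
        return errors.New("request is missing a source location")
    }
    if r.BaseImage == "" {
        return errors.New("request is missing a builder image")
    }
    return nil
}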

Include STI scripts in image

I would like to include the STI scripts inside the image. Seeing as the image and the STI scripts will likely be developed together, this seems like a reasonable requirement.

Need to show output of assemble script run in default loglevel

As the output of assemble is most likely the most important thing end users will want to see, we should display it by default (or have a special option for that, --output-assemble?).

In Origin we should show this output; nothing else is important for the end user (unless the end user is debugging an issue in STI itself).

`ONBUILD ENTRYPOINT` in base images

Since STI recently started allowing arbitrary images for builds, a problem arises when one of the base images includes an ONBUILD ENTRYPOINT instruction. This results in an image where the .sti/bin/run command is appended after the ENTRYPOINT, which causes some weird behavior depending on what was specified as the ENTRYPOINT.
