
origin's Issues

Invalid flag handling message needs work

When specifying an invalid flag on the openshift CLI, there are three problems:

  1. it reports all flags as invalid, not just the invalid one
  2. for each invalid flag, it refers the user to the help; it should only do this once, at the end
  3. it reports an unknown flag, then prints usage, then reports a bunch of invalid flags

here's an example (only "--etcdBADDir" is an invalid flag):
[bparees@bparees ~/git/gocode/src/github.com/openshift/origin/examples/simple-ruby-app (deploy)]$ DOCKER_REGISTRY=localhost:5000 openshift start --listenAddr="0.0.0.0:8080" --volumeDir=`mktemp -d` --etcdBADDir=`mktemp -d`
unknown flag: --listenAddr

Usage:
openshift [flags]
openshift [command]

Available Commands:
start Launch in all-in-one mode
kube The Kubernetes command line client
version Display version
help [command] Help about any command

Available Flags:
--help=false: help for openshift

Use "openshift help [command]" for more information about that command.
Error: unknown flag: --listenAddr
Usage of openshift:
--help=false: help for openshift
unknown flag: --volumeDir
Usage of openshift:
--help=false: help for openshift
unknown flag: --etcdBADDir
Usage of openshift:
--help=false: help for openshift
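The single-report behavior the issue asks for could be sketched like this: pre-scan the argument list, collect every unknown flag, and print usage once at the end. This is an illustrative Go sketch with a made-up collectUnknownFlags helper, not the actual openshift flag-parsing code:

```go
package main

import (
	"fmt"
	"strings"
)

// collectUnknownFlags scans the raw arguments and returns every flag that is
// not in the known set, so all of them can be reported in a single message.
func collectUnknownFlags(args []string, known map[string]bool) []string {
	var unknown []string
	for _, arg := range args {
		if !strings.HasPrefix(arg, "--") {
			continue
		}
		name := strings.TrimPrefix(arg, "--")
		if i := strings.Index(name, "="); i >= 0 {
			name = name[:i]
		}
		if !known[name] {
			unknown = append(unknown, "--"+name)
		}
	}
	return unknown
}

func main() {
	known := map[string]bool{"listenAddr": true, "volumeDir": true}
	args := []string{"--listenAddr=0.0.0.0:8080", "--volumeDir=/tmp", "--etcdBADDir=/tmp"}
	if bad := collectUnknownFlags(args, known); len(bad) > 0 {
		// One combined report, one pointer to help, no repeated usage dumps.
		fmt.Printf("unknown flags: %s\n", strings.Join(bad, ", "))
		fmt.Println(`See "openshift help" for usage.`)
	}
}
```

This prints a single "unknown flags: --etcdBADDir" line instead of re-printing usage per flag.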

Kubernetes issue while installation

Hi,

I am trying to test the V3 installation on Fedora running in Vagrant. I am able to run all the commands up to step 4, but I am facing an issue while running the 5th command, which starts the origin server: "output/go/bin/openshift start"

The error logs I am getting are pasted below; please let me know what the issue could be so that I can test this on my Fedora environment. Thanks

E0828 06:09:35.724207 20545 replication_controller.go:102] Unexpected failure to watch: Get http://127.0.0.1:8080/api/v1beta1/watch/replicationControllers?fields=&labels=&resourceVersion=0: dial tcp 127.0.0.1:8080: connection refused
I0828 06:09:35.718782 20545 master.go:139] Started Kubernetes API at http://127.0.0.1:8080/api/v1beta1
I0828 06:09:35.731120 20545 master.go:140] Started OpenShift API at http://127.0.0.1:8080/osapi/v1beta1
E0828 06:09:35.726152 20545 reflector.go:65] failed to watch *api.Pod: Get http://127.0.0.1:8080/api/v1beta1/watch/pods?fields=DesiredState.Host%21%3D&resourceVersion=0: dial tcp 127.0.0.1:8080: connection refused
E0828 06:09:35.726195 20545 reflector.go:65] failed to watch *api.Pod: Get http://127.0.0.1:8080/api/v1beta1/watch/pods?fields=DesiredState.Host%3D&resourceVersion=0: dial tcp 127.0.0.1:8080: connection refused
E0828 06:09:35.726052 20545 poller.go:63] failed to list: Get http://127.0.0.1:8080/api/v1beta1/minions: dial tcp 127.0.0.1:8080: connection refused

Thanks,
Priya

milestone releases of origin v3 on different platforms (say Linux, OS X, Windows)? say every month or so?

so we're in the process of migrating JBoss Fuse and fabric8 2.x to be based on kubernetes/openshift v3.

It'd really help JBoss Fuse and Fabric8 open source project contributors if there were alpha milestones of Origin V3, so folks could download the openshift origin v3 binary the way they can with etcd and docker, e.g. for Mac, Windows, and Linux.

I understand the OpenShift product beta is a little way off; I wonder if we could have some 'alpha' builds of Origin a little sooner? e.g. just the openshift binary built for a few platforms and available as a download?

Many thanks if you can manage this!

Docker-in-Docker build is fragile to loopback exhaustion

I'm seeing this quite commonly - builds with DinD fail because they can't get a loopback device. Is there some obvious cleanup step we are not doing, or is this path just really fragile?

$ docker logs 235fb847e05c
Attempting to create /dev/loop4 for docker ...
Attempting to create /dev/loop5 for docker ...
2014/09/18 08:49:51 docker daemon: 1.1.2 d84a070/1.1.2; execdriver: native; graphdriver: 
[4eab6c59] +job serveapi(unix:///var/run/docker.sock)
[4eab6c59] +job initserver()
[4eab6c59.initserver()] Creating server
[error] attach_loopback.go:42 There are no more loopback devices available.
loopback mounting failed
[4eab6c59] -job initserver() = ERR (1)
2014/09/18 08:49:51 loopback mounting failed
Docker-in-Docker daemon not accessible

Docker build sourceURI with https:// hangs

There is a problem when building an image with either an https:// prefix or no prefix at all (e.g. github.com/user/repo); in the latter case docker appends https:// for you. This results in the git command waiting for credentials in the default setup, which eventually means our build runs forever.
We should think about either forcing users to use git@ or git:// prefixes, or appending/replacing the prefix ourselves.
Almost forgot to add: this only happens when you provide a wrong URL.
Opinions needed @csrwng @ncdc?
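One possible shape for the pre-build validation discussed above, as a hedged Go sketch (the accepted scheme set and the validateSourceURI name are assumptions, not openshift's actual policy):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// validateSourceURI rejects repository references that git may turn into an
// interactive credential prompt. A real implementation should also probe the
// URL (e.g. git ls-remote) with a timeout before starting the build.
func validateSourceURI(raw string) (string, error) {
	// git@host:path (scp-like) syntax never prompts for HTTP credentials.
	if strings.HasPrefix(raw, "git@") {
		return raw, nil
	}
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	switch u.Scheme {
	case "git", "ssh", "http", "https":
		return raw, nil
	case "":
		return "", fmt.Errorf("scheme-less URI %q: docker would assume https:// and may hang on a credential prompt", raw)
	default:
		return "", fmt.Errorf("unsupported scheme %q", u.Scheme)
	}
}

func main() {
	for _, s := range []string{"git://github.com/user/repo", "github.com/user/repo"} {
		if _, err := validateSourceURI(s); err != nil {
			fmt.Println("rejected:", err)
		} else {
			fmt.Println("accepted:", s)
		}
	}
}
```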

image TestCreateRegistrySaveError is flaky

ok      github.com/openshift/origin/pkg/image/registry/etcd 1.094s  coverage: 96.0% of statements
--- FAIL: TestCreateRegistrySaveError (0.00 seconds)
    rest_test.go:134: Expected failure status, got &{{%!V(string=) %!V(string=) {{%!V(int64=0) %!V(uintptr=0) %!V(*time.Location=<nil>)}} %!V(string=) %!V(uint64=0) %!V(string=) %!V(string=)} %!V(string=Failure) %!V(string=test error) %!V(api.StatusReason=) %!V(*api.StatusDetails=<nil>) %!V(int=500)}
FAIL
FAIL    github.com/openshift/origin/pkg/image/registry/image    0.099s
ok      github.com/openshift/origin/pkg/image/registry/imagerepository  1.137s  coverage: 86.5% of statements

openshift start not working for me

Hi,

As a newbie I have probably done something wrong, but I have spent some time trying to figure it out without success.

I have followed the steps recommended in #245

I have set the following environment variables:

export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/stevef1/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
export OPENSHIFT_HOST=127.0.0.1
export KUBERNETES_MASTER=http://127.0.0.1:8080
export DOCKER_REGISTRY=192.168.59.103:5000

I am running on a Mac (10.9.5) with boot2docker

openshifthost:bin stevef1$ uname -a
Darwin openshifthost 13.4.0 Darwin Kernel Version 13.4.0: Sun Aug 17 19:50:11 PDT 2014; root:xnu-2422.115.4~1/RELEASE_X86_64 x86_64

The errors I am getting on startup are:

openshifthost:bin stevef1$ openshift version
openshift version 0.1, build f8d3cb6
kubernetes v0.3-dev
openshifthost:bin stevef1$ openshift start
I1026 20:08:17.846519 14469 start.go:127] Starting an OpenShift all-in-one, reachable at http://192.168.0.112:8080 (etcd: http://192.168.0.112:4001)
I1026 20:08:17.849127 14469 start.go:141]   Node: openshifthost
I1026 20:08:17.849202 14469 etcd.go:29] Started etcd at 192.168.0.112:4001
[etcd] Oct 26 20:08:17.850 INFO      | openshift.local is starting a new cluster
[etcd] Oct 26 20:08:17.851 INFO      | etcd server [name openshift.local, listen on 0.0.0.0:4001, advertised url http://192.168.0.112:4001]
[etcd] Oct 26 20:08:17.851 INFO      | peer server [name openshift.local, listen on 0.0.0.0:7001, advertised url http://127.0.0.1:7001]
[etcd] Oct 26 20:08:17.851 INFO      | openshift.local starting in peer mode
[etcd] Oct 26 20:08:17.851 INFO      | openshift.local: state changed from 'initialized' to 'follower'.
[etcd] Oct 26 20:08:17.851 INFO      | openshift.local: state changed from 'follower' to 'leader'.
[etcd] Oct 26 20:08:17.851 INFO      | openshift.local: leader changed from '' to 'openshift.local'.
I1026 20:08:17.856145 14469 master.go:74] Started Kubernetes Scheduler
I1026 20:08:17.856173 14469 master.go:65] Started Kubernetes Replication Manager
I1026 20:08:17.856196 14469 controller.go:38] Creating build controller with timeout=2400
I1026 20:08:17.856451 14469 master.go:200] Started Kubernetes API at http://192.168.0.112:8080/api/v1beta1
I1026 20:08:17.856465 14469 master.go:200] Started Kubernetes API at http://192.168.0.112:8080/api/v1beta2
I1026 20:08:17.856470 14469 master.go:200] Started OAuth2 API at http://192.168.0.112:8080/oauth
I1026 20:08:17.856474 14469 master.go:200] Started login server at http://192.168.0.112:8080/login
I1026 20:08:17.856479 14469 master.go:202] Started OpenShift API at http://192.168.0.112:8080/osapi/v1beta1
I1026 20:08:17.856687 14469 master.go:235] Started OpenShift static asset server at http://192.168.0.112:8081
E1026 20:08:17.874433 14469 healthy_registry.go:77] openshifthost failed health check with error: Get http://openshifthost:10250/healthz: dial tcp 192.168.0.112:10250: connection refused
E1026 20:08:17.882790 14469 node.go:51] WARNING: Docker could not be reached at tcp://192.168.59.103:2376.  Docker must be installed and running to start containers.
Get http://192.168.59.103:2376/_ping: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
I1026 20:08:17.882868 14469 node.go:115] Started Kubernetes Proxy on 0.0.0.0
E1026 20:08:17.883145 14469 docker.go:86] Unable to parse Docker config file: unexpected end of JSON input
I1026 20:08:17.883166 14469 node.go:98] Started Kubelet for node openshifthost, server at 0.0.0.0:10250

$boot2docker version
Boot2Docker-cli version: v1.3.0
Git commit: deafc19

openshifthost:bin stevef1$ docker info
Containers: 2
Images: 46
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Dirs: 50
Execution Driver: native-0.2
Kernel Version: 3.16.4-tinycore64
Operating System: Boot2Docker 1.3.0 (TCL 5.4); master : a083df4 - Thu Oct 16 17:05:03 UTC 2014
Debug mode (server): true
Debug mode (client): false
Fds: 20
Goroutines: 22
EventsListeners: 0
Init Path: /usr/local/bin/docker

openshifthost:bin stevef1$ go version
go version go1.2 darwin/amd64

openshifthost:kubernetes stevef1$ docker version
Client version: 1.3.0
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): c78088f
OS/Arch (client): darwin/amd64
Server version: 1.3.0
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): c78088f

openshifthost:kubernetes stevef1$ boot2docker ssh
                        ##        .
                  ## ## ##       ==
               ## ## ## ##      ===
           /""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
           \______ o          __/
             \    \        __/
              \____\______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
boot2docker: 1.3.0
             master : a083df4 - Thu Oct 16 17:05:03 UTC 2014
docker@boot2docker:~$ 

I added these two lines to /etc/hosts:

192.168.0.112 openshifthost
192.168.59.103 dockerhost

Any ideas?

Deployments POC next-steps

This issue collects the list of next steps for the deployments POC:

  • Refactor the controllers to the model used by the kubernetes scheduler, which uses the client/cache Reflector, Store, and Queue types.
  • Make correct use of namespaced contexts where appropriate
  • Improve unit and integration test coverage
  • Add appropriate godoc
  • Update API docs
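The store-plus-queue pattern from the first bullet can be sketched with a minimal keyed FIFO; this is a simplified stand-in for illustration, not the real client/cache API:

```go
package main

import "fmt"

// fifo is a minimal stand-in for the client/cache FIFO used by the
// kubernetes scheduler: a keyed store whose Pop yields items in arrival
// order. The real Reflector fills such a queue from a watch; here items
// are added by hand.
type fifo struct {
	items map[string]interface{}
	queue []string
}

func newFIFO() *fifo {
	return &fifo{items: map[string]interface{}{}}
}

func (f *fifo) Add(key string, obj interface{}) {
	if _, exists := f.items[key]; !exists {
		f.queue = append(f.queue, key)
	}
	f.items[key] = obj // later Adds overwrite; the queue keeps one slot per key
}

func (f *fifo) Pop() (string, interface{}, bool) {
	if len(f.queue) == 0 {
		return "", nil, false
	}
	key := f.queue[0]
	f.queue = f.queue[1:]
	obj := f.items[key]
	delete(f.items, key)
	return key, obj, true
}

func main() {
	q := newFIFO()
	q.Add("deploy-1", "pending")
	q.Add("deploy-2", "pending")
	q.Add("deploy-1", "updated") // coalesced with the queued entry
	for {
		key, obj, ok := q.Pop()
		if !ok {
			break
		}
		fmt.Println(key, obj)
	}
}
```

The coalescing on Add is the property that makes this queue suitable for a controller: a burst of updates to one object is processed once, with the latest state.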

running one of the examples against an OS X build of openshift v3 doesn't seem to work

I just built master of openshift origin V3; ran openshift start.

then I did this based on the README.md

cd examples/guestbook
origin/examples/guestbook/$ openshift kube process -c template.json | openshift kube apply -c -
F1006 18:36:28.464539 52838 kubecfg.go:496] failed to process template: request [&http.Request{Method:"POST", URL:(*url.URL)(0xc2080eabd0), Proto:"HTTP/1.1", ProtoMajor:1, ProtoMinor:1, Header:http.Header{}, Body:ioutil.nopCloser{Reader:(*bytes.Buffer)(0xc2080eaa80)}, ContentLength:4193, TransferEncoding:[]string(nil), Close:false, Host:"127.0.0.1:8080", Form:url.Values(nil), PostForm:url.Values(nil), MultipartForm:(*multipart.Form)(nil), Trailer:http.Header(nil), RemoteAddr:"", RequestURI:"", TLS:(*tls.ConnectionState)(nil)}] failed (404) 404 Not Found: Not Found: "/v1beta1/templateConfigs"
Config.items is empty

Is this a known issue? I've found that using vanilla kube examples for pods didn't seem to work either. I wonder if it's known, an OS X issue, or just a bad example?

Should add an introspection endpoint to know which types are under which api paths

Currently a UI will have to know whether any given type is under the /api or the /osapi paths. It would be better to be able to introspect a URL on /osapi that tells us the types and the paths for those types.

There may be other things we want to introspect later. We may also want to add a similar introspection endpoint to upstream kubernetes.

Deployments should track what triggers created them

When a deployment object is created, it should also store the triggers that resulted in its creation. This information should be as detailed as possible. For example, for a trigger caused by a new image being pushed to a tag on an image repo, the trigger info on the deployment should record that it was the result of a new image being available, the image id that triggered it, and the image repo and tag that were being tracked.

a single openshift 'apply json' command line that can accept templates, configs, pods, services, replication controllers?

Right now the CLI is very different based on what's inside the JSON.

It'd be nice to just have 1 command to apply some JSON. Say....

$ openshift apply -c foo.json

which then applies configs, pods, services, replication controllers (maybe even templates too?). It'd make the CLI a little easier to type too ;).

i.e. the command looks inside the json to see the kind; then delegates to the underlying openshift kube command based on the shape of the json?
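Peeking at the kind field to pick a handler is straightforward; this hedged sketch only does the inspection and fakes the delegation (a real implementation would fan out to the existing per-kind commands):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// kindOf peeks at the "kind" field so a single apply command could
// dispatch to the right handler.
func kindOf(data []byte) (string, error) {
	var probe struct {
		Kind string `json:"kind"`
	}
	if err := json.Unmarshal(data, &probe); err != nil {
		return "", err
	}
	if probe.Kind == "" {
		return "", fmt.Errorf("document has no kind field")
	}
	return probe.Kind, nil
}

func main() {
	doc := []byte(`{"kind": "Service", "id": "frontend"}`)
	kind, err := kindOf(doc)
	if err != nil {
		panic(err)
	}
	switch kind {
	case "Pod", "Service", "ReplicationController":
		fmt.Println("delegate to the kube subcommand for", kind)
	case "Template":
		fmt.Println("process the template first, then apply the result")
	default:
		fmt.Println("unknown kind:", kind)
	}
}
```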

Add design document for deployments

Create a design document for deployments under docs/deployments.md (TBD) which describes the deployment system in terms of the actual API that was introduced. Diagrams would be nice.

Update LBManager to use client/cache

The LBManager type in pkg/router/lbmanager should be refactored to use upstream client/cache reflector and FIFO to process Services and Endpoints.

openshift cloud driver in kubernetes

I know we're attempting to run openshift on openshift, where the openshift components are deployed via kubernetes.

The flip side of this is: if you have an openshift 3.0 cluster running and functional, you should be able to deploy a kubernetes pod configuration directly onto it via the kube.sh scripts in the kubernetes source tree, almost like azure or gce, but not quite, in the sense that those cloud drivers are IaaS drivers.

A little meta, I know, but I'd like feedback on this.

Host System self-referencing requires FQDN to resolve (correctly)

Following the guide listed at:

https://blog.openshift.com/openshift-v3-deep-dive-docker-kubernetes/

may result in a non-functional environment which, at the stage of applying the Docker Registry config, ends up in an infinite loop attempting to allocate a healthy Docker host(?).

After a little investigation, error messages disclose:

E1027 13:17:38.626580 04954 pod_cache.go:84] Error synchronizing container: Get http://:10250/podInfo?podID=deploy-docker-registry: dial tcp :10250: connection refused

as well as:

E1027 13:26:21.350905 05232 healthy_registry.go:77] some.domain.tld failed health check with error: Get http://some.domain.tld:10250/healthz: dial tcp w.x.y.z:10250: i/o timeout

where some.domain.tld should not be resolving to a w.x.y.z IP address.

While it is (in hindsight) abundantly clear why the system's FQDN should resolve properly, it is not mentioned anywhere in the docs, and other utilities and configuration seem to all use IP addresses -- so this one component requiring proper DNS resolution in order to self-reference was a little surprising.

what's the canonical way to wire the api server REST endpoint into pods/containers via services or env vars?

So we've got a console in fabric8 that works off the kubernetes/openshift REST API to view what's running etc.

I can imagine a few other containers needing access (e.g. management / monitoring containers, for activemq clusters to view all the containers present so they can store/forward between them or custom auto-scalers etc).

Right now we've hard coded a host/port into our config.json which feels kinda dirty ;).
https://github.com/fabric8io/fabric8/blob/2.0/apps/fabric8.json#L32

I wonder is there a more canonical way to make it available to containers without having to make everything a template? e.g. via an environment variable (like we can use to find the services) or as a kind of kubernetes service (if there was maybe a magic port number that kube-proxy could wire to)?

Is there already some way to do this; or is it intended for JSON files to be specific to a single openshift environment - so hard coding of the API Server is OK?

Use a type field in all union types

A number of union types in the API rely on pointer nilness inference to determine the kind of the type instance, for example:

type BuildInput struct {
  // DockerBuild represents build parameters specific to docker build
  DockerInput *DockerBuildInput `json:"dockerInput,omitempty" yaml:"dockerInput,omitempty"`

  // STIBuild represents build parameters specific to STI build
  STIInput *STIBuildInput `json:"stiInput,omitempty" yaml:"stiInput,omitempty"`
}

Instead there should be a type field in these types:

type BuildInputType string

const (
    BuildInputTypeDocker BuildInputType = "docker"
    BuildInputTypeSTI BuildInputType = "sti"
)

Services not always reachable on first request

Running the example/simple-ruby-demo I frequently see failures on the first request to a service (both when the build attempts to push an image to the docker registry, which is the first call to the registry, and when you attempt to curl the example application at the end).

It's not consistent, and the problem always goes away after the first request. As you can see, I've added logic to work around this here:
https://github.com/bparees/origin/blob/deploy/examples/simple-ruby-app/run.sh#L50

But we need to investigate this. It's more likely it's an issue in k8s but I wanted to start it here before causing a firedrill.

@markturansky, @smarterclayton suggested you as someone who could investigate this...ping me if you need more info on how to recreate it.

Port conflict results in infinite loop & high CPU

I had a jetty server running on port 8080; when starting the v3 code there is a port conflict w/ kubernetes. Instead of failing gracefully it results in an infinite loop and high CPU. Eventually the process gets killed.

On startup, the v3 code should instead display an error and exit gracefully.

Error scheduling $id: The assignment would cause a constraint violation; retrying

On OS X, if I stop OpenShift V3, clear the local files and docker containers, and restart it:

killall openshift
rm -rf openshift.local.*
docker kill $(docker ps -q)
openshift start --listenAddr=$OPENSHIFT_HOST:8080

and then run the guestbook example:

cd examples/guestbook
openshift kube process -c template.json | openshift kube apply -c -

I get these errors a lot in the log...

E1013 10:18:24.031843 81093 factory.go:162] Error scheduling 91d03f58-52b9-11e4-98bc-3c0754574d78: The assignment would cause a constraint violation; retrying

I've seen these errors a bit running various stuff in the past. I found if I trashed my boot2docker vm and recreated it they went away so I guess there's something else that needs clearing down somewhere?

FWIW it'd be nice to log the actual constraint violation too ;)

I took a quick peek into the code; I wondered if this is the constraint that's being violated:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/constraint/ports.go#L33

if so, it'd be very handy to log the actual host port that's causing the issue. (In most examples there are quite a few ports in use; it's hard to know which one is the problem.)

As to why a port is conflicting I'm less sure; I guess something seems to think a port is still assigned after restarting openshift?
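The suggested improvement, logging the specific conflicting host port, could look roughly like this; it is a simplified stand-in for the ports.go constraint, not the kubernetes code:

```go
package main

import "fmt"

// hostPortConflict records host ports per pod and reports the first port
// that two pods both request, naming both the port and the prior owner.
func hostPortConflict(used map[int]string, podName string, ports []int) error {
	for _, p := range ports {
		if other, taken := used[p]; taken {
			return fmt.Errorf("cannot schedule %s: host port %d already used by %s", podName, p, other)
		}
		used[p] = podName
	}
	return nil
}

func main() {
	used := map[int]string{}
	hostPortConflict(used, "frontend", []int{5432, 8080})
	if err := hostPortConflict(used, "guestbook", []int{8080}); err != nil {
		fmt.Println(err)
	}
}
```

With an error message of this shape, the "constraint violation; retrying" log would immediately tell you which port to free up.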

Builders POC

As a follow-up to the discussions in #115 and #140, I'm creating this issue to gather requirements and ideas regarding our builder scripts.

Overview

Currently builder images (see docker-builder and sti-builder) contain a single script responsible for doing builds. The build process itself consists of the following steps:

  1. Check if docker is available.
  2. Build the image.
  3. Optionally push the resulting image to a repository.

Requirements

By now we've identified a couple of requirements to make this mechanism more reliable and, more importantly, user-friendly:

  • more reliable than bash and curl (see current solution)
  • easy to modify & extend, production ready
  • validate context URL for docker build before starting actual build (see #115)
  • provide readable logs describing validation and build process

Possible solutions

For now I'm proposing to use something similar to what we already have in kube-deploy. The rationale: it's written in Go with a docker client, both already used in the project.
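The three build steps from the overview, restructured as a small Go program with a fake runner for testing; the builder interface and fake are illustrative, not the kube-deploy code:

```go
package main

import "fmt"

// builder captures the three steps from the overview behind an interface,
// so each step can be validated, logged, and tested independently.
type builder interface {
	CheckDocker() error
	BuildImage(tag string) error
	PushImage(tag string) error
}

func runBuild(b builder, tag string, push bool) error {
	if err := b.CheckDocker(); err != nil {
		return fmt.Errorf("docker not available: %v", err)
	}
	if err := b.BuildImage(tag); err != nil {
		return fmt.Errorf("build failed: %v", err)
	}
	if push {
		if err := b.PushImage(tag); err != nil {
			return fmt.Errorf("push failed: %v", err)
		}
	}
	return nil
}

// fakeBuilder records the steps taken, standing in for a docker-backed one.
type fakeBuilder struct{ log []string }

func (f *fakeBuilder) CheckDocker() error          { f.log = append(f.log, "check"); return nil }
func (f *fakeBuilder) BuildImage(tag string) error { f.log = append(f.log, "build "+tag); return nil }
func (f *fakeBuilder) PushImage(tag string) error  { f.log = append(f.log, "push "+tag); return nil }

func main() {
	f := &fakeBuilder{}
	if err := runBuild(f, "myapp:latest", true); err != nil {
		panic(err)
	}
	fmt.Println(f.log)
}
```

An interface boundary like this also satisfies the "easy to modify & extend" requirement: validation of the context URL would slot in as another step before BuildImage.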

Switch to Godeps

Upstream has integrated Godeps - we need to have a script (maybe build-go.sh) that checks that the user has the openshift/kubernetes fork as a remote in godeps and that it can be checked out based on our fork commit version (which requires at least a fetch).

Change default behaviors for deployment triggers

Currently there is a manual trigger type, and you must specify that a config change trigger should be automatic. This issue is to:

  1. Change the behavior of the config change trigger such that the default is automatic. You should have to opt-out of automatic config change deployments instead of opting into them.

is there a way for OpenShift running on an OS X host to see the pod IPs created via docker (using boot2docker)?

So I can build OpenShift on my OS X box and run the integration tests (hack/test-cmd.sh) and things work pretty well; docker images, pods, services, replication controllers are all created etc. Yay! :)

However I get output like this in the openshift console: #180 (comment) since the host cannot ping the pod's IP (though I can ping it from inside boot2docker).

I wonder if we can figure out a way of making some kind of network on OS X (or IP address range configuration) so we can see pod IP addresses when running on platforms like OS X; then from the host we'd be able to see the pod IPs addresses (and it'd avoid those errors in the OpenShift console).

I'm thinking here of developers who want to just develop on their host and work with a standalone OpenShift directly.

client.JSONPrinter{} should not sort the JSON items

client.JSONPrinter{}, as used in the openshift kube process -c template.json command, should preserve the order of the template's JSON items for better readability.

It seems the json.MarshalIndent() method used at https://github.com/openshift/origin/blob/master/pkg/cmd/client/json_printer.go#L20 sorts the items automatically. We need to either reimplement this method or figure out how to turn off the sorting.

Also, I'd expect a '\n' character at the end of the resulting JSON output.

/cc @mfojtik @bparees
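For reference, Go's json.Indent reformats the raw bytes token by token, so it keeps the original key order that json.MarshalIndent on a map loses; a sketch of the suggested reimplementation:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// prettyPreservingOrder indents the raw JSON without round-tripping through
// Go maps, so the template's key order survives, and appends the trailing
// newline the issue also asks for.
func prettyPreservingOrder(src []byte) (string, error) {
	var buf bytes.Buffer
	if err := json.Indent(&buf, src, "", "  "); err != nil {
		return "", err
	}
	buf.WriteByte('\n')
	return buf.String(), nil
}

func main() {
	src := []byte(`{"id": "guestbook", "kind": "Template", "apiVersion": "v1beta1"}`)

	// Round-tripping through a map sorts keys (apiVersion, id, kind):
	var m map[string]interface{}
	json.Unmarshal(src, &m)
	sorted, _ := json.MarshalIndent(m, "", "  ")
	fmt.Println(string(sorted))

	// json.Indent keeps the original order (id, kind, apiVersion):
	ordered, _ := prettyPreservingOrder(src)
	fmt.Print(ordered)
}
```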

test-cmd.sh returning errors in Travis

kube(imageRepositoryMappings): ok
I0911 19:22:25.266549 81649 log.go:134] GET /version: (15.812us) 200
The resource Service is not a known type - unable to create frontend
The resource Service is not a known type - unable to create redismaster
The resource Service is not a known type - unable to create redisslave
The resource Pod is not a known type - unable to create redis-master-2
The resource ReplicationController is not a known type - unable to create frontendController
The resource ReplicationController is not a known type - unable to create redisSlaveController
kube(config): ok

Connection refused - Health check

If I run openshift after a fresh build of the master branch, I see this error message. Is there a workaround?

E1022 08:57:13.352381 19373 pod_cache.go:84] Error synchronizing container: Get http://openshifthost:10250/podInfo?podID=hello-openshift: dial tcp 192.168.1.80:10250: connection refused
E1022 08:57:13.352930 19373 healthy_registry.go:77] openshifthost failed health check with error: Get http://openshifthost:10250/healthz: dial tcp 192.168.1.80:10250: connection refused

Add api.Context to client

Every API call should take an api.Context parameter as the first argument to comply with the upstream client pattern and enable namespace support for OpenShift resources.

Missing Godoc for several pkgs

https://godoc.org/github.com/openshift/origin

  • pkg/build/api
  • pkg/build/api/v1beta1
  • pkg/build/api/validation
  • pkg/build/registry/build
  • pkg/build/registry/buildconfig
  • pkg/build/registry/etcd
  • pkg/build/registry/test
  • pkg/build/strategy
  • pkg/client
  • pkg/cmd/client/api
  • pkg/cmd/client/build
  • pkg/cmd/client/image
  • pkg/cmd/util/docker
  • pkg/config
  • pkg/config/api/v1beta1
  • pkg/deploy/api
  • pkg/deploy/api/v1beta1
  • pkg/deploy/api/validation
  • pkg/deploy/client
  • pkg/deploy/registry/deploy
  • pkg/deploy/registry/deployconfig
  • pkg/deploy/registry/etcd
  • pkg/deploy/registry/test
  • pkg/image/api
  • pkg/image/api/v1beta1
  • pkg/image/api/validation
  • pkg/image/registry/etcd
  • pkg/image/registry/image
  • pkg/image/registry/imagerepository
  • pkg/image/registry/imagerepositorymapping
  • pkg/image/registry/test
  • pkg/template/api/v1beta1

Project does not build - rest.go:53: undefined: apiserver.CreateOrUpdate

When I try to build the master branch (commit: 92b5b47), the following error is reported:

 # github.com/openshift/origin/pkg/client
_output/go/src/github.com/openshift/origin/pkg/client/user.go:30: c.RESTClient.Put().Path("userIdentityMappings").Path("-").Body(mapping).Do().IntoCreated undefined (type client.Result has no field or method IntoCreated)
# github.com/openshift/origin/pkg/user/registry/useridentitymapping
_output/go/src/github.com/openshift/origin/pkg/user/registry/useridentitymapping/rest.go:53: undefined: apiserver.CreateOrUpdate

[Question] How to configure minions?

Hi guys, can you help me understand how to configure the Kubernetes layer to define minions?

I have tried setting MINION_ADDRESS without success.
