cli's People

Contributors

aelmehdi, benmoss, dependabot-preview[bot], dependabot[bot], drnic, ekcasey, ericbottard, fbiville, glyn, jchesterpivotal, jldec, joshrider, making, markfisher, matthewmcnew, mcowger, scothis, spring-operator, trisberg, zhitongliu

cli's Issues

Change --env-from to be --env-value-from

If we change --env-from to be --env-value-from, then the flag would better match the Kubernetes API:

- env:
  - name: MY_SECRET_VALUE
    valueFrom:
      secretKeyRef:
        key: key-in-secret
        name: my-secret-name

This is part of the suggested changes in #39.

Gracefully handle flapping Ready condition when tailing after create

When using --tail with a create command, the CLI watches the created resource for its Ready condition to transition from Unknown to either True or False. This normally works well; however, the condition sometimes flaps to a different state before settling on its "final" state, causing the CLI to exit with an incorrect status.

We should introduce a delay (between 1 and 5 seconds) after we first see the condition change, to verify that the new state is stable rather than transient. This has the fringe benefit of giving lagging logs more time to be captured.

I'm a bit hesitant to introduce this change given that it will add more complexity to an area of the codebase that already experiences deadlocks at test time (see #68).
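For concreteness, a minimal sketch of the settle delay (names are hypothetical, not the riff implementation): keep re-arming a timer until the observed status has held steady for the settle window.

package k8s

import (
	"context"
	"time"
)

type ConditionStatus string

// awaitStable polls the latest observed status and returns it only once it
// has held steady for the settle window (1-5s per the suggestion above), or
// returns the last observed value if the context is cancelled first.
func awaitStable(ctx context.Context, latest func() ConditionStatus, settle time.Duration) ConditionStatus {
	observed := latest()
	for {
		select {
		case <-ctx.Done():
			return observed
		case <-time.After(settle):
		}
		if next := latest(); next != observed {
			// still flapping; restart the settle window
			observed = next
			continue
		}
		return observed
	}
}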

Lookup each namespace in riff doctor

The riff doctor command currently lists namespaces and then checks to see if that listing contains certain namespaces. This will break once there are enough namespaces to cause the listing to paginate. Instead of using the client's List method, we should call Get for each namespace we want to check.
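A sketch of the Get-based check (checkNamespaces is a hypothetical helper; the context-taking Get signature assumes client-go v0.18 or newer):

import (
	"context"

	apierrs "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkNamespaces issues one Get per namespace, which is immune to pagination.
func checkNamespaces(ctx context.Context, client kubernetes.Interface, names []string) (map[string]bool, error) {
	found := map[string]bool{}
	for _, name := range names {
		_, err := client.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
		switch {
		case apierrs.IsNotFound(err):
			found[name] = false
		case err != nil:
			return nil, err
		default:
			found[name] = true
		}
	}
	return found, nil
}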

Optimize resource access checks in `riff doctor`

Currently, riff issues one SelfSubjectAccessReview per action verb per resource, which amounts to a whopping 50+ requests sent to the cluster.

This could be replaced with a single SelfSubjectRulesReview.
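A sketch of the single-request approach (allowedRules is a hypothetical helper, assuming a recent client-go):

import (
	"context"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allowedRules fetches, in one round trip, every rule that applies to the
// current user in the given namespace; callers can then check each
// verb/resource pair locally instead of issuing one review per pair.
func allowedRules(ctx context.Context, client kubernetes.Interface, ns string) ([]authorizationv1.ResourceRule, error) {
	review := &authorizationv1.SelfSubjectRulesReview{
		Spec: authorizationv1.SelfSubjectRulesReviewSpec{Namespace: ns},
	}
	result, err := client.AuthorizationV1().SelfSubjectRulesReviews().Create(ctx, review, metav1.CreateOptions{})
	if err != nil {
		return nil, err
	}
	return result.Status.ResourceRules, nil
}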

CLI - system compatibility warning

As a riff user, I would like a warning when I try to use an older version of the CLI on a cluster with a newer set of riff system CRDs and controllers, especially after any additions or changes to those interfaces.

Support kustomize for overlays with `riff * create` commands

Kustomize is a common way to apply extra behavior to a resource before applying it to the API Server. Kubectl does this with kubectl create ... -k kustomization_dir. We can add the same behavior for riff * create commands.

This behavior may be particularly useful in situations where defining a resource entirely from CLI switches is desirable, or where there is a common element that should be reused across multiple resources.

See https://kustomize.io

Allow CLI to be built with specific runtimes

It may be desirable for the CLI to not support a given runtime. We should expose an ldflag that can enable/disable each runtime. Custom builds of riff can then pick and choose which runtimes to integrate.

By default for 0.4, we should turn on the core and knative runtimes and turn off the streaming runtime.
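One way to wire this up (a sketch; the variable names and package path are hypothetical) is to expose package-level strings that -ldflags "-X ..." can override at link time:

// pkg/cli/runtimes.go (hypothetical location)
package cli

// Overridable at build time, e.g.:
//   go build -ldflags "-X github.com/projectriff/riff/pkg/cli.streamingRuntime=enabled" ./cmd/riff
var (
	coreRuntime      = "enabled"
	knativeRuntime   = "enabled"
	streamingRuntime = "disabled" // matches the proposed 0.4 default
)

func runtimeEnabled(flag string) bool {
	return flag == "enabled"
}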

[testing] command_table panics with some k8s objects

Currently, the objects are hashed as follows (see pkg/testing/command_table.go):

func objKey(o runtime.Object) string {
	on := o.(kmeta.Accessor)
	// namespace + name is not unique, and the tests don't populate k8s kind
	// information, so use GoLang's type name as part of the key.
	return path.Join(reflect.TypeOf(o).String(), on.GetNamespace(), on.GetName())
}

This is not guaranteed to work, as runtime.Object does not implement kmeta.Accessor (though many runtime.Object subtypes do).
Case in point: I need to set up some *APIResourceList instances in GivenObjects, but this is currently not possible as *APIResourceList, which is a runtime.Object, does not implement kmeta.Accessor.
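One possible fix (a sketch, not a committed design) is to make the type assertion non-fatal and fall back to keying by type alone:

func objKey(o runtime.Object) string {
	ns, name := "", ""
	// Objects like *APIResourceList are runtime.Objects but not
	// kmeta.Accessors; key them by Go type alone instead of panicking.
	if on, ok := o.(kmeta.Accessor); ok {
		ns, name = on.GetNamespace(), on.GetName()
	}
	return path.Join(reflect.TypeOf(o).String(), ns, name)
}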

Make the use of environment variables for handler more like core k8s

In the k8s PodSpec there are two ways of setting env vars: env and envFrom. The latter is used like this:

      envFrom:
      - secretRef:
          name: my-secret
        prefix: mydb

and this results in all keys in the configMap/secret being added to the environment, with the provided prefix if one was specified. See "Configure all key-value pairs in a Secret as container environment variables" in the Kubernetes docs.

We don't have a way of supporting this right now. We should adjust our --env and --env-from support to better match this.

Suggestion:

  • Keep what we have for --env

    example:

    --env MY_VAR=my-value

    results in:

    - env:
      - name: MY_VAR
        value: my-value
    
  • Change --env-from to be --env-value-from

    examples:

    --env-value-from MY_SECRET_VALUE=secretKeyRef:my-secret-name:key-in-secret

    results in:

    - env:
      - name: MY_SECRET_VALUE
        valueFrom:
          secretKeyRef:
            key: key-in-secret
            name: my-secret-name
    

    --env-value-from MY_CONFIG_MAP_VALUE=configMapKeyRef:my-config-map-name:key-in-config-map

    results in:

    - env:
      - name: MY_CONFIG_MAP_VALUE
        valueFrom:
          configMapKeyRef:
            key: key-in-config-map
            name: my-config-map-name
    
  • Add support for envFrom by adding a --env-from-source flag

    examples:

    --env-from-source configMapRef:my-config-map-name

    results in:

    - envFrom:
      - configMapRef:
          name: my-config-map-name
    

    --env-from-source prefix_:secretRef:my-secret-name

    results in:

    - envFrom:
      - prefix: prefix_
        secretRef:
          name: my-secret-name
    
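For concreteness, parsing the proposed --env-value-from value into a corev1.EnvVar could look roughly like this (parseEnvValueFrom is a hypothetical helper, not part of the riff codebase):

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// parseEnvValueFrom turns "NAME={secretKeyRef|configMapKeyRef}:resource:key"
// into a corev1.EnvVar with a ValueFrom source.
func parseEnvValueFrom(arg string) (corev1.EnvVar, error) {
	nv := strings.SplitN(arg, "=", 2)
	if len(nv) != 2 {
		return corev1.EnvVar{}, fmt.Errorf("expected NAME=kind:resource:key, got %q", arg)
	}
	ref := strings.SplitN(nv[1], ":", 3)
	if len(ref) != 3 {
		return corev1.EnvVar{}, fmt.Errorf("expected kind:resource:key, got %q", nv[1])
	}
	env := corev1.EnvVar{Name: nv[0], ValueFrom: &corev1.EnvVarSource{}}
	switch ref[0] {
	case "secretKeyRef":
		env.ValueFrom.SecretKeyRef = &corev1.SecretKeySelector{
			LocalObjectReference: corev1.LocalObjectReference{Name: ref[1]},
			Key:                  ref[2],
		}
	case "configMapKeyRef":
		env.ValueFrom.ConfigMapKeyRef = &corev1.ConfigMapKeySelector{
			LocalObjectReference: corev1.LocalObjectReference{Name: ref[1]},
			Key:                  ref[2],
		}
	default:
		return corev1.EnvVar{}, fmt.Errorf("unknown source kind %q", ref[0])
	}
	return env, nil
}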

riff streaming processor tail ... shows logs for both containers

The tail command for a streaming processor is very useful/convenient for, say, observing messages on a stream, but it currently mixes the logs from both containers.

It would be helpful if it defaulted to showing only the log from the user/function container. One "fix" might be to make the logs from the processor container opt-in only, e.g. via a -v/--verbose flag when the processor is created.

$ riff streaming processor tail echo-out
default/echo-out-processor-96mrv-6b578cf754-5pg84[processor]: ACKing default_out for group echo-out: offset=813, part=0
default/echo-out-processor-96mrv-6b578cf754-5pg84[processor]: ACKing default_out for group echo-out: offset=814, part=0
default/echo-out-processor-96mrv-6b578cf754-5pg84[processor]: ACKing default_out for group echo-out: offset=815, part=0
default/echo-out-processor-96mrv-6b578cf754-5pg84[function]: echo 156 squared = 24336
default/echo-out-processor-96mrv-6b578cf754-5pg84[processor]: ACKing default_out for group echo-out: offset=816, part=0
default/echo-out-processor-96mrv-6b578cf754-5pg84[function]: echo 398 squared = 158404
default/echo-out-processor-96mrv-6b578cf754-5pg84[function]: echo 997 squared = 994009

Introduce "status" commands for riff resources

It would be nice to be able to see the status of a newly created resource without having to use kubectl describe.

We should add riff <resource> status <name> commands where the resource could be function or application.

Deadlock in github.com/projectriff/riff/pkg/k8s.TestWaitUntilReady

https://dev.azure.com/projectriff/projectriff/_build/results?buildId=1511&view=logs&jobId=2ccaf87a-3ada-5e59-2fbc-52fdec550bf7&taskId=082b1770-da04-54e1-54f6-77a04cd03fbe&lineStart=266&lineEnd=267&colStart=1&colEnd=1

2019-06-12T12:23:55.6514040Z === RUN   TestWaitUntilReady/transitions_true
2019-06-12T12:23:55.6514370Z SIGQUIT: quit
2019-06-12T12:23:55.6514590Z PC=0x7fff645bfa16 m=0 sigcode=0
2019-06-12T12:23:55.6514770Z 
2019-06-12T12:23:55.6515110Z goroutine 0 [idle]:
2019-06-12T12:23:55.6515370Z runtime.pthread_cond_wait(0x3fe4cc8, 0x3fe4c88, 0x7ffe00000000)
2019-06-12T12:23:55.6515860Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/runtime/sys_darwin.go:357 +0x3b
2019-06-12T12:23:55.6516170Z runtime.semasleep(0xffffffffffffffff, 0x0)
2019-06-12T12:23:55.6516380Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/runtime/os_darwin.go:63 +0x85
2019-06-12T12:23:55.6516700Z runtime.notesleep(0x3fe4a88)
2019-06-12T12:23:55.6517310Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/runtime/lock_sema.go:167 +0xe0
2019-06-12T12:23:55.6517620Z runtime.stopm()
2019-06-12T12:23:55.6518110Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/runtime/proc.go:1936 +0xc1
2019-06-12T12:23:55.6518730Z runtime.findrunnable(0xc000052a00, 0x0)
2019-06-12T12:23:55.6519010Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/runtime/proc.go:2399 +0x530
2019-06-12T12:23:55.6519350Z runtime.schedule()
2019-06-12T12:23:55.6519620Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/runtime/proc.go:2525 +0x20e
2019-06-12T12:23:55.6519870Z runtime.park_m(0xc00008a480)
2019-06-12T12:23:55.6520220Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/runtime/proc.go:2605 +0xa1
2019-06-12T12:23:55.6520470Z runtime.mcall(0x1094c9b)
2019-06-12T12:23:55.6520810Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/runtime/asm_amd64.s:299 +0x5b
2019-06-12T12:23:55.6521060Z 
2019-06-12T12:23:55.6521290Z goroutine 1 [chan receive, 9 minutes]:
2019-06-12T12:23:55.6521640Z testing.(*T).Run(0xc000634000, 0x2c5512a, 0x12, 0x2cdf1d8, 0x1)
2019-06-12T12:23:55.6521900Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/testing/testing.go:917 +0x6d2
2019-06-12T12:23:55.6522240Z testing.runTests.func1(0xc000634000)
2019-06-12T12:23:55.6522520Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/testing/testing.go:1157 +0xa9
2019-06-12T12:23:55.6522860Z testing.tRunner(0xc000634000, 0xc0005f9d48)
2019-06-12T12:23:55.6523130Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/testing/testing.go:865 +0x164
2019-06-12T12:23:55.6523390Z testing.runTests(0xc0000cc260, 0x3fcbe80, 0x2, 0x2, 0x0)
2019-06-12T12:23:55.6523750Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/testing/testing.go:1155 +0x524
2019-06-12T12:23:55.6524000Z testing.(*M).Run(0xc000155b00, 0x0)
2019-06-12T12:23:55.6524330Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/testing/testing.go:1072 +0x2ec
2019-06-12T12:23:55.6524600Z main.main()
2019-06-12T12:23:55.6524930Z 	_testmain.go:96 +0x335
2019-06-12T12:23:55.6525190Z 
2019-06-12T12:23:55.6525420Z goroutine 19 [chan receive]:
2019-06-12T12:23:55.6525760Z github.com/golang/glog.(*loggingT).flushDaemon(0x3fe3740)
2019-06-12T12:23:55.6526030Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/github.com/golang/glog/glog.go:882 +0xae
2019-06-12T12:23:55.6526400Z created by github.com/golang/glog.init.0
2019-06-12T12:23:55.6526670Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/github.com/golang/glog/glog.go:410 +0x31d
2019-06-12T12:23:55.6526910Z 
2019-06-12T12:23:55.6527250Z goroutine 5 [runnable]:
2019-06-12T12:23:55.6527500Z sync.(*RWMutex).Lock(0x4001650)
2019-06-12T12:23:55.6527840Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/sync/rwmutex.go:87 +0xea
2019-06-12T12:23:55.6528120Z go.opencensus.io/stats/view.(*worker).reportView(0xc00012c0f0, 0xc00012a6c0, 0xbf385996e3efebb0, 0x8bba205a41, 0x3fe3480)
2019-06-12T12:23:55.6528480Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/go.opencensus.io/stats/view/worker.go:231 +0x354
2019-06-12T12:23:55.6528780Z go.opencensus.io/stats/view.(*worker).reportUsage(0xc00012c0f0, 0xbf385996e3efebb0, 0x8bba205a41, 0x3fe3480)
2019-06-12T12:23:55.6529260Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/go.opencensus.io/stats/view/worker.go:240 +0x11a
2019-06-12T12:23:55.6529590Z go.opencensus.io/stats/view.(*worker).start(0xc00012c0f0)
2019-06-12T12:23:55.6529860Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/go.opencensus.io/stats/view/worker.go:158 +0x21d
2019-06-12T12:23:55.6530330Z created by go.opencensus.io/stats/view.init.0
2019-06-12T12:23:55.6530590Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/go.opencensus.io/stats/view/worker.go:32 +0x9a
2019-06-12T12:23:55.6531000Z 
2019-06-12T12:23:55.6531290Z goroutine 8 [chan receive, 10 minutes]:
2019-06-12T12:23:55.6531680Z testing.(*T).Run(0xc000634500, 0x2c53103, 0x10, 0xc000692040, 0x2)
2019-06-12T12:23:55.6532010Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/testing/testing.go:917 +0x6d2
2019-06-12T12:23:55.6532570Z github.com/projectriff/riff/pkg/k8s_test.TestWaitUntilReady(0xc000634500)
2019-06-12T12:23:55.6533000Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/pkg/k8s/wait_test.go:95 +0xe25
2019-06-12T12:23:55.6533300Z testing.tRunner(0xc000634500, 0x2cdf1d8)
2019-06-12T12:23:55.6533920Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/testing/testing.go:865 +0x164
2019-06-12T12:23:55.6534250Z created by testing.(*T).Run
2019-06-12T12:23:55.6534640Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/testing/testing.go:916 +0x69a
2019-06-12T12:23:55.6534940Z 
2019-06-12T12:23:55.6535220Z goroutine 6 [syscall, 10 minutes]:
2019-06-12T12:23:55.6535590Z os/signal.signal_recv(0x1096e41)
2019-06-12T12:23:55.6535900Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/runtime/sigqueue.go:139 +0x9f
2019-06-12T12:23:55.6536300Z os/signal.loop()
2019-06-12T12:23:55.6536620Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/os/signal/signal_unix.go:23 +0x30
2019-06-12T12:23:55.6536920Z created by os/signal.init.0
2019-06-12T12:23:55.6537340Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/os/signal/signal_unix.go:29 +0x4f
2019-06-12T12:23:55.6537610Z 
2019-06-12T12:23:55.6537990Z goroutine 22 [chan receive, 9 minutes]:
2019-06-12T12:23:55.6538310Z github.com/projectriff/riff/pkg/k8s_test.TestWaitUntilReady.func1(0xc000690100)
2019-06-12T12:23:55.6538620Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/pkg/k8s/wait_test.go:112 +0x3b3
2019-06-12T12:23:55.6539030Z testing.tRunner(0xc000690100, 0xc000692040)
2019-06-12T12:23:55.6539330Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/testing/testing.go:865 +0x164
2019-06-12T12:23:55.6539710Z created by testing.(*T).Run
2019-06-12T12:23:55.6540020Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/testing/testing.go:916 +0x69a
2019-06-12T12:23:55.6540370Z 
2019-06-12T12:23:55.6540680Z goroutine 35 [chan receive, 9 minutes]:
2019-06-12T12:23:55.6541660Z k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00079e060, 0xc0007ac000)
2019-06-12T12:23:55.6542770Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/client-go/tools/cache/controller.go:103 +0x42
2019-06-12T12:23:55.6543750Z created by k8s.io/client-go/tools/cache.(*controller).Run
2019-06-12T12:23:55.6544840Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/client-go/tools/cache/controller.go:102 +0xa3
2019-06-12T12:23:55.6545180Z 
2019-06-12T12:23:55.6545550Z goroutine 24 [select, 9 minutes]:
2019-06-12T12:23:55.6546500Z k8s.io/client-go/tools/watch.UntilWithoutRetry(0x2fe0f00, 0xc00069c090, 0x2fb7900, 0xc00079a020, 0xc000686f20, 0x1, 0x1, 0x0, 0x0, 0x0)
2019-06-12T12:23:55.6547500Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/client-go/tools/watch/until.go:75 +0x1c2
2019-06-12T12:23:55.6549230Z k8s.io/client-go/tools/watch.UntilWithSync(0x2fe0f00, 0xc00069c090, 0x2fb8680, 0xc000694180, 0x2fb32c0, 0xc0000c24e0, 0x0, 0xc000686f20, 0x1, 0x1, ...)
2019-06-12T12:23:55.6550480Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/client-go/tools/watch/until.go:131 +0x165
2019-06-12T12:23:55.6550900Z github.com/projectriff/riff/pkg/k8s.WaitUntilReady(0x2fe0f00, 0xc00069c090, 0x30039a0, 0x0, 0x2c4dda9, 0xc, 0x3033b40, 0xc0000c24e0, 0x0, 0x0)
2019-06-12T12:23:55.6551240Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/pkg/k8s/wait.go:46 +0x1a7
2019-06-12T12:23:55.6551500Z github.com/projectriff/riff/pkg/k8s_test.TestWaitUntilReady.func1.1(0xc000764720, 0x2fe0f00, 0xc00069c090, 0xc0007499c0, 0xc0000c24e0)
2019-06-12T12:23:55.6551800Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/pkg/k8s/wait_test.go:103 +0x10f
2019-06-12T12:23:55.6552040Z created by github.com/projectriff/riff/pkg/k8s_test.TestWaitUntilReady.func1
2019-06-12T12:23:55.6552240Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/pkg/k8s/wait_test.go:102 +0x2ab
2019-06-12T12:23:55.6552540Z 
2019-06-12T12:23:55.6552800Z goroutine 54 [sync.Cond.Wait, 9 minutes]:
2019-06-12T12:23:55.6553110Z runtime.goparkunlock(...)
2019-06-12T12:23:55.6553330Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/runtime/proc.go:307
2019-06-12T12:23:55.6553910Z sync.runtime_notifyListWait(0xc0007aa028, 0x0)
2019-06-12T12:23:55.6554250Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/runtime/sema.go:510 +0xf9
2019-06-12T12:23:55.6554450Z sync.(*Cond).Wait(0xc0007aa018)
2019-06-12T12:23:55.6554980Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/sync/cond.go:56 +0x8e
2019-06-12T12:23:55.6555870Z k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc0007aa000, 0xc0007a60f0, 0x0, 0x0, 0x0, 0x0)
2019-06-12T12:23:55.6556980Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/client-go/tools/cache/delta_fifo.go:431 +0xaa
2019-06-12T12:23:55.6557930Z k8s.io/client-go/tools/cache.(*controller).processLoop(0xc0007ac000)
2019-06-12T12:23:55.6558990Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/client-go/tools/cache/controller.go:150 +0x84
2019-06-12T12:23:55.6559310Z k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000645f88)
2019-06-12T12:23:55.6559680Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x62
2019-06-12T12:23:55.6560000Z k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000073f88, 0x3b9aca00, 0x0, 0xc0000c2401, 0xc00079e060)
2019-06-12T12:23:55.6560360Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0x109
2019-06-12T12:23:55.6560660Z k8s.io/apimachinery/pkg/util/wait.Until(...)
2019-06-12T12:23:55.6560870Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
2019-06-12T12:23:55.6561900Z k8s.io/client-go/tools/cache.(*controller).Run(0xc0007ac000, 0xc00079e060)
2019-06-12T12:23:55.6562870Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/client-go/tools/cache/controller.go:124 +0x447
2019-06-12T12:23:55.6564100Z k8s.io/client-go/tools/watch.NewIndexerInformerWatcher.func4(0x2fda500, 0xc0007ac000, 0xc00079a020)
2019-06-12T12:23:55.6568090Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/client-go/tools/watch/informerwatcher.go:110 +0x62
2019-06-12T12:23:55.6569140Z created by k8s.io/client-go/tools/watch.NewIndexerInformerWatcher
2019-06-12T12:23:55.6570130Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/client-go/tools/watch/informerwatcher.go:109 +0x648
2019-06-12T12:23:55.6570560Z 
2019-06-12T12:23:55.6570840Z goroutine 9 [semacquire, 9 minutes]:
2019-06-12T12:23:55.6571100Z sync.runtime_SemacquireMutex(0xc00069418c, 0x0)
2019-06-12T12:23:55.6571660Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/runtime/sema.go:71 +0x3d
2019-06-12T12:23:55.6571890Z sync.(*RWMutex).RLock(0xc000694180)
2019-06-12T12:23:55.6572230Z 	/Users/vsts/hostedtoolcache/go/1.12.0/x64/src/sync/rwmutex.go:50 +0x9c
2019-06-12T12:23:55.6573190Z k8s.io/client-go/tools/cache/testing.(*FakeControllerSource).List(0xc000694180, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
2019-06-12T12:23:55.6574280Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/client-go/tools/cache/testing/fake_controller_source.go:162 +0x52
2019-06-12T12:23:55.6575270Z k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch(0xc000632140, 0xc00079e060, 0x0, 0x0)
2019-06-12T12:23:55.6576330Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/client-go/tools/cache/reflector.go:178 +0x2c7
2019-06-12T12:23:55.6577260Z k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
2019-06-12T12:23:55.6578400Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/client-go/tools/cache/reflector.go:133 +0x4b
2019-06-12T12:23:55.6578770Z k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000136f08)
2019-06-12T12:23:55.6579050Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x62
2019-06-12T12:23:55.6579440Z k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007baf08, 0x3b9aca00, 0x0, 0x1, 0xc00079e060)
2019-06-12T12:23:55.6579710Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0x109
2019-06-12T12:23:55.6581860Z k8s.io/apimachinery/pkg/util/wait.Until(...)
2019-06-12T12:23:55.6582180Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
2019-06-12T12:23:55.6583260Z k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000632140, 0xc00079e060)
2019-06-12T12:23:55.6595180Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/client-go/tools/cache/reflector.go:132 +0x1dd
2019-06-12T12:23:55.6595690Z k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
2019-06-12T12:23:55.6596030Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:54 +0x46
2019-06-12T12:23:55.6596440Z k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000670020, 0xc0007a0080)
2019-06-12T12:23:55.6596790Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x5d
2019-06-12T12:23:55.6597190Z created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
2019-06-12T12:23:55.6597540Z 	/Users/vsts/agent/2.152.1/work/1/s/projectriff/riff/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:69 +0x70
2019-06-12T12:23:55.6597830Z 
2019-06-12T12:23:55.6598230Z rax    0x0
2019-06-12T12:23:55.6598520Z rbx    0x3700
2019-06-12T12:23:55.6599000Z rcx    0x7ffeefbfdb48
2019-06-12T12:23:55.6599330Z rdx    0x0
2019-06-12T12:23:55.6599620Z rdi    0x3fe4cc8
2019-06-12T12:23:55.6600010Z rsi    0x370100003800
2019-06-12T12:23:55.6600320Z rbp    0x7ffeefbfdbe0
2019-06-12T12:23:55.6600710Z rsp    0x7ffeefbfdb48
2019-06-12T12:23:55.6601030Z r8     0x0
2019-06-12T12:23:55.6601460Z r9     0x60
2019-06-12T12:23:55.6601860Z r10    0x0
2019-06-12T12:23:55.6602080Z r11    0x202
2019-06-12T12:23:55.6602440Z r12    0x3fe4cc8
2019-06-12T12:23:55.6602760Z r13    0x7ffeefbfdb68
2019-06-12T12:23:55.6603010Z r14    0x1
2019-06-12T12:23:55.6603390Z r15    0x7fff9cb91380
2019-06-12T12:23:55.6603720Z rip    0x7fff645bfa16
2019-06-12T12:23:55.6604100Z rflags 0x202
2019-06-12T12:23:55.6604430Z cs     0x7
2019-06-12T12:23:55.6604720Z fs     0x0
2019-06-12T12:23:55.6605090Z gs     0x0
2019-06-12T12:23:55.6605410Z *** Test killed with quit: ran too long (10m0s).
2019-06-12T12:23:55.6605800Z FAIL	github.com/projectriff/riff/pkg/k8s	600.185s

Decouple `riff doctor` command from knowledge of the cluster internals

The riff doctor command gives users the ability to perform basic checks on the health of the cluster and whether riff is ready for use. We should not bake intimate knowledge of the system runtime into the CLI.

For example, the command currently looks for the knative-build namespace. Knative Build is an implementation detail that is likely to change in the near future.

We have a few options:

  1. live with the tight coupling
  2. reduce the scope of riff doctor to only focus on user land concerns
  3. create a generic configuration that lives in the cluster that the doctor command can process

riff ... delete --all should list the resources that are deleted

The current behavior is to just display:

$ riff function delete --all
Deleted functions in namespace "default"

It would be nicer to mimic kubectl and list the deleted resources:

$ kubectl delete function --all
function.build.projectriff.io "upper" deleted

Local path builds fail on Windows pushing to Docker Hub

===> EXPORTING
[exporter] 2019/03/25 18:59:47 adding layer 'app' with diffID 'sha256:b014c26d6de60d21e1a0127a5935d36ff2f2216f82518e9273a1810b5c3e6fdc'
[exporter] 2019/03/25 18:59:47 reusing layer 'config' with diffID 'sha256:3b37c851a265be8d7b3bd5a3b0c07c0597516fdf6b8d531b88a71d5e11cd6358'
[exporter] 2019/03/25 18:59:47 reusing layer 'launcher' with diffID 'sha256:7336e39d373301c05eda65aa8640073f95983915894c567cb818668700ded279'
[exporter] 2019/03/25 18:59:48 reusing layer 'org.cloudfoundry.buildpacks.nodejs:node' with diffID 'sha256:1098b725e448f481e96249b29628f201b38ff226f71655215ba3a49366f0b03c'
[exporter] 2019/03/25 18:59:48 reusing layer 'io.projectriff.node:function' with diffID 'sha256:f7d33ea91a5275f6b5af0a262c3c15380550486f42bea8759ae0a0498b090aec'
[exporter] 2019/03/25 18:59:48 reusing layer 'io.projectriff.node:riff-invoker-node' with diffID 'sha256:fa93fae25ce7004dbe072906aea6910bf208c7de0481f8cd424e63bcff9aa045'
[exporter] 2019/03/25 18:59:48 setting metadata label 'io.buildpacks.lifecycle.metadata'
[exporter] 2019/03/25 18:59:48 setting env var 'PACK_LAYERS_DIR=/workspace'
[exporter] 2019/03/25 18:59:48 setting env var 'PACK_APP_DIR=/workspace/app'
[exporter] 2019/03/25 18:59:48 setting entrypoint '/lifecycle/launcher'
[exporter] 2019/03/25 18:59:48 setting empty cmd
[exporter] 2019/03/25 18:59:48 writing image
[exporter] 2019/03/25 18:59:48 existing blob: sha256:6e1bee0f8701f0ae53a5129dc82115967ae36faa30d7701b195dfc6ec317a51d
[exporter] 2019/03/25 18:59:48 existing blob: sha256:482a4e60757d04a87cc6376ed2cb9a14bcd57e3db1e0040ef751ea76984762d3
[exporter] 2019/03/25 18:59:48 existing blob: sha256:b55f440908175a0dc4f59fb37de18b5cfcdc8df272a33b4f6a50de729e60326e
[exporter] 2019/03/25 18:59:48 existing blob: sha256:1f23a30701c695154a9fd2ac35371e086dac4ddfcafb8e9a400225831f8866a0
[exporter] 2019/03/25 18:59:48 existing blob: sha256:fc5bba5ed9df3a0eb3fbd88053bf748bab3876ee167b98fa9458172841d01876
[exporter] 2019/03/25 18:59:48 existing blob: sha256:869da0e46476c97e3be5be0fe11fdc46f9dfdb2ce2bc3698d2970e60d2a2378a
[exporter] 2019/03/25 18:59:48 existing blob: sha256:890186096aea65abd0ed2ee77cfac580dc7fd84bd2a34f6ddbed0adb6f93e986
[exporter] 2019/03/25 18:59:48 existing blob: sha256:63366dfa0a5076458e37ebae948bc7823bab256ca27e09ab94d298e37df4c2a3
[exporter] 2019/03/25 18:59:48 existing blob: sha256:e2af0ac73140597cf13e9b2086aea83829b4826c1da8c218f166955290e5d357
[exporter] 2019/03/25 18:59:48 existing blob: sha256:898c46f3b1a1f39827ed135f020c32e2038c87ae0690a8fe73d94e5df9e6a2d6
[exporter] 2019/03/25 18:59:48 existing blob: sha256:041d4cd74a929bc4b66ee955ab5b229de098fa389d1a1fb9565e536d8878e15f
[exporter] 2019/03/25 18:59:48 existing blob: sha256:339c535925dde2ef0520b7baa9001ce43d14958f31a217f2b1855d7787018047
[exporter] 2019/03/25 18:59:48 existing blob: sha256:60048cb8980b0538857792e86bd305372837ebfd587c1a58a00faf443240d089
[exporter] 2019/03/25 18:59:48 existing blob: sha256:c52382f35f927227be148ab50b62aa675ed88af4ae1720519fe0f19e46a44867
[exporter] 2019/03/25 18:59:48 existing blob: sha256:8d1a9d129ebeb1f1ac335fd5846bbe5a2a75bbf872c227f0287df4ac94bb319a
[exporter] 2019/03/25 18:59:48 Error: failed to : UNAUTHORIZED: authentication required; [map[Type:repository Class: Name:trisberg/square-node Action:pull] map[Type:repository Class: Name:trisberg/square-node Action:push]]

Error: failed with status code: 7

Seems related to buildpacks/pack#109

Decouple from knative/pkg

The riff CLI does not interact with Knative types, so it should not have a direct dependency on github.com/knative/pkg. github.com/projectriff/system should encapsulate any usage of Knative types rather than letting them leak through.

Remind me to initialize the namespace

Sometimes my builds fail and I'm not sure why. After a bit of looking at the logs and head scratching, it dawns on me that I forgot to initialize the namespace. Perhaps riff can detect that and alert me proactively.

Provide a user friendly error if a runtime is not installed

If a user runs riff knative deployer create in a cluster that does not have the knative runtime installed, the deployer is created, but the user receives no feedback. Running riff knative deployer list shows the status as perpetually Unknown.
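One low-cost improvement (a sketch; checkRuntime is a hypothetical helper) would be to probe the runtime's API group via the discovery client before creating anything, and fail fast with a readable message:

import (
	"fmt"

	apierrs "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
)

// checkRuntime verifies that the runtime's API group/version (e.g. the group
// served by the knative runtime CRDs) is available before we create anything.
func checkRuntime(client discovery.DiscoveryInterface, groupVersion, runtime string) error {
	if _, err := client.ServerResourcesForGroupVersion(groupVersion); err != nil {
		if apierrs.IsNotFound(err) {
			return fmt.Errorf("the %s runtime does not appear to be installed in this cluster", runtime)
		}
		return err
	}
	return nil
}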

Remove `handler invoke` command

The riff handler invoke command is hidden because it's not fully baked, and most likely it never will be, as it makes too many assumptions about the network between the user and the cluster. Moreover, as load balancer services are vendor specific, there is no way we can support all environments.

Either we need to find a good way to support all clusters generically, something in the spirit of kubectl proxy, or we should remove the command completely. As it stands, people who know the command exists will use it and expect it to be a fully supported part of riff.

riff core deployer create --tail doesn't show any logs

Creating a core deployer doesn't show the logs:

$ riff core deployer create upper --function-ref upper --tail
Created deployer "upper"

while the same with the knative deployer does:

$ riff knative deployer create upper --function-ref upper --tail
Created deployer "upper"
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: [startup banner ASCII art, garbled in transcription]
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: :: Powered by Spring Boot ::        
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: Aug 09, 2019 8:49:55 PM org.springframework.boot.StartupInfoLogger logStarting
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: INFO: Starting application on upper-s84ms-deployment-6db4d5f5c6-4cvrr with PID 1 (/layers/io.projectriff.java/riff-invoker-java/BOOT-INF/classes started by root in /workspace)
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: Aug 09, 2019 8:49:55 PM org.springframework.boot.SpringApplication logStartupProfileInfo
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: INFO: No active profile set, falling back to default profiles: default
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: Aug 09, 2019 8:49:58 PM org.springframework.cloud.function.deployer.FunctionCreatorConfiguration init
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: INFO: Locating function from [file:/workspace]
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: Aug 09, 2019 8:49:58 PM org.springframework.cloud.function.deployer.FunctionCreatorConfiguration$BeanCreator create
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: INFO: No bean found. Instantiating: functions.Upper
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: Aug 09, 2019 8:49:59 PM org.springframework.cloud.function.web.flux.FunctionHandlerMapping <init>
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: INFO: FunctionCatalog: org.springframework.cloud.function.context.config.ContextFunctionCatalogAutoConfiguration$BeanFactoryFunctionCatalog@765723fc
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: Aug 09, 2019 8:50:00 PM org.springframework.boot.web.embedded.netty.NettyWebServer start
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: INFO: Netty started on port(s): 8080
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: Aug 09, 2019 8:50:00 PM org.springframework.boot.StartupInfoLogger logStarted
default/upper-s84ms-deployment-6db4d5f5c6-4cvrr[user-container]: INFO: Started application in 6.935 seconds (JVM running for 8.543)

--local-path builds not available in Windows

PS > riff function create square `
>>   --local-path c:\Users\hello\square `
>>   --artifact square.js `
>>   --tail

Error executing command:
  invalid value: --local-path is not available on Windows: --local-path

Enable CLI to define a volume mount for secrets/configMaps

We should make it possible to mount a secret or a configMap as a volumeMount for a function/app container. It should also be possible to define the mountPath and file path for the secret/configMap item.

I suggest the following syntax:
--volume-mount name:/mountPath/itemPath={secretRef|configMapRef}:resource:key

Example:

--volume-mount config:/config/application-refresh.yaml=secretRef:mypets-refresh:application.yaml

should result in:

        volumeMounts:
        - name: config
          mountPath: "/config"
          readOnly: true
      volumes:
      - name: config
        secret:
          secretName: mypets-refresh
          items:
          - key: application.yaml
            path: application-refresh.yaml
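For concreteness, a sketch of parsing that syntax into the volumeMount/volume pair shown above (parseVolumeMount is a hypothetical helper, not part of the riff codebase):

import (
	"fmt"
	"path"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

// parseVolumeMount turns "name:/mountPath/itemPath=kind:resource:key" into
// a volumeMount plus its backing volume.
func parseVolumeMount(arg string) (corev1.VolumeMount, corev1.Volume, error) {
	var vm corev1.VolumeMount
	var vol corev1.Volume
	lr := strings.SplitN(arg, "=", 2)
	if len(lr) != 2 {
		return vm, vol, fmt.Errorf("expected name:/mountPath/itemPath=kind:resource:key, got %q", arg)
	}
	left := strings.SplitN(lr[0], ":", 2)
	ref := strings.SplitN(lr[1], ":", 3)
	if len(left) != 2 || len(ref) != 3 {
		return vm, vol, fmt.Errorf("malformed --volume-mount value %q", arg)
	}
	name, mountPath, itemPath := left[0], path.Dir(left[1]), path.Base(left[1])
	vm = corev1.VolumeMount{Name: name, MountPath: mountPath, ReadOnly: true}
	vol = corev1.Volume{Name: name}
	items := []corev1.KeyToPath{{Key: ref[2], Path: itemPath}}
	switch ref[0] {
	case "secretRef":
		vol.Secret = &corev1.SecretVolumeSource{SecretName: ref[1], Items: items}
	case "configMapRef":
		vol.ConfigMap = &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: ref[1]},
			Items:                items,
		}
	default:
		return vm, vol, fmt.Errorf("unknown source kind %q", ref[0])
	}
	return vm, vol, nil
}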

can't set default image prefix in empty riff-build configMap

If a riff-build configMap exists but its data section is empty, I get a panic when setting the default-image-prefix:

$ riff credentials apply docker-push --docker-hub trisberg --set-default-image-prefix
Docker Hub password: 
Apply credentials "docker-push"
panic: assignment to entry in nil map

goroutine 1 [running]:
github.com/projectriff/riff/pkg/riff/commands.setDefaultImagePrefix(0xc000134780, 0xc0000f2370, 0xc000442020, 0x12, 0x15, 0xc000565cb0)
	/Users/trisberg/workspace/projectriff/riff/pkg/riff/commands/credential_apply.go:264 +0x129
github.com/projectriff/riff/pkg/riff/commands.(*CredentialApplyOptions).Exec(0xc0000f2370, 0x231a4c0, 0xc000276000, 0xc000134780, 0x20f9980, 0xc00015db80)
	/Users/trisberg/workspace/projectriff/riff/pkg/riff/commands/credential_apply.go:114 +0x215
github.com/projectriff/riff/pkg/cli.ExecOptions.func1(0xc00015db80, 0xc000084c00, 0x1, 0x4, 0x0, 0x0)
	/Users/trisberg/workspace/projectriff/riff/pkg/cli/options.go:65 +0xba
github.com/spf13/cobra.(*Command).execute(0xc00015db80, 0xc000084bc0, 0x4, 0x4, 0xc00015db80, 0xc000084bc0)
	/Users/trisberg/go/pkg/mod/github.com/spf13/[email protected]/command.go:762 +0x465
github.com/spf13/cobra.(*Command).ExecuteC(0xc00015cc80, 0xc00015cc80, 0x38, 0x20fca6d)
	/Users/trisberg/go/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2c0
github.com/spf13/cobra.(*Command).Execute(...)
	/Users/trisberg/go/pkg/mod/github.com/spf13/[email protected]/command.go:800
main.main()
	/Users/trisberg/workspace/projectriff/riff/cmd/riff/main.go:36 +0x58
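The panic points at an assignment into the configMap's nil Data map, so the likely fix (a sketch; the variable names are illustrative) is a nil guard before the write:

// in setDefaultImagePrefix, before writing the key
if configMap.Data == nil {
	configMap.Data = map[string]string{}
}
configMap.Data["default-image-prefix"] = prefix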

Avoid the color coded output from pack when running on Windows

It looks ugly in PowerShell (raw ANSI escape sequences, shown here as ESC, are rendered literally):

ESC[90mESC[0mESC[36m===> EXPORTINGESC[0m
ESC[90mESC[0m[ESC[36mexporterESC[0m] 2019/03/25 18:59:47 adding layer 'app' with diffID 'sha256:b014c26d6de60d21e1a0127a5935d36ff2f2216f82518e9273a1810b5c3e6fdc'
ESC[90mESC[0m[ESC[36mexporterESC[0m] 2019/03/25 18:59:47 reusing layer 'config' with diffID 'sha256:3b37c851a265be8d7b3bd5a3b0c07c0597516fdf6b8d531b88a71d5e11cd6358'
ESC[90mESC[0m[ESC[36mexporterESC[0m] 2019/03/25 18:59:47 reusing layer 'launcher' with diffID 'sha256:7336e39d373301c05eda65aa8640073f95983915894c567cb818668700ded279'
ESC[90mESC[0m[ESC[36mexporterESC[0m] 2019/03/25 18:59:48 reusing layer 'org.cloudfoundry.buildpacks.nodejs:node' with diffID 'sha256:1098b725e448f481e96249b29628f201b38ff226f71655215ba3a49366f0b03c'
ESC[90mESC[0m[ESC[36mexporterESC[0m] 2019/03/25 18:59:48 reusing layer 'io.projectriff.node:function' with diffID 'sha256:f7d33ea91a5275f6b5af0a262c3c15380550486f42bea8759ae0a0498b090aec'
ESC[90mESC[0m[ESC[36mexporterESC[0m] 2019/03/25 18:59:48 reusing layer 'io.projectriff.node:riff-invoker-node' with diffID 'sha256:fa93fae25ce7004dbe072906aea6910bf208c7de0481f8cd424e63bcff9aa045'
...

The pack command has a --no-color flag.

Duplicate log messages from `riff function create --verbose`

Sometimes when running riff function create with the --verbose flag, the command will emit duplicate chunks of logs. In some cases logs are duplicated 14 times. Unlike with Knative Build 0.3, all of the logs are reported as originating from the same pod, and there is only one build pod in the cluster.

Best guess is that kail is somehow getting confused about its state.

This is a nuisance, but it is not harmful: the system is doing the right thing under the hood; only the output from the riff CLI is affected.

Resource name shell completion

We have shell completion for command names, but it would be even more helpful if completion worked for resource names as well. For example, to delete a function the command is:

riff function delete my-function

Shell completion works for:

  • riff fun<tab> -> riff function
  • riff function de<tab> -> riff function delete

But does not currently work for:

  • riff function delete my<tab>
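Recent versions of cobra (v1+) support dynamic completion of positional arguments, which could be fed from a live listing of the resource. A sketch (listFunctionNames is a hypothetical helper that queries the cluster):

deleteCmd := &cobra.Command{
	Use: "delete <name>",
	ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
		// offer live function names; fall back to no completion on error
		names, err := listFunctionNames(cmd.Context())
		if err != nil {
			return nil, cobra.ShellCompDirectiveError
		}
		return names, cobra.ShellCompDirectiveNoFileComp
	},
}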

CLI DX: kafka provider name does not match --provider option for stream create

CLI users have to translate the <name> of the kafka provider into <name>-kafka-provisioner when they create a stream using that provider. The CLI help message provides no help in this regard.

riff streaming kafka-provider create franz --bootstrap-servers kafkabroker:9092

riff streaming stream create in  --provider franz-kafka-provisioner --content-type application/json 

riff service status should provide more details for service in error state

Currently, riff service status prints just the Ready status condition, e.g.:

$ riff service status square --namespace test
Last Transition Time:        2018-08-17T15:51:08+01:00
Message:                     Configuration "square" does not have any ready Revision.
Reason:                      RevisionMissing
Status:                      False
Type:                        Ready

whereas the other status conditions may provide more detail, e.g.:

$ kubectl describe service.serving.knative.dev square -n test
...
Status:
  Conditions:
    Last Transition Time:  2018-08-17T14:51:08Z
    Message:               Revision creation failed with message: "Internal error occurred: admission webhook \"webhook.build.knative.dev\" denied the request: mutation failed: serviceaccounts \"riff-build\" not found".
    Reason:                RevisionFailed
    Status:                False
    Type:                  ConfigurationsReady
    Last Transition Time:  2018-08-17T14:51:08Z
    Message:               Configuration "square" does not have any ready Revision.
    Reason:                RevisionMissing
    Status:                False
    Type:                  Ready
    Last Transition Time:  2018-08-17T14:51:08Z
    Message:               Configuration "square" does not have any ready Revision.
    Reason:                RevisionMissing
    Status:                False
    Type:                  RoutesReady
  Domain:                  square.test.example.com
  Domain Internal:         square.test.svc.cluster.local
  Observed Generation:     1
Events:                    <none>

The command should, in the error case, show any additional status conditions, e.g.:

$ riff service status square --namespace test
Last Transition Time:        2018-08-17T15:51:08+01:00
Message:                     Configuration "square" does not have any ready Revision.
Reason:                      RevisionMissing
Status:                      False
Type:                        Ready
Last Transition Time:  2018-08-17T14:51:08+01:00
Message:               Revision creation failed with message: "Internal error occurred: admission webhook \"webhook.build.knative.dev\" denied the request: mutation failed: serviceaccounts \"riff-build\" not found".
Reason:                RevisionFailed
Status:                False
Type:                  ConfigurationsReady
Last Transition Time:  2018-08-17T14:51:08+01:00
Message:               Configuration "square" does not have any ready Revision.
Reason:                RevisionMissing
Status:                False
Type:                  RoutesReady
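A sketch of that behavior (the types and helpers are hypothetical): print the Ready condition first, and append the remaining conditions when it is not True.

import (
	"fmt"
	"io"
)

type condition struct {
	Type, Status, Reason, Message, LastTransitionTime string
}

func printStatus(w io.Writer, conds []condition) {
	ready := false
	for _, c := range conds {
		if c.Type == "Ready" {
			printCondition(w, c)
			ready = c.Status == "True"
		}
	}
	if ready {
		return // healthy: the Ready condition alone is enough
	}
	// error case: surface the remaining conditions for extra detail
	for _, c := range conds {
		if c.Type != "Ready" {
			printCondition(w, c)
		}
	}
}

func printCondition(w io.Writer, c condition) {
	fmt.Fprintf(w, "Last Transition Time:  %s\n", c.LastTransitionTime)
	fmt.Fprintf(w, "Message:               %s\n", c.Message)
	fmt.Fprintf(w, "Reason:                %s\n", c.Reason)
	fmt.Fprintf(w, "Status:                %s\n", c.Status)
	fmt.Fprintf(w, "Type:                  %s\n", c.Type)
}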

Confused about riff function create docs

The docs show this synopsis:

riff function create [flags]

but then all the examples are

riff function create <language> <name> [flags]

and it looks like <language> and <name> are probably mandatory? It's even more unclear which of the flags are mandatory (or what they actually mean).
