
marketplace-k8s-app-tools

Overview

This repository contains a set of tools supporting the development of Kubernetes applications deployable via Google Cloud Marketplace.

Getting Started

See the how to build your application deployer documentation.

References

Examples

  • The marketplace-k8s-app-example repository contains example applications.

  • The click-to-deploy repository contains more examples. This is the source code backing Google Click to Deploy Kubernetes applications listed on Google Cloud Marketplace.

Coding style

We follow Google's coding style guides.

Development

Setting up

Log in gcloud with a Service Account

Instead of using your personal credentials, it's recommended to log in with a Service Account.

A new Service Account and proper permissions can be created using the following commands. PROJECT-ID is the (non-numeric) identifier of your GCP project. This assumes that you're already logged in with gcloud.

gcloud iam service-accounts create \
  marketplace-dev-robot \
  --project PROJECT-ID \
  --display-name "GCP Marketplace development robot"

gcloud projects add-iam-policy-binding PROJECT-ID \
  --member serviceAccount:marketplace-dev-robot@PROJECT-ID.iam.gserviceaccount.com \
  --role roles/editor

gcloud projects add-iam-policy-binding PROJECT-ID \
  --member serviceAccount:marketplace-dev-robot@PROJECT-ID.iam.gserviceaccount.com \
  --role roles/container.admin

The created Service Account email will be marketplace-dev-robot@PROJECT-ID.iam.gserviceaccount.com. Note that you can replace marketplace-dev-robot with another name.

Now you can switch gcloud to using the Service Account by creating and downloading a one-time key and activating it.

gcloud iam service-accounts keys create ~/marketplace-dev-robot-key.json \
  --iam-account marketplace-dev-robot@PROJECT-ID.iam.gserviceaccount.com

gcloud auth activate-service-account \
  --key-file ~/marketplace-dev-robot-key.json

Keep the ~/marketplace-dev-robot-key.json credential key in a safe location. Note that this is the only copy; the generated key cannot be downloaded again.

Log in application default credentials for kubectl

kubectl needs application default credentials to connect to GKE. Log in using the following command:

gcloud auth application-default login

Running the doctor command

At the very least, you need to be connected to a GKE cluster. Follow these instructions to ensure your environment is properly set up.

Run tests locally

Run unit tests:

make tests/py

Run integration tests:

make tests/integration

Build deployers locally

Set deployers container tag:

export MARKETPLACE_TOOLS_TAG=local-$USER

Build container images:

make marketplace/build

Issues

Minor feature request - verify schema integrity before or when building deployment container

Say I have a schema.yaml like this:

properties:
  foo:
    type: string
    default: hi

required:
- bar
- baz

This schema doesn't make any sense, and the deployment container is guaranteed to fail at deploy time, when the Python check script sees that a required variable was never declared and hence probably wasn't passed.

It'd be helpful if there were some validation steps/checks when I'm building my Docker container (like so: https://github.com/GoogleCloudPlatform/marketplace-k8s-app-example/blob/master/nginx/Makefile#L31).

Lacking this, I usually go through a cycle of build/push the container, run the start.sh job, and monitor the Kubernetes logs, only to discover that what I put into the container never made sense in the first place.
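
For what it's worth, a minimal sketch of the kind of pre-build check being requested, assuming a schema.yaml shaped like the one above (the script name and how it would hook into the build are hypothetical):

import sys
import yaml

def check_schema(path):
  """Fail fast if schema.yaml requires properties it never declares."""
  with open(path) as f:
    schema = yaml.safe_load(f)
  declared = set((schema.get('properties') or {}).keys())
  required = set(schema.get('required') or [])
  missing = required - declared
  if missing:
    sys.exit('schema.yaml requires undeclared properties: ' + ', '.join(sorted(missing)))

if __name__ == '__main__':
  check_schema(sys.argv[1] if len(sys.argv) > 1 else 'schema.yaml')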

Minor feature request - kubernetes specific datatypes for schema.yml

In the same way that you already provide these:

  reportingSecret:
    type: string
    x-google-marketplace:
      type: REPORTING_SECRET  

There are certain special datatypes which the UI presumably enforces; this one, I guess, should be a base64-encoded SA key.

Other Kubernetes primitives would also be really useful as datatypes. For example:

  • Volume sizes/disk allocations
  • CPU resource requests

E.g. I want to provide variables like this:

  cpu:
    title: CPU units per node
    type: string
    default: "1"
  mem:
    title: Memory per node
    type: string
    default: "2Gi"
  volumeSize:
    title: Disk allocation to core nodes
    type: string
    default: "10Gi"
  volumeStorageClass:
    title: Storage class for nodes, pd-ssd for SSD storage recommended.
    type: string
    default: "pd-ssd"
    enum:
    - pd-ssd
    - pd-standard

I'm imagining something like standardized regexes or specialized datatypes so that I can ensure the user doesn't request -3m CPUs, or fishGi of memory.
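
As a rough illustration of what such validation could look like (the patterns below only approximate Kubernetes quantity syntax; they are not the canonical grammar):

import re

# Illustrative approximations, not the full Kubernetes quantity grammar.
CPU_PATTERN = re.compile(r'^[0-9]+(\.[0-9]+)?m?$')              # e.g. "1", "0.5", "500m"
MEMORY_PATTERN = re.compile(r'^[0-9]+(Ki|Mi|Gi|Ti|K|M|G|T)?$')  # e.g. "2Gi", "512Mi"

for name, value, pattern in [('cpu', '1', CPU_PATTERN),
                             ('mem', '2Gi', MEMORY_PATTERN),
                             ('mem', 'fishGi', MEMORY_PATTERN),
                             ('cpu', '-3m', CPU_PATTERN)]:
  print(name, value, 'ok' if pattern.match(value) else 'INVALID')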

Documentation is spread about between onboarding guide, tools, and example repos

@moxious reported:
"It seems most of the right docs are in place but right now I think the challenge is more that they're in many places. There are several repos which git submodule one another, and READMEs in various spots. It just takes some reading and discovery."

My 2c:

  • The onboarding guide should provide a minimal outline of all the development steps, and highlight the key contracts (e.g. Job spec).
  • The tools repo README.md should have an overview describing the important tools (base images, start.sh, etc.), and perhaps link to other more detailed README.md files in the tools repo.
  • The example repo README.md should provide minimal instructions for installing an example app.

Current master branch broken due to minor typo

This right here:
https://github.com/GoogleCloudPlatform/marketplace-k8s-app-tools/blob/master/marketplace/deployer_util/resources.py#L29

This should be app_uid (see line 16 of the same file).

+ separate_tester_resources.py --app_uid ef330c74-7ad1-11e8-8eef-42010a8000e8 --app_name testrun --app_api_version app.k8s.io/v1alpha1 --manifests /data/resources.yaml --out_manifests /data/resources.yaml --out_test_manifests /data/tester.yaml
Reading /data/resources.yaml
INFO Prod resource: Secret/testrun-neo4j-secrets
INFO Prod resource: ConfigMap/testrun-neo4j-ubc
INFO Prod resource: Service/testrun-neo4j
INFO Prod resource: Service/testrun-neo4j-readreplica-svc
INFO Tester resource: Pod/testrun-tester
Traceback (most recent call last):
  File "/bin/separate_tester_resources.py", line 82, in <module>
    main()
  File "/bin/separate_tester_resources.py", line 68, in main
    resource=resource)
  File "/bin/resources.py", line 29, in set_resource_ownership
    if existing_owner_reference['uid'] == appuid:
NameError: global name 'appuid' is not defined
INFO Deleting namespace "apptest-5bccab40-b023-4158-9efe-e728cb156d84"
namespace "apptest-5bccab40-b023-4158-9efe-e728cb156d84" deleted
make: *** [app/verify] Error 1

This commit: 4a38574#diff-e2d705b7216024c83c27a080df463dc4 updated the name of the parameter, but not its usage in the function body.
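
For clarity, the fix is the one-line rename inside set_resource_ownership (surrounding code elided):

# marketplace/deployer_util/resources.py, in set_resource_ownership(app_uid, ...):
# before (raises NameError):
if existing_owner_reference['uid'] == appuid:
# after:
if existing_owner_reference['uid'] == app_uid: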

Helm deployer create_manifests.sh requires NAME, NAMESPACE

This line right here: https://github.com/GoogleCloudPlatform/marketplace-k8s-app-tools/blob/master/marketplace/deployer_helm_base/create_manifests.sh#L39

If my schema.yaml doesn't use NAME and NAMESPACE as variable names, then the deployer fails every time.

From the wordpress example and others, it looks like using the lower-case equivalents should work. I've been following those examples, but the lower-case versions don't work; the deployer seems to die right there, because expand_config requires the schema to contain NAME and NAMESPACE.

As a workaround, I'm just using the upper-case variants, but this seems to be at odds with the rest of the examples and causes some confusing inconsistencies and errors.
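
For reference, a minimal sketch of the workaround described above: declaring the upper-case properties directly in schema.yaml (plain string properties shown; other properties omitted):

properties:
  NAME:
    type: string
  NAMESPACE:
    type: string
required:
- NAME
- NAMESPACE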

"Running on GCE" setup steps are confusing

@moxious reports:
"The docs just above the cluster setup docs are a tad confusing, in that I can't tell whether you're ultimately recommending to do all of these steps from a GCE VM (in which case extra steps you've provided are needed) or not. I'm doing it all from a shell on my local machine."

My 2c: agree, most users are going to use a local development environment. We should move the "running on GCE" steps to an appendix.

Recommendation on how to signal application-level successful deploy

In deploy_with_tests.sh, prior to deploying the test resources, the wait_for_ready.py script runs to verify that the base deploy resources have all reached a healthy state.

For some deploys, though, application-level initialization may not be finished at that point (for example, during cluster formation: three pods are deployed and all come up successfully, but the software inside them isn't serving yet because the cluster hasn't formed).

In this case, do you recommend:

  • Modify the base deploy resources (how?) so they don't signal application health until this condition has been met (see the sketch after this list)
  • Add checks into the test resources to poll/wait until the application is fully up before proceeding
  • Something else?
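
To illustrate the first option: a readiness probe on the application pods would keep wait_for_ready.py from reporting the application healthy until an in-pod check passes. The probe script here is purely hypothetical.

# Illustrative fragment of a container spec:
readinessProbe:
  exec:
    command: ["sh", "-c", "/scripts/check-cluster-formed.sh"]
  initialDelaySeconds: 30
  periodSeconds: 10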

Should setownership.py handle yaml Lists?

Currently the logic just looks at the top-level resource kind:

for resource in resources:
  if included_kinds is None or resource["kind"] in included_kinds:
    log("Application '{:s}' owns '{:s}/{:s}'".format(
        app_name, resource["kind"], resource["metadata"]["name"]))
    resource = copy.deepcopy(resource)
    set_resource_ownership(app_uid=app_uid,
                           app_name=app_name,
                           app_api_version=app_api_version,
                           resource=resource)
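
One possible shape for List support, sketched around the loop above (the helper name is illustrative): flatten any top-level v1 List into its items before applying the kind filter.

def expand_lists(resources):
  """Yield resources, flattening any top-level v1 List into its items."""
  for resource in resources:
    if resource.get('kind') == 'List':
      for item in resource.get('items', []):
        yield item
    else:
      yield resource

for resource in expand_lists(resources):
  if included_kinds is None or resource["kind"] in included_kinds:
    # ... set ownership exactly as in the existing loop ...
    pass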

Do we need this in our schema.yaml to run in GKE Launcher runtime environment?

operatorServiceAccount:
  type: string
  x-google-marketplace:
    type: SERVICE_ACCOUNT
    serviceAccount:
      roles:
      # You can list one or more roles following the examples below.
      - type: ClusterRole  # This is a cluster-wide ClusterRole
        rulesType: PREDEFINED
        rulesFromRoleName: edit  # Use predefined role named "edit"
      - type: Role  # This is a namespaced Role
        rulesType: CUSTOM  # We specify our own custom RBAC rules
        rules:
        - apiGroups: ['apps.kubernetes.io/v1alpha1']
          resources: ['Application']
          verbs: ['*']

In the development cycle, we use "make app/install", which creates the service account on our behalf. We're wondering if we need the above defined in our schema.yaml to run in the real GKE Launcher environment.

Deploy process exits with error code 1 after succeeding

Tail end of the log dump:

deployer | May 9, 2018, 1:45:08 PM | + echo 'Marking deployment of application "myneo4jtest" as succeeded.'
deployer | May 9, 2018, 1:45:08 PM | Marking deployment of application "myneo4jtest" as succeeded.
deployer | May 9, 2018, 1:45:08 PM | + kubectl patch applications/myneo4jtest --namespace=default --type=merge --patch 'metadata:
deployer | May 9, 2018, 1:45:08 PM | annotations:
deployer | May 9, 2018, 1:45:08 PM | kubernetes-engine.cloud.google.com/application-deploy-status: Succeeded'
deployer | May 9, 2018, 1:45:08 PM | application "myneo4jtest" not patched

Under workloads, the deployer job is marked Error with exit code 1.

This line right here is failing. I'm unsure why, or whether it's even important, but it's noteworthy that this patch failure causes the (perceived) failure of the entire deploy.

https://github.com/GoogleCloudPlatform/marketplace-k8s-app-tools/blob/master/marketplace/deployer_util/post_success_status.sh#L26

Helm deployer should not clobber Chart's values.yaml

Reported by @moxious:

If a Helm chart already has a values.yaml file, the deployer should not overwrite it. Instead, the deployer should create an additional values file based on the ConfigMap, overriding only those specific values.

This should be possible using the -f command flag. Something like:
helm template -f overrides.yaml
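
A sketch of what that could look like, assuming the deployer writes the ConfigMap-derived values to a separate file (the paths and names here are illustrative):

# The chart's own values.yaml stays in place; user-supplied values layer on top of it.
helm template /opt/chart --name "$NAME" --namespace "$NAMESPACE" -f /data/overrides.yaml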

Recent refresh from master breaks building deployer_helm

After refreshing from the master branch, make app/build in my app is now broken with the following error:

 ---> c94f5b56dcb7
Step 11/15 : FROM gcr.io/google-marketplace-tools/k8s/deployer_helm
Get https://gcr.io/v2/google-marketplace-tools/k8s/deployer_helm/manifests/latest: denied: Token exchange failed for project 'google-marketplace-tools'. Please enable or contact project owners to enable the Google Container Registry API in Cloud Console at https://console.cloud.google.com/apis/api/containerregistry.googleapis.com/overview?project=google-marketplace-tools before performing this operation.

Is it possible that something in this commit caused it? 201334a

@huyhg

Need base image updates (critical vulns) to deployer_helm_base

I'm developing a deployment container using this as a base:

https://github.com/GoogleCloudPlatform/marketplace-k8s-app-tools/tree/master/marketplace/deployer_helm_base

GCR performs automatic security scanning, and for my image (which is just a bit of shell layered on top of this one) it reported 264 vulnerabilities: 4 Critical, 57 High, 172 Medium, 15 Low, 16 Unknown.

I assume that most or all of these are derived from the base of your base, launcher.gcr.io/google/debian9.

To be clear, this is not a pressing need, since I expect these deployment containers to be transient, but I noticed it and thought it would be useful to bring up.

Better documentation: Add descriptions for all scripts

All scripts should have comments at the top describing what they are doing. This includes bash as well as python scripts.

Most bash scripts don't have documentation while most python scripts currently do. We just need to keep them consistent!

config_helper.py doesn't support dotted param names

Downstream of #73: I corrected that, moved on, and found another gotcha. See the dump below.

This looks like it can be overcome by just renaming my parameter. I'd prefer not to do that, because I'm basing my work on another Helm chart, and less drift from the conventions there is desirable for maintenance purposes. But I can change it if this isn't supportable; if it isn't, it would be good to document this as a limitation of what you can do in schema.yaml.

deployer | May 8, 2018, 11:00:42 AM | config_helper.InvalidName: Invalid config parameter name: core.numberOfServers
deployer | May 8, 2018, 11:00:42 AM | raise InvalidName('Invalid config parameter name: {}'.format(filename))
deployer | May 8, 2018, 11:00:42 AM | File "/bin/config_helper.py", line 34, in read_values_to_dict
deployer | May 8, 2018, 11:00:42 AM | values = read_values_to_dict(args.values_dir, args.encoding)
deployer | May 8, 2018, 11:00:42 AM | File "/bin/expand_config.py", line 61, in main
deployer | May 8, 2018, 11:00:42 AM | main()
deployer | May 8, 2018, 11:00:42 AM | File "/bin/expand_config.py", line 128, in <module>
deployer | May 8, 2018, 11:00:42 AM | Traceback (most recent call last):
deployer | May 8, 2018, 11:00:41 AM | + /bin/expand_config.py

RBAC doesn't work on a GCE instance

The identity kubectl uses on a GCE instance is neither the logged-in user nor the gcloud service account. We need to find out (1) where this identity comes from and (2) how to grant it the cluster-admin role.

Support deployer parameters specified as YAML and JSON

Currently we mount the parameter config map into a directory, where file names are parameter names and file contents are parameter values. The limitation of this approach is that values have to be stringified as file contents.

We want to support specifying a single YAML or JSON file containing a dictionary of parameter names and values. Besides the different mounting, the config map content will have to change: instead of multiple keys, it will have a single key whose content is the YAML or JSON.

The deployer will continue to support all three modes of passing parameters, with the following precedence:

  • (A) /data/values.yaml
  • (B) /data/values.json
  • (C) /data/values/*

The UI will configure both (C) and either (A) or (B) for a period of time for backward compatibility, similar to what it does right now with environment variables (although it's time to drop the env variables). Then it will drop (C).
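
For illustration, the same two parameters expressed in each mode (the names and values are examples only):

# (A) /data/values.yaml
name: myapp
replicas: 3

# (B) /data/values.json
{"name": "myapp", "replicas": 3}

# (C) /data/values/ -- one file per parameter, with stringified contents
#   /data/values/name      contains: myapp
#   /data/values/replicas  contains: 3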

start.sh appears not to support integer arguments

See below:

The particulars of what I'm calling here don't really matter, because I'm iterating pretty quickly. Just note the "core.numberOfServers": 4 bit, which corresponds to a schema.yaml entry like this:

  core.numberOfServers:
    title: Server replicas
    type: int
    default: 3
    minimum: 3

Full Dump demonstrating the integer error:

$ vendor/marketplace-k8s-app-tools/scripts/start.sh \
>    --deployer=$DEPLOYER_IMAGE \
>    --parameters='{"NAMESPACE": "default", "APP_INSTANCE_NAME": "myneo4j", "core.numberOfServers":4, "image": "gcr.io/neo4j-k8s-marketplace-public/neo4j:3.3.5-enterprise"}'
+ set -e
+ set -o pipefail
+ for i in '"$@"'
+ case $i in
+ deployer=gcr.io/neo4j-k8s-marketplace-public/neo4j-deployer:latest
+ shift
+ for i in '"$@"'
+ case $i in
+ parameters='{"NAMESPACE": "default", "APP_INSTANCE_NAME": "myneo4j", "core.numberOfServers":4, "image": "gcr.io/neo4j-k8s-marketplace-public/neo4j:3.3.5-enterprise"}'
+ shift
+ [[ -z gcr.io/neo4j-k8s-marketplace-public/neo4j-deployer:latest ]]
+ [[ -z {"NAMESPACE": "default", "APP_INSTANCE_NAME": "myneo4j", "core.numberOfServers":4, "image": "gcr.io/neo4j-k8s-marketplace-public/neo4j:3.3.5-enterprise"} ]]
+ [[ -z '' ]]
+ entrypoint=/bin/deploy.sh
++ echo '{"NAMESPACE": "default", "APP_INSTANCE_NAME": "myneo4j", "core.numberOfServers":4, "image": "gcr.io/neo4j-k8s-marketplace-public/neo4j:3.3.5-enterprise"}'
++ jq -r .APP_INSTANCE_NAME
+ name=myneo4j
++ echo '{"NAMESPACE": "default", "APP_INSTANCE_NAME": "myneo4j", "core.numberOfServers":4, "image": "gcr.io/neo4j-k8s-marketplace-public/neo4j:3.3.5-enterprise"}'
++ jq -r .NAMESPACE
+ namespace=default
+ kubectl apply --namespace=default --filename=-
application "myneo4j" configured
++ kubectl get applications/myneo4j --namespace=default '--output=jsonpath={.metadata.uid}'
+ application_uid=2e956adc-52cf-11e8-a663-42010a80010e
+ kubectl apply --namespace=default --filename=-
serviceaccount "myneo4j-deployer-sa" unchanged
rolebinding "myneo4j-deployer-rb" unchanged
+ kubectl apply --filename=- --output=json --dry-run
+ jq -s '.[0].data = .[1] | .[0]' - /dev/fd/63
+ kubectl apply --namespace=default --filename=-
++ echo '{"NAMESPACE": "default", "APP_INSTANCE_NAME": "myneo4j", "core.numberOfServers":4, "image": "gcr.io/neo4j-k8s-marketplace-public/neo4j:3.3.5-enterprise"}'
Error from server: error when applying patch:
{"data":{"core.numberOfServers":4,"image":"gcr.io/neo4j-k8s-marketplace-public/neo4j:3.3.5-enterprise"},"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"data\":{\"APP_INSTANCE_NAME\":\"myneo4j\",\"NAMESPACE\":\"default\",\"core.numberOfServers\":4,\"image\":\"gcr.io/neo4j-k8s-marketplace-public/neo4j:3.3.5-enterprise\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"creationTimestamp\":\"2018-05-08T14:50:44Z\",\"name\":\"myneo4j-deployer-config\",\"namespace\":\"default\",\"ownerReferences\":[{\"apiVersion\":\"v1alpha\",\"blockOwnerDeletion\":true,\"kind\":\"Application\",\"name\":\"myneo4j\",\"uid\":\"2e956adc-52cf-11e8-a663-42010a80010e\"}],\"resourceVersion\":\"5498\",\"selfLink\":\"/api/v1/namespaces/default/configmaps/myneo4j-deployer-config\",\"uid\":\"2f4efec7-52cf-11e8-a663-42010a80010e\"}}\n"}}}
to:
&{0xc421386780 0xc42024b030 default myneo4j-deployer-config STDIN 0xc4215da5e0 0xc4215da880 5498 false}
for: "STDIN": cannot convert int64 to string

Attach additional labels to Helm Chart's resources

Following the current convention proposed by SIG Apps, resources that are part of the application should carry an "app.kubernetes.io/name" label.

After discussing the issue with @huyhg: Helm's base deployer should automatically add such labels to the resources if they are not present. Typically, they should have the same value as the conventional "release" label.
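
For example, each rendered resource would end up with a label like the following (the value mirrors the release name from the examples above):

metadata:
  labels:
    app.kubernetes.io/name: myneo4j  # same value as the conventional "release" label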

start.sh should specify restartPolicy=Never

Right here in start.sh: https://github.com/GoogleCloudPlatform/marketplace-k8s-app-tools/blob/master/scripts/start.sh#L134

I would suggest adding "Never" as the restart policy. In development, my deployment container is crashing a lot (due to various things I'm working through), and I notice it's crashing in a loop.

Because deploy containers don't necessarily promise to be idempotent, I would think restarting a deploy container would be very undesirable. In the worst case, creating resources in a loop would be very very bad. :)
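
Concretely, the pod spec generated by start.sh would include something like this (fragment only; the container details mirror the example above):

spec:
  restartPolicy: Never  # don't automatically rerun a failed, possibly non-idempotent deployer
  containers:
  - name: deployer
    image: gcr.io/neo4j-k8s-marketplace-public/neo4j-deployer:latest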

Run app.Makefile scripts from within containers

Today we implicitly require that python, kubectl, and jq are installed (and at matching versions). Ideally we would remove this setup step and gotcha by running those tools inside containers configured with the same environment as the deployer containers.

Test resources properly sorted and applied, but Pods not found

As a starting point I'm trying to apply testing similar to the nginx example. I believe everything is set up properly now (and the nginx make app/verify does work for me locally). Applying it to my configuration though, this testing fails, maybe somewhere in run_tester.py.

The log below shows what test resources I'm applying, and seems to prove that separate_tester_resources.py worked correctly. The deployer step succeeds, and everything looks OK in the GKE console. run_tester.py should apply the test resources, but it doesn't seem to be able to see anything it applied; consequently, it fails to find those pods and the process fails.

Watching this in the GKE console as it happens, I can't see any evidence that testrun-tester ever really got created, although I did some debugging on run_tester.py and the kubectl apply definitely gets called.

Any suggestions? My git repo is up to date with everything needed to reproduce.

Log:

+ wait_for_ready.py --name testrun --namespace apptest-0816965a-e358-4eb3-9058-7ec8589ab63a --timeout 300
INFO Wait 300 seconds for the application 'testrun' to get into ready state
INFO top level resources: 10
INFO Initialization: Found applications/testrun ready status to be True.
INFO Wait 30 seconds to make sure app stays in healthy state.
INFO top level resources: 10
INFO top level resources: 10
INFO top level resources: 10
INFO top level resources: 10
INFO top level resources: 10
INFO top level resources: 10
+ tester_manifest=/data/tester.yaml
+ [[ -e /data/tester.yaml ]]
+ cat /data/tester.yaml
---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    marketplace.cloud.google.com/verification: test
  labels:
    app.kubernetes.io/name: testrun
  name: testrun-tester
  ownerReferences:
  - apiVersion: app.k8s.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Application
    name: testrun
    uid: ce8ef389-7336-11e8-81de-42010a80012f
  - apiVersion: app.k8s.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Application
    name: testrun
    uid: ce8ef389-7336-11e8-81de-42010a80012f
spec:
  containers:
  - args:
    - /tester/run.sh
    command:
    - bash
    image: gcr.io/neo4j-k8s-marketplace-public/neo4j-tester:latest
    name: tester
    volumeMounts:
    - mountPath: /tester
      name: config-volume
  restartPolicy: Never
  volumes:
  - configMap:
      name: testrun-test
    name: config-volume
---
apiVersion: v1
data:
  run.sh: "set -x\nendpoint=\"http://testrun-neo4j:apptest-0816965a-e358-4eb3-9058-7ec8589ab63a.svc.cluster.local:7474\"\
    \necho GET $endpoint\nhttp_status_code=$(curl -o /dev/null -s -w \"%{http_code}\\\
    n\" $endpoint)\necho \"Expected http status code: 200\"\necho \"Actual http status\
    \ code: $http_status_code\"\nif [[ \"$http_status_code\" == \"200\" ]]; then\n\
    \  echo SUCCESS\nelse\n  echo FAILURE\n  exit 1\nfi"
kind: ConfigMap
metadata:
  annotations:
    marketplace.cloud.google.com/verification: test
  labels:
    app.kubernetes.io/name: testrun
  name: testrun-test
  ownerReferences:
  - apiVersion: app.k8s.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Application
    name: testrun
    uid: ce8ef389-7336-11e8-81de-42010a80012f
  - apiVersion: app.k8s.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Application
    name: testrun
    uid: ce8ef389-7336-11e8-81de-42010a80012f
+ run_tester.py --namespace apptest-0816965a-e358-4eb3-9058-7ec8589ab63a --manifest /data/tester.yaml
Reading /data/tester.yaml
Error from server (NotFound): pods "testrun-tester" not found

INFO retrying
Error from server (NotFound): pods "testrun-tester" not found

INFO retrying
[... the same NotFound error and retry repeat until the retries are exhausted ...]
INFO Deleting namespace "apptest-0816965a-e358-4eb3-9058-7ec8589ab63a"
namespace "apptest-0816965a-e358-4eb3-9058-7ec8589ab63a" deleted

How to get secrets from within test

I need to fetch a password, which is stored in a secret.

Within my test container, I do this:

PASSWORD=$(kubectl get secret -n "{{ .Release.Namespace }}" "{{ .Release.Name }}-app-secrets" -o jsonpath='{.data.password}' | base64 --decode)

The password is needed in order to test the service previously deployed.

This fails, due to permissions:

Error from server (Forbidden): secrets "testrun-app-secrets" is forbidden: User "system:serviceaccount:apptest-751c11cf-087a-4274-95ff-2665b4947a99:default" cannot get secrets in the namespace "apptest-751c11cf-087a-4274-95ff-2665b4947a99": Unknown user "system:serviceaccount:apptest-751c11cf-087a-4274-95ff-2665b4947a99:default"

What's the recommendation in cases like this? Is some special configuration necessary to elevate the privileges of the test container, or is there another way of passing the generated password so that the kubectl get secret call isn't necessary?
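
One possible alternative, sketched here rather than an official recommendation: inject the secret into the tester pod directly, so the test doesn't need get permission on secrets. The names mirror those in the error above.

# Fragment of the tester Pod spec:
containers:
- name: tester
  image: gcr.io/neo4j-k8s-marketplace-public/neo4j-tester:latest
  env:
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: testrun-app-secrets
        key: password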

Missing Step between installing crd and deploying application

Is there a step missing between getting set up with the tools and installing the CRD to the cluster here,

and the next set of steps to actually deploy? I assume the steps right after getting set up should lead to deploying the app here:

Add usage/validation to start_test.sh

This script: https://github.com/GoogleCloudPlatform/marketplace-k8s-app-tools/blob/master/scripts/start_test.sh

Since the script lacks documentation, you have to read it to determine that certain parameters are required and that they are JSON. Incorrect calls result in cryptic failure messages corresponding to jq failures due to bad assumptions, for example: jq: error (at <stdin>:1): null (null) and null (null) cannot be multiplied.

It isn't currently documented, but I'm guessing this is intended as the local starting point for running test containers. It'd be helpful to add some info here differentiating what's a parameter and what's a test parameter.

Use printf instead of echo

echo -n and echo -e behave inconsistently across shell environments; printf is more consistent.

This is particularly important for our .build/var/VARNAME targets. On Mac, the -n param ends up in the content of the variable value file, making it different from the actual variable value every time.
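
For example, for the variable value files (the Makefile variable name here is illustrative):

# Instead of:
echo -n "$(MY_VALUE)" > .build/var/VARNAME
# use:
printf '%s' "$(MY_VALUE)" > .build/var/VARNAME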

Use camelCase for property names

It's important to note that APP_INSTANCE_NAME and NAMESPACE are currently assumed. start.sh and deploy.sh need to extract these two special properties from the schema.

Schema Int cannot be encoded

When using an int parameter in the application's schema.yaml file, the deployer fails in expand_config.py:

AttributeError: 'int' object has no attribute 'encode'
deployer | May 2, 2018, 12:56:19 PM | f.write(v.encode(encoding))
deployer | May 2, 2018, 12:56:19 PM | File "/bin/expand_config.py", line 109, in write_values
deployer | May 2, 2018, 12:56:19 PM | write_values(values, args.final_values_dir, args.encoding)
deployer | May 2, 2018, 12:56:19 PM | File "/bin/expand_config.py", line 62, in main
deployer | May 2, 2018, 12:56:19 PM | main()
deployer | May 2, 2018, 12:56:19 PM | File "/bin/expand_config.py", line 113, in <module>
deployer | May 2, 2018, 12:56:19 PM | Traceback (most recent call last):
deployer | May 2, 2018, 12:56:18 PM | + /bin/expand_config.py

On commit 2d33089
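
A likely fix, sketched against the traceback above: coerce values to strings before encoding in write_values.

# /bin/expand_config.py, in write_values(); sketch of the change:
f.write(str(v).encode(encoding))  # str() handles ints as well as strings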

Auto-assign labels

Similar to setownership.py, we can have a utility to add app labels to resources provisioned by start.sh.

Add makefile improvements/checks

Add some sanity checks to the makefile. For example:

  • Check that the CRD was created: after

make install-crd

check that the CRD is ready:

$ kubectl get crd
NAME                                        AGE
applications.marketplace.cloud.google.com   6s

  • Sanity checks after GKE cluster creation (check the cluster version, and check whether --enable-legacy-authorization was enabled or not).

I'll file the PR for this, but I'm leaving this here as a placeholder for other improvements to bundle.
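
For instance, the CRD check could be as simple as the following (the CRD name is taken from the output above):

kubectl get crd applications.marketplace.cloud.google.com \
  || { echo "Application CRD is not installed"; exit 1; }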

Add name validation to APP_INSTANCE_NAME (to match UI)

I'm seeing errors like this:

The Application "foo-jxrYmyneo4jtest" is invalid: metadata.name: Invalid value: "foo-jxrYmyneo4jtest": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

This stems from the fact that it's natural for APP_INSTANCE_NAME to find its way into various templates and metadata.name fields inside Helm charts, but mixed case isn't permitted there.

It seems like an odd constraint, though. It's trivially fixable by using Helm templates to lowercase everything, but this results in a situation where a UI user enters something into a box and ends up with resources that don't exactly match their naming convention.

Will users be restricted from entering mixed-case values here, or are we OK with resources that in some cases can't match the names users provided and hope to identify things by? Restricting what the user can enter might be nice; otherwise, checking for mismatches between the entered name and what can appear in Kubernetes resources becomes a validation step that every offeror must do.
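
A pre-flight check could reuse the regex from the error message above (a sketch in shell; the variable name matches the property discussed):

dns1123='^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'
if ! echo "$APP_INSTANCE_NAME" | grep -Eq "$dns1123"; then
  echo "APP_INSTANCE_NAME must be a valid DNS-1123 subdomain" >&2
  exit 1
fi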

Do not use `eval` in manifest expansion

The concern is that eval runs on user-supplied input.

One approach is to have the Python script set up the environment and run envsubst within it, instead of outputting environment variable statements for eval.
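
A sketch of that approach (the function name is illustrative, not the current code): the Python script passes the values to envsubst through the child process environment, so nothing user-supplied is ever eval'd by a shell.

import subprocess

def expand_manifest(template_text, values):
  """Run envsubst with values supplied via the child environment (no eval)."""
  env = {k: str(v) for k, v in values.items()}
  proc = subprocess.Popen(['envsubst'], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, env=env)
  out, _ = proc.communicate(template_text.encode('utf-8'))
  return out.decode('utf-8')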
