
openshift-acct-mgt's People

Contributors

knikolla, larsks, ljmcgann, rob-baron


openshift-acct-mgt's Issues

identities is an undefined variable

In the code block below, `identities` (in the expression `and user[identities]`) is an undefined variable; I'm assuming this should be the string `"identities"`.

def useridentitymapping_exists(self, user_name, id_provider, id_user):
        user = self.get_user(user_name)
        if (
            not (user.status_code == 200 or user.status_code == 201)
            and user[identities]
        ):
            id_str = "{}:{}".format(id_provider, id_user)
            for identity in user[identities]:
                if identity == id_str:
                    return True
        return False
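
A minimal sketch of the fix suggested above, with the undefined name replaced by the string `"identities"` (the rest of the quoted logic is left as-is):

def useridentitymapping_exists(self, user_name, id_provider, id_user):
    user = self.get_user(user_name)
    if (
        not (user.status_code == 200 or user.status_code == 201)
        and user["identities"]
    ):
        id_str = "{}:{}".format(id_provider, id_user)
        for identity in user["identities"]:
            if identity == id_str:
                return True
    return False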

Inconsistent use of 'status' vs 'status_code' results in errors

Code that calls into moc_openshift.py expects the returned response to have a status_code attribute. E.g., in create_moc_rolebindings in wsgi.py:

@APP.route("/users/<user_name>/projects/<project_name>/roles/<role>", methods=["PUT"])
@AUTH.login_required
def create_moc_rolebindings(project_name, user_name, role):
    # role can be one of Admin, Member, Reader
    shift = get_openshift()
    result = shift.update_user_role_project(project_name, user_name, role, "add")
    if result.status_code == 200 or result.status_code == 201:
        return Response(
            response=result.response, status=200, mimetype="application/json",
        )

But if you look at the update_user_role_project function in moc_openshift.py, the return value has no status_code attribute:

[...]
            return Response(
                response=json.dumps(
                    {
                        "msg": f"Error: Invalid role, {role} is not one of 'admin', 'member' or 'reader'"
                    }
                ),
                status=400,
                mimetype="application/json",
            )
[...]
                    return Response(
                        response=json.dumps(
                            {
                                "msg": f"rolebinding created ({user},{project_name},{role})"
                            }
                        ),
                        status=200,
                        mimetype="application/json",
                    )

Etc.

There is only a status attribute. This will result in an AttributeError exception when code attempts to access the missing status_code attribute.
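
Until the call sites and return values are made consistent, one possible stopgap is a small helper along these lines (the helper name is illustrative, not something in the codebase):

def response_status(result):
    """Return the numeric status of a response-like object, whether it
    exposes status_code (an int) or only status (an int or a string such
    as "400 BAD REQUEST")."""
    code = getattr(result, "status_code", None)
    if code is None:
        code = getattr(result, "status", None)
    if code is None:
        return None
    return int(str(code).split()[0])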

Investigate a possible implementation of the SCIM API

SCIM (System for Cross-Domain Identity Management, http://www.simplecloud.info) is an API standard for managing user identities. It provides definitions for Users and Groups under the core schema, and can be extended to support other resource types (e.g., projects).

Why?

It would be beneficial for us to make the API for the microservice conform to the SCIM standard. This would allow interoperability with other services that implement the API.

There are two ways this interoperability could prove beneficial.

ColdFront --> (other) SCIM service
This could allow ColdFront to provision resources in services that implement the API (AWS, Slack, GitHub, Azure, etc) without having to implement a new driver or microservice for each. ColdFront could investigate what ResourceTypes are advertised by the interface and provide appropriate fallbacks depending on the level of API implementation. The devil is in the details, but at the very least it would be able to provision users into that service.

(other) SCIM client --> OpenShift Microservice
This could allow other SCIM clients (identity providers, e.g., Active Directory) to provision users into OpenShift. This would make this microservice immediately much more useful than just for us, potentially bringing in other contributors.

Drawbacks

As I mentioned above, the devil is in the details: the standard defines Users and Groups as part of the core schema, which pretty much everyone implements, but the rest is left to extensions to define, limiting out-of-the-box interoperability to pretty much just users and groups.
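
For illustration, a minimal SCIM 2.0 core-schema User resource (per RFC 7643; the attribute values here are made up) looks roughly like this, expressed as the Python dict the microservice would serialize:

user_resource = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "testuser",
    "active": True,
    "emails": [{"value": "testuser@example.org", "primary": True}],
}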

fix types returned by update and create role bindings

Fix the types returned by the update and create rolebinding functions, as they currently return more than just Response.

When defining the abstract methods, instead of specifying them as:

    class MocOpenShift(metaclass=abc.ABCMeta):
        @abc.abstractmethod
        def create_rolebindings(self, project_name, user_name, role) -> Response:
            return Response()

        @abc.abstractmethod
        def update_rolebindings(self, project_name, role, rolebindings_json) -> Response:
            return Response()

Had to use:

    class MocOpenShift(metaclass=abc.ABCMeta):
        @abc.abstractmethod
        def create_rolebindings(self, project_name, user_name, role):
            return 

        @abc.abstractmethod
        def update_rolebindings(self, project_name, role, rolebindings_json):
            return

Will also fix (#10)

Managing BestEffort pods

When first starting out with Kubernetes, people are probably going to be submitting workloads that end up in the "BestEffort" quota class, such as:

apiVersion: v1
kind: Pod
metadata:
  name: sleeper-besteffort
  namespace: default
spec:
  containers:
  - args:
    - sleep
    - "1800"
    image: docker.io/alpine:latest
    name: sleeper

This makes it difficult for us to effectively manage resources; the only quota that can be set for BestEffort class pods is a limit on the number of pods. If we want to better manage resources like memory we would like to prevent people from creating BestEffort pods.

There are two ways to do this:

  • We can just set the quota for BestEffort pods to 0. Probably solved! But now when someone attempts to submit a simple workload it fails, with a message like:

    Error from server (Forbidden): error when creating "STDIN": pods "sleeper-besteffort" is forbidden: exceeded quota: besteffort-pods, requested: pods=1, used: pods=0, limited: pods=0

  • Alternatively, we can add a LimitRange to the project, which would automatically apply limits (and requests) to pods that don't provide them explicitly, resulting in those pods having a QoS class of Burstable or Guaranteed (see the example manifest below).

I think the second option is going to be friendlier to novice users, since it allows them to copy-and-paste manifests that they have found in online documentation and have them work as expected.
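
A minimal sketch of such a LimitRange (the object name and the default values below are placeholders, not recommendations):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-limits
  namespace: default
spec:
  limits:
  - type: Container
    default:            # applied as limits to containers that specify none
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # applied as requests to containers that specify none
      cpu: 250m
      memory: 256Mi

With defaults like these in place, the sleeper-besteffort pod above would be admitted with a QoS class of Burstable rather than BestEffort.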

refactor MocOpenShift and MocOpenShiftv4x

Consider refactoring/splitting them into smaller classes; for now I am disabling the pylint error (R0904: Too many public methods (22/20) (too-many-public-methods)).

Automate running tests locally with OpenShift

This is a subtask of CCI-MOC/adjutant-moc#15. Before we can integrate the testing into the GitHub CI, we should automate the process of setting up the appropriate environment locally. Then we can think of having that be done through GitHub.

  • Install CRC (basic commands sketched after this list)
  • Script the above
  • Deploy CCI-MOC/openshift-acct-mgt
  • Script the above
  • Run tests
  • Script the above
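
For the CRC installation step, the usual workflow looks roughly like this (a sketch assuming the stock CRC tooling; cluster sizing and pull-secret handling are omitted):

$ crc setup              # one-time host configuration
$ crc start              # start the local OpenShift cluster
$ eval $(crc oc-env)     # put the bundled oc client on PATH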

Improvements to developer documentation

  1. How to do a production deployment

  2. How to do a blue/green deployment to a production environment in order to test the new deployment

  3. How two or more developers can share a CRC instance and/or staging environment

Quota operations claim to return JSON but do not

Both DELETE /projects/<project_name>/quota and PUT /projects/<project_name>/quota claim to return JSON data (the response has Content-Type: application/json), but they are in fact returning plain text:

$ curl -k  -u  admin:pass https://onboarding-onboarding.apps-crc.testing/projects/test-project/quota  -X DELETE
All quota from test-project successfully deleted

And:

$ curl -k  -u  admin:pass https://onboarding-onboarding.apps-crc.testing/projects/test-project/quota  -X PUT -d '{"Quota": {"QuotaMultiplier": 1}}'
MOC Quotas Replaced
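
A minimal sketch of what the DELETE handler could return instead, following the json.dumps/Response pattern already used elsewhere in wsgi.py (the message key is illustrative):

import json

from flask import Response


def quota_deleted_response(project_name):
    # Wrap the confirmation message in JSON so the body matches the
    # advertised Content-Type.
    return Response(
        response=json.dumps(
            {"msg": f"All quota from {project_name} successfully deleted"}
        ),
        status=200,
        mimetype="application/json",
    )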

Report Microshift issue upstream

When using the Microshift image on Ubuntu, the container needs to be started twice before routes start working. We're hitting this issue in our CI. Collect some data and file an upstream bug.

Check for the existence of a configmap before fetching data from it.

I am currently using the API to fetch the needed configmaps, as this makes the deployment easier. I have noticed that if the configmaps do not exist, the resulting errors cause the service to fail. I would prefer the service to recover from this and simply wait until the configmaps are loaded; that way, a meaningful error message could be provided when functions relying on those configmaps are executed.
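
A minimal sketch of one way to do this, assuming a requests-style client talking to the Kubernetes API (the function name and client object are illustrative, not the service's actual code):

def get_configmap_data(client, namespace, name):
    """Return the ConfigMap's data, or None if it has not been created yet."""
    resp = client.get(f"/api/v1/namespaces/{namespace}/configmaps/{name}")
    if resp.status_code == 404:
        # Missing ConfigMap: let the caller produce a meaningful error
        # message instead of crashing the whole service.
        return None
    resp.raise_for_status()
    return resp.json().get("data", {})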

Support for multiple identities may eventually be necessary

There may be a need to support multiple identities from different identity providers in the future.

Currently, we only support one identity per user. This was not always the case: on OpenShift 3.x we had several users with multiple identities, and sometimes multiple identity providers, attached to one account. It was often the case that a particular user wouldn't know which SSO identity they were logged in as, so they ended up with multiple identities.

Supporting multiple identities with a single identity provider (the one named in the ConfigMap) would be rather straightforward to implement.

Support for multiple identity providers is a bit more complicated as we have effectively hidden this from the end user.

add tests that only test individual test cases

Add tests that test each test case, as opposed to just having functions test all of the test cases for a subsection.

For example (a rough sketch follows the list):
1) one function that tests you cannot delete a non-existing namespace
2) one function that tests you can delete an existing namespace
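
A rough sketch of what such per-case tests could look like against the service's project endpoints (the endpoint paths, fixture names, and expected status codes here are assumptions for illustration, not the existing test suite):

import requests


def test_delete_existing_project(api_url, auth):
    # Create the project first, then delete it and expect success.
    requests.put(f"{api_url}/projects/test-project", auth=auth, verify=False)
    resp = requests.delete(f"{api_url}/projects/test-project", auth=auth, verify=False)
    assert resp.status_code in (200, 201)


def test_delete_nonexistent_project(api_url, auth):
    # Deleting a project that does not exist should be rejected.
    resp = requests.delete(f"{api_url}/projects/does-not-exist", auth=auth, verify=False)
    assert resp.status_code >= 400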

Deploy openshift-acct-mgt on ocp-staging

We need to deploy openshift-acct-mgt on ocp-staging so we can do some final testing and iron out any possible showstopper bugs before going live.

EDIT: renamed ocp-prod to ocp-staging, since they're similar enough and that was the direction I went with.

Improve openshift-acct-mgt documentation

The main README.md should include a reference to the files in ./tools/README.md.
./docs/develop.md should indicate how to deploy CRC containers and how to build and test the microservice.

Don't try parsing default `oc` output

In reviewing #2, I noticed that in many places in the code, you're parsing the output of oc using regular expression matches. This is fragile and prone to error in the event that the output format changes, or there is unexpected output, etc.

The oc command can generate JSON output with the -o json option, which gives you something that is designed to be machine-parseable. Consider, for example:

#!/usr/bin/python3


import json
import subprocess
import time


def wait_until_containers_are_running(namespace, podname):
    '''Wait until all containers in a pod are running.'''

    while True:
        out = subprocess.check_output([
            'oc', '-n', namespace, '-o', 'json',
            'get', 'pod/{}'.format(podname),
        ])

        pod = json.loads(out)

        if all(status['state'].get('running')
               for status in pod['status']['containerStatuses']):
            break

        time.sleep(5)

This function will wait until all containers in a pod are running. It will raise an exception if the named pod does not exist.

You could apply similar logic to all instances in the code where you are checking the output of the oc command.
