cci-moc / openshift-acct-mgt
REST API for managing Users, Namespaces and ResourceQuotas on OpenShift
openshift-acct-mgt/moc_openshift.py
Line 96 in ea8a4cd
In the code block below, identities is an undefined variable; I'm assuming this should be the string "identities".
def useridentitymapping_exists(self, user_name, id_provider, id_user):
    user = self.get_user(user_name)
    if (
        not (user.status_code == 200 or user.status_code == 201)
        and user[identities]
    ):
        id_str = "{}:{}".format(id_provider, id_user)
        for identity in user[identities]:
            if identity == id_str:
                return True
    return False
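If that assumption is right, a corrected sketch could look like this (simplified to take the parsed user JSON directly so the snippet is self-contained; the real method fetches it via self.get_user):

```python
def useridentitymapping_exists(user, id_provider, id_user):
    """Return True if `user` already has the identity "<provider>:<name>".

    Simplified sketch: `user` is assumed to be the parsed JSON body of the
    user object (the real method fetches it via self.get_user), and the key
    is the string "identities" rather than an undefined variable.
    """
    id_str = "{}:{}".format(id_provider, id_user)
    return id_str in user.get("identities", [])
```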
Since the identity provider is currently stored in the ConfigMap, it would be helpful to tell the end user what the identity provider is, mainly in case it is missing from the ConfigMap.
Client code of moc_openshift.py expects a response to have a status_code attribute. For example, in wsgi.py, in create_moc_rolebindings:
@APP.route("/users/<user_name>/projects/<project_name>/roles/<role>", methods=["PUT"])
@AUTH.login_required
def create_moc_rolebindings(project_name, user_name, role):
    # role can be one of Admin, Member, Reader
    shift = get_openshift()
    result = shift.update_user_role_project(project_name, user_name, role, "add")
    if result.status_code == 200 or result.status_code == 201:
        return Response(
            response=result.response, status=200, mimetype="application/json",
        )
But if you look at the update_user_role_project function in moc_openshift.py, the return value has no status_code attribute:
[...]
return Response(
    response=json.dumps(
        {
            "msg": f"Error: Invalid role, {role} is not one of 'admin', 'member' or 'reader'"
        }
    ),
    status=400,
    mimetype="application/json",
)
[...]
return Response(
    response=json.dumps(
        {"msg": f"rolebinding created ({user},{project_name},{role})"}
    ),
    status=200,
    mimetype="application/json",
)
Etc.
There is only a status attribute. This will result in an AttributeError exception when code attempts to access the missing status_code attribute.
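One possible way to avoid the mismatch (a sketch, not the project's actual fix) is to normalize all internal return values into a small result type that always carries status_code; the name and shape below are assumptions:

```python
from dataclasses import dataclass


@dataclass
class ApiResult:
    """Uniform return type so callers can always check status_code.

    A sketch only; the class name and fields are assumptions, not the
    project's actual code.
    """
    status_code: int
    response: str = ""

    @property
    def ok(self):
        return self.status_code in (200, 201)
```

Every internal method would then build and return an ApiResult, and the wsgi handlers would translate it into a flask.Response once, at the edge.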
SCIM (System for Cross-Domain Identity Management, http://www.simplecloud.info) is an API standard for managing user identities. It provides definitions for Users and Groups under the core schema, and can be extended to support other resource types (e.g. projects).
It would be beneficial for us to make the API for the microservice conform to the SCIM standard. This would allow interoperability with other services that implement the API.
There are two ways this interoperability could prove beneficial.
ColdFront --> (other) SCIM service
This could allow ColdFront to provision resources in services that implement the API (AWS, Slack, GitHub, Azure, etc) without having to implement a new driver or microservice for each. ColdFront could investigate what ResourceTypes are advertised by the interface and provide appropriate fallbacks depending on the level of API implementation. The devil is in the details, but at the very least it would be able to provision users into that service.
(other) SCIM client --> OpenShift Microservice
This could allow other SCIM clients (identity providers, e.g. Active Directory) to provision users into OpenShift. This would make this microservice immediately much more useful beyond just our use case, potentially bringing in other contributors.
As I mentioned above, the devil is in the details and while the API provides Users and Groups as part of the core schema standard which pretty much everyone implements, the rest is up to the extensions to define, limiting out of the box interoperability to pretty much just that.
Fix the types returned by the update and create rolebinding functions, as they return more than Response.
When defining the abstract methods, instead of specifying them as:
class MocOpenShift(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def create_rolebindings(self, project_name, user_name, role) -> Response:
        return Response()

    @abc.abstractmethod
    def update_rolebindings(self, project_name, role, rolebindings_json) -> Response:
        return Response()
Had to use:
class MocOpenShift(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def create_rolebindings(self, project_name, user_name, role):
        return

    @abc.abstractmethod
    def update_rolebindings(self, project_name, role, rolebindings_json):
        return
Will also fix (#10)
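For what it's worth, once the implementations consistently return Response, the annotated form is valid Python: the return annotation goes on the def line, not on the decorator. A self-contained sketch (with a stand-in Response class so the snippet runs without Flask):

```python
import abc


class Response:
    """Stand-in for flask.Response, so this sketch runs without Flask."""


class MocOpenShift(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def create_rolebindings(self, project_name, user_name, role) -> Response:
        """Implementations must return a Response."""

    @abc.abstractmethod
    def update_rolebindings(self, project_name, role, rolebindings_json) -> Response:
        """Implementations must return a Response."""
```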
When first starting out with Kubernetes, people are probably going to be submitting workloads that end up in the "BestEffort" quota class, such as:
apiVersion: v1
kind: Pod
metadata:
  name: sleeper-besteffort
  namespace: default
spec:
  containers:
    - args:
        - sleep
        - "1800"
      image: docker.io/alpine:latest
      name: sleeper
This makes it difficult for us to effectively manage resources; the only quota that can be set for BestEffort class pods is a limit on the number of pods. If we want to better manage resources like memory we would like to prevent people from creating BestEffort pods.
There are two ways to do this:
We can just set the quota for BestEffort pods to 0. Probably solved! But now when someone attempts to submit a simple workload it fails, with a message like:
Error from server (Forbidden): error when creating "STDIN": pods "sleeper-besteffort" is forbidden: exceeded quota: besteffort-pods, requested: pods=1, used: pods=0, limited: pods=0
Alternately, we can add a LimitRange to the project, which would automatically apply limits (and requests) to pods that don't provide them explicitly, resulting in it having a qos class of Burstable or Guaranteed.
I think the second option is going to be friendlier to novice users, since it allows them to copy-and-paste manifests that they have found in online documentation and have them work as expected.
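A sketch of what such a LimitRange might look like (the object name and the specific CPU/memory values are illustrative assumptions, not the project's actual settings):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits        # name is an assumption
  namespace: default
spec:
  limits:
    - type: Container
      default:                # applied as limits when a container omits them
        cpu: 500m
        memory: 256Mi
      defaultRequest:         # applied as requests when a container omits them
        cpu: 100m
        memory: 128Mi
```

With this in place, the besteffort pod above would be admitted with default requests and limits, landing in the Burstable QoS class instead of being rejected.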
Consider refactoring/splitting them into smaller classes; for now I am disabling the pylint error (R0904: Too many public methods (22/20) (too-many-public-methods)).
This is a subtask of CCI-MOC/adjutant-moc#15. Before we can integrate the testing into the GitHub CI, we should automate the process of setting up the appropriate environment locally. Then we can think of having that be done through GitHub.
CCI-MOC/openshift-acct-mgt
SSIA
Get the identity provider from a ConfigMap to eliminate it being hard coded.
How to do a production deployment
how to do a blue/green deployment to a production environment in order to test the new deployment
how 2 or more developers can share a CRC instance and/or staging.
Integrate quota support
test quota support
Definition of Done:
Code for the microservice and for testing the microservice is checked into GitHub and merged with master.
Based on a comment on #79:
You could also eliminate the "base" and "coefficient" in all of these, as they are used for the quota multiplier.
In
openshift-acct-mgt/moc_openshift.py
Line 632 in b1cec97
the variable overall_status_code is used before it has been defined.

Both DELETE /projects/<project_name>/quota and PUT /projects/<project_name>/quota claim to return JSON data (the response has Content-Type: application/json), but they are in fact returning plain text:
$ curl -k -u admin:pass https://onboarding-onboarding.apps-crc.testing/projects/test-project/quota -X DELETE
All quota from test-project successfully deleted
And:
$ curl -k -u admin:pass https://onboarding-onboarding.apps-crc.testing/projects/test-project/quota -X PUT -d '{"Quota": {"QuotaMultiplier": 1}}'
MOC Quotas Replaced
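A minimal sketch of how these handlers could return actual JSON bodies so they match the declared Content-Type (the helper name is an assumption, not the project's code):

```python
import json


def json_message(msg, status=200):
    """Serialize a message into a JSON body, returning (body, status).

    A sketch only: the real handlers would pass `body` to flask.Response
    with mimetype="application/json" so the body matches the Content-Type.
    """
    body = json.dumps({"msg": msg})
    return body, status


# Hypothetical usage for the DELETE handler:
body, status = json_message("All quota from test-project successfully deleted")
```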
When using the Microshift image on Ubuntu, the container must be started twice before routes start working. We're hitting this issue in our CI. We should collect some data and file an upstream bug.
Rolebindings bind a role defined in a roleRef element to subjects in a subjects element (which may be users, groups, serviceaccounts, etc.). The userNames field is deprecated and should not be used. See e.g. https://docs.openshift.com/container-platform/4.9/rest_api/role_apis/rolebinding-authorization-openshift-io-v1.html#specification.
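For illustration, a RoleBinding using roleRef and subjects might look like this (the binding name, namespace, role, and user are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: member-binding        # placeholder name
  namespace: test-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: alice               # placeholder user
```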
I am currently using the API to fetch the needed ConfigMaps, as this makes deployment easier. I have noticed that if the ConfigMaps do not exist, they tend to cause errors that make the service fail. I would prefer that the service be able to recover from this and simply wait until the ConfigMaps are loaded; that way, a meaningful error message could be provided when functions relying on those ConfigMaps are executed.
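One possible pattern (a sketch, not the service's actual code): defer ConfigMap reads until a value is needed, and surface a clear, recoverable error instead of crashing at startup. The loader callable below is an assumption standing in for whatever fetches the ConfigMap via the Kubernetes API.

```python
class ConfigMapError(Exception):
    """Raised when a required ConfigMap is not available yet."""


def make_config_getter(loader):
    """Wrap a ConfigMap loader so failures surface as a recoverable error.

    `loader` is any callable returning the ConfigMap's data as a dict (for
    example via the Kubernetes API); its name and shape are assumptions.
    Reads are deferred until a key is needed, so the service can start even
    when the ConfigMap does not exist yet.
    """
    cache = {}

    def get(key):
        if key not in cache:
            try:
                cache.update(loader())
            except Exception as exc:
                raise ConfigMapError(
                    "ConfigMap not loaded yet; retrying on next call"
                ) from exc
        if key not in cache:
            raise ConfigMapError(f"Missing key {key!r} in ConfigMap")
        return cache[key]

    return get
```

Handlers that depend on ConfigMap values could catch ConfigMapError and return a meaningful error response rather than letting the service die.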
There may be a need to support multiple identities from different identity providers in the future.
Currently, we support only one identity per user. This was not always the case: we had several users on OpenShift 3.x with multiple identities, sometimes with multiple identity providers attached to one account. It was often the case that a particular user wouldn't know which SSO identity they were logged in as, so they ended up with multiple identities.
Supporting multiple identities with the one identity provider in the ConfigMap would be fairly straightforward to implement.
Support for multiple identity providers is a bit more complicated as we have effectively hidden this from the end user.
Currently the test code tests the microservice using basic authentication; however, the kustomize YAML build does not enable this, and nothing in the instructions outside of a comment in acct-mgt-test.py indicates it.
Add tests that cover each test case individually, as opposed to having one function test all of the cases for a subsection.
For example:
1) one function that tests you cannot delete a non-existing namespace
2) one function that tests you can delete an existing namespace
The logic could use cleaning up. It works, but it is not the easiest to read.
Definition of done is that the microservice is in use.
We need to deploy openshift-acct-mgt on ocp-staging so we can do some final testing and iron out any possible showstopper bugs before going live.
EDIT: renamed ocp-prod to ocp-staging, since they're similar enough and that was the direction I went with.
The main README.md should include a reference to the files in ./tools/README.md
./docs/develop.md, to indicate how to deploy CRC containers and how to build and test the microservice
In reviewing #2, I noticed that in many places in the code, you're parsing the output of oc using regular expression matches. This is fragile and prone to error in the event that the output format changes, or there is unexpected output, etc.
The oc command can generate JSON output with the -o json option, which gives you something that is designed to be machine parseable. Consider for example:
#!/usr/bin/python3

import json
import subprocess
import time


def wait_until_containers_are_running(namespace, podname):
    '''Check that all containers in a pod are running.'''
    while True:
        out = subprocess.check_output([
            'oc', '-n', namespace, '-o', 'json',
            'get', 'pod/{}'.format(podname),
        ])
        pod = json.loads(out)
        if all(status['state'].get('running')
               for status in pod['status']['containerStatuses']):
            break
        time.sleep(5)
This command will wait until all containers in a pod are running. It will raise an exception if the named pod does not exist.
You could apply similar logic to all instances in the code where you are checking the output of the oc
command.