Comments (17)
Hi!
I'm not entirely familiar with what a user pool is, so I'll have to look into that. What does `kubectl describe` say when you run it on the ngshare pod?
Also, for the installation instructions, there are a couple of `config.yaml` examples listed near the top, under the "Installing the Helm chart" section. That said, even if you don't have such a `config.yaml`, it should still have autogenerated a token and should still function. I'll look into the user pool issue.
from ngshare.
I can't find any information on "kubernetes user pools". Do you have any documentation you can link to that would help me understand them?
Hi! Thanks for getting back to me!
Sorry if my terminology is a little confusing, but that's how it's referred to in the Zero to JupyterHub documentation (scroll down to the bottom). Basically, you set up a pool of autoscaling nodes for users to utilize, and taint them so that users logging in get scheduled onto those nodes rather than onto any nodes designated for other use.
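For readers unfamiliar with the setup, creating such a tainted, autoscaling user pool looks roughly like this on GKE (a sketch based on the Z2JH guide linked above; cluster, pool, and sizing values are illustrative, so check the linked docs for the exact flags):

```shell
# Create an autoscaling node pool dedicated to user pods (names are examples).
# The taint keeps other workloads off these nodes; Z2JH's user pods carry a
# matching toleration, so only they get scheduled here.
gcloud container node-pools create user-pool \
  --cluster fall-jhub \
  --machine-type n1-standard-2 \
  --num-nodes 0 \
  --enable-autoscaling --min-nodes 0 --max-nodes 3 \
  --node-labels hub.jupyter.org/node-purpose=user \
  --node-taints hub.jupyter.org_dedicated=user:NoSchedule
```

Any pod without that toleration (such as the ngshare pod) will not be scheduled onto these nodes.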
`kubectl describe` returned this:
```
Name:           ngshare-fall-jhub-ngshare-6b7b864d7b-wflnn
Namespace:      fall-jhub-ngshare
Priority:       0
Node:
Labels:         app.kubernetes.io/instance=ngshare-fall-jhub-ngshare
                app.kubernetes.io/name=ngshare
                pod-template-hash=6b7b864d7b
Annotations:
Status:         Pending
IP:
IPs:
Controlled By:  ReplicaSet/ngshare-fall-jhub-ngshare-6b7b864d7b
Containers:
  ngshare:
    Image:      libretexts/ngshare:v0.5.3
    Port:       8080/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     100m
      memory:  128Mi
    Requests:
      cpu:     100m
      memory:  128Mi
    Liveness:   http-get http://:http/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:http/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      JUPYTERHUB_SERVICE_NAME:    ngshare
      JUPYTERHUB_API_TOKEN:       <set to the key 'token' in secret 'ngshare-token'>  Optional: false
      JUPYTERHUB_API_URL:         http://hub:8081/hub/api
      JUPYTERHUB_BASE_URL:        /
      JUPYTERHUB_SERVICE_PREFIX:  /services/ngshare/
      JUPYTERHUB_SERVICE_URL:     http://0.0.0.0:8080/
    Mounts:
      /srv/ngshare from ngshare-pvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jglpm (ro)
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  ngshare-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  ngshare-pvc
    ReadOnly:   false
  default-token-jglpm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jglpm
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason             Age                From                Message
  Normal   NotTriggerScaleUp  63s                cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 node(s) had taints that the pod didn't tolerate
  Warning  FailedScheduling   41s (x3 over 65s)  default-scheduler   pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
```
And thanks for that link, but I do have a `config.yaml`. The thing that's a little confusing is that the installation example here has two configs, while the link you sent doesn't refer to them that way, so I'm unsure how my `config.yaml` relates.
Thanks again!
I should maybe also mention that I get warnings saying "[Warning] pod has unbound immediate PersistentVolumeClaims", which I presume is because the ngshare pod isn't running and thus can't properly make the claims?
The `pod has unbound immediate PersistentVolumeClaims` issue is the problem here, since ngshare needs a piece of persistent storage for its database. Do you have a persistent volume provisioner set up correctly? How are Z2JH's persistent volume claims satisfied?
from ngshare.
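A quick way to check the provisioner side, for anyone following along (illustrative commands; the claim name and namespace match the `kubectl describe` output earlier in this thread):

```shell
# List storage classes; one should be marked "(default)" for dynamic
# provisioning to satisfy PVCs that don't name a class explicitly.
kubectl get storageclass

# See whether ngshare's claim is Bound or still Pending, and why.
kubectl get pvc -n fall-jhub-ngshare
kubectl describe pvc ngshare-pvc -n fall-jhub-ngshare
```

If the claim stays Pending, the events on the PVC usually say what the provisioner rejected.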
Ah great, that is something I'm unsure about.
The documentation for installation says that it "assumes you already have a Kubernetes cluster with a persistent volume provisioner (which should be the case if you run Z2JH)."
I've followed the Zero to JupyterHub path already but it doesn't really seem to address PVCs following the basic steps, so I must be missing something there.
Would this section satisfy that potentially?
https://zero-to-jupyterhub.readthedocs.io/en/latest/customizing/user-storage.html#google-cloud
Thanks again!
I think I know what's going on. Can you test by putting this in ngshare's `config.yaml`?

```yaml
pvc:
  accessModes:
    - ReadWriteOnce
```
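For anyone following along, applying a `config.yaml` change to an existing deployment is a helm upgrade along these lines (the release and chart names here are placeholders; use whatever you installed with):

```shell
# Re-apply the chart with the updated config.yaml (placeholder names).
helm upgrade <release-name> <chart> \
  -f config.yaml \
  --namespace fall-jhub-ngshare
```

Note that an already-bound PVC's access modes are immutable, so a full reinstall (as done below) is the cleaner test.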
Will do and will build from scratch again to make sure nothing is interfering!
I think the issue is that in the helm chart, the PVC has access mode `ReadWriteMany`, and GCEPersistentDisk volumes do not support that. There really isn't a strong reason why we need to mount it `ReadWriteMany` instead of `ReadWriteOnce` (once upon a time we discussed running multiple ngshare instances at once but gave up on that idea; this is probably a remnant from that). We're probably going to change the default to `ReadWriteOnce`, but setting it explicitly like this is a good idea in the meantime.
Also, I see you're having some issues with running ngshare in a separate namespace. If you install it in the same namespace as Z2JH, it should just work out of the box, so I'd recommend that. If you really want to install it in a separate namespace, let me know and I can explain the documentation.
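A sketch of how to confirm which access mode the claim actually ended up bound with (a standard kubectl jsonpath query; claim name and namespace as earlier in the thread):

```shell
# Print the access modes recorded on the bound claim,
# e.g. ["ReadWriteOnce"].
kubectl get pvc ngshare-pvc -n fall-jhub-ngshare \
  -o jsonpath='{.status.accessModes}'
```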
So the good news is that it's no longer stuck Pending! But unfortunately it's still not working 100%.
I ran `kubectl describe` on it and got these events:

```
Events:
  Type     Reason                  Age                    From                                                       Message
  Warning  FailedScheduling        2m27s (x2 over 2m30s)  default-scheduler                                          pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
  Normal   NotTriggerScaleUp       2m27s                  cluster-autoscaler                                         pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 node(s) had taints that the pod didn't tolerate
  Normal   Scheduled               2m25s                  default-scheduler                                          Successfully assigned fall-jhub-ngshare/ngshare-fall-jhub-ngshare-6b7b864d7b-qlhcq to gke-fall-jhub-ngshare-default-pool-8ea280cd-2bnt
  Normal   SuccessfulAttachVolume  2m19s                  attachdetach-controller                                    AttachVolume.Attach succeeded for volume "pvc-ad71382e-1e67-42e4-b1ae-12b4a6ecb9e8"
  Normal   Pulling                 2m9s                   kubelet, gke-fall-jhub-ngshare-default-pool-8ea280cd-2bnt  Pulling image "libretexts/ngshare:v0.5.3"
  Normal   Pulled                  2m2s                   kubelet, gke-fall-jhub-ngshare-default-pool-8ea280cd-2bnt  Successfully pulled image "libretexts/ngshare:v0.5.3"
  Normal   Created                 119s                   kubelet, gke-fall-jhub-ngshare-default-pool-8ea280cd-2bnt  Created container ngshare
  Normal   Started                 119s                   kubelet, gke-fall-jhub-ngshare-default-pool-8ea280cd-2bnt  Started container ngshare
  Warning  Unhealthy               116s                   kubelet, gke-fall-jhub-ngshare-default-pool-8ea280cd-2bnt  Readiness probe failed: Get http://10.0.1.10:8080/healthz: dial tcp 10.0.1.10:8080: connect: connection refused
  Warning  Unhealthy               116s                   kubelet, gke-fall-jhub-ngshare-default-pool-8ea280cd-2bnt  Liveness probe failed: Get http://10.0.1.10:8080/healthz: dial tcp 10.0.1.10:8080: connect: connection refused
```
And looking at the ngshare service returns "500: Internal Server Error"
Can you do a `kubectl logs -n fall-jhub-ngshare ngshare-fall-jhub-ngshare-6b7b864d7b-wflnn`? (Or whatever the pod name is now.)
Oh wait, I think I know why, since it's in a different namespace. What is your Z2JH namespace?
Add this to your ngshare `config.yaml`:

```yaml
ngshare:
  hub_api_url: http://hub.your-z2jh-namespace.svc.cluster.local:8081/hub/api
```

replacing `your-z2jh-namespace` with the namespace. Afterwards, you have to change the `nbgrader_config.py` that you put in your singleuser image based on the new helm output.
from ngshare.
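To sanity-check such a cross-namespace URL before rebuilding any images, one option is to confirm the hub service exists and resolves from inside the cluster (a sketch; `your-z2jh-namespace` is the same placeholder as in the config above):

```shell
# Confirm the hub service exists in the Z2JH namespace.
kubectl get svc hub -n your-z2jh-namespace

# Resolve the service's cluster DNS name from a throwaway pod.
kubectl run dns-test -it --rm --restart=Never --image=busybox:1.28 -- \
  nslookup hub.your-z2jh-namespace.svc.cluster.local
```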
Hi, sure thing, but it appears that it's now working thanks to your suggestions! When I go to the ngshare service I now get the "Hello, world! from ngshare" greeting!
It is in a namespace, yes; I've been doing everything in the fall-jhub-ngshare namespace. It's a bit messy with the naming.
Here's the full logs from the ngshare pod:

```
$ kubectl logs ngshare-fall-jhub-ngshare-6b7b864d7b-qlhcq --namespace $NAMESPACE
INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> aa00db20c10a, Init
INFO  [alembic.runtime.migration] Running upgrade aa00db20c10a -> 1921a169739b, Add file size
```
I actually already did change the namespace you mentioned. The tricky part is that I have to do it in advance, because I'm not building it locally but rather pushing to Docker Hub (kind of convoluted), but it seems to be working!
Thanks so much for your help, Kevin! I really appreciate it and am excited to get going with ngshare this semester!
No problem! Hopefully it works properly. If not, feel free to open an issue here at any time.
Hello. I'm deploying ngshare in the same namespace as the hub, but I'm still facing the failed readiness probe issue.
Here is the output of my `kubectl describe` command (the pod shows as Running). I'm using Azure Kubernetes Service.
```
Events:
  Type     Reason                  Age    From                     Message
  Warning  FailedScheduling        3m18s  default-scheduler        0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling        3m18s  default-scheduler        0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled               3m16s  default-scheduler        Successfully assigned dev/ngshare-67d7587685-bmnht to aks-nodepool1-16227594-vmss000005
  Normal   SuccessfulAttachVolume  2m56s  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-8dd1e83a-c2b6-48d7-8364-de80e316ed69"
  Normal   Pulled                  2m43s  kubelet                  Container image "libretexts/ngshare:v0.5.3" already present on machine
  Normal   Created                 2m43s  kubelet                  Created container ngshare
  Normal   Started                 2m43s  kubelet                  Started container ngshare
  Warning  Unhealthy               2m43s  kubelet                  Readiness probe failed: Get "http://10.240.0.40:8080/healthz": dial tcp 10.240.0.40:8080: connect: connection refused
```