Comments (2)
Initial run:
Summarizing 38 Failures:
[Fail] Deployment [It] deployment should create new pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:72
[Fail] Kubectl client Simple pod [It] should support exec
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util.go:1138
[Fail] Horizontal pod autoscaling [It] [Autoscaling Suite] should scale from 1 pod to 3 pods and from 3 to 5 (scale resource: CPU)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:224
[Fail] SchedulerPredicates [BeforeEach] validates MaxPods limit number of pods that are allowed to run
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:172
[Fail] Kubectl client Simple pod [It] should support exec through an HTTP proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:207
[Fail] Pods [It] should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:92
[Fail] Services [It] should be able to create a functioning NodePort service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1176
[Fail] kube-ui [It] should check that the kube-ui instance is alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:57
[Fail] PreStop [It] should call prestop when killing a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:141
[Fail] SchedulerPredicates [BeforeEach] validates resource limits of pods that are allowed to run [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:172
[Fail] Services [It] should be able to up and down services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:232
[Fail] Examples e2e [Example]ClusterDns [It] should create pod that uses dns [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util.go:1138
[Fail] SchedulerPredicates [BeforeEach] validates that NodeSelector is respected [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:172
[Fail] KubeletManagedEtcHosts [It] should test kubelet managed /etc/hosts file
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:137
[Fail] KubeProxy [It] should test kube-proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:255
[Fail] ServiceLoadBalancer [It] should support simple GET on Ingress ips
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/serviceloadbalancers.go:276
[Fail] Horizontal pod autoscaling [It] [Autoscaling Suite] should scale from 5 pods to 3 pods and from 3 to 1 (scale resource: CPU)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:224
[Fail] Reboot [It] each node by ordering clean reboot and ensure they function upon restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:119
[Fail] Reboot [It] each node by ordering unclean reboot and ensure they function upon restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:119
[Fail] Services [It] should release NodePorts on delete
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:754
[Fail] Port forwarding With a server that expects a client request [It] should support a client that connects, sends no data, and disconnects
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:123
[Fail] PrivilegedPod [It] should test privileged pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:178
[Fail] Deployment [It] deployment should delete old pods and create new ones
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:166
[Fail] Port forwarding With a server that expects no client request [It] should support a client that connects, sends no data, and disconnects
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:232
[Fail] SSH [It] should SSH to all nodes and run commands
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:71
[Fail] Port forwarding With a server that expects a client request [It] should support a client that connects, sends data, and disconnects
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:195
[Fail] Kubectl client Simple pod [It] should support port-forward
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:429
[Panic!] Resource usage of system containers [It] should not exceed expected amount.
/usr/src/go/src/runtime/panic.go:387
[Fail] Services [It] should serve identically named services in different namespaces on different load-balancers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1176
[Fail] Reboot [It] each node by switching off the network interface and ensure they function upon switch on
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:119
[Fail] Reboot [It] each node by triggering kernel panic and ensure they function upon restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:119
[Fail] Deployment [It] deployment should scale up and down in the right order
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:247
[Fail] Reboot [It] each node by dropping all inbound packets for a while and ensure they function afterwards
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:119
[Fail] Services [It] should be able to change the type and nodeport settings of a service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1176
[Fail] Kubelet experimental resource usage tracking [It] over 30m0s with 50 pods per node.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:66
[Fail] Reboot [It] each node by dropping all outbound packets for a while and ensure they function afterwards
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:119
[Fail] DNS [It] should provide DNS for services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:237
[Fail] DNS [It] should provide DNS for the cluster
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:199
Ran 121 of 175 Specs in 9145.717 seconds
FAIL! -- 83 Passed | 38 Failed | 3 Pending | 51 Skipped --- FAIL: TestE2E (9145.81s)
from kubernetes-anywhere.
Second run, 36 failed and 85 passed:
Summarizing 36 Failures:
[Fail] Port forwarding With a server that expects a client request [It] should support a client that connects, sends data, and disconnects
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:195
[Fail] KubeletManagedEtcHosts [It] should test kubelet managed /etc/hosts file
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:137
[Fail] Port forwarding With a server that expects a client request [It] should support a client that connects, sends no data, and disconnects
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:123
[Fail] Horizontal pod autoscaling [It] [Autoscaling Suite] should scale from 5 pods to 3 pods and from 3 to 1 (scale resource: CPU)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:224
[Fail] Services [It] should be able to change the type and nodeport settings of a service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1176
[Fail] Services [It] should release NodePorts on delete
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:754
[Fail] Port forwarding With a server that expects no client request [It] should support a client that connects, sends no data, and disconnects
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:232
[Fail] Kubectl client Simple pod [It] should support port-forward
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:429
[Fail] Services [It] should be able to up and down services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:232
[Panic!] Resource usage of system containers [It] should not exceed expected amount.
/usr/src/go/src/runtime/panic.go:387
[Fail] Reboot [It] each node by ordering clean reboot and ensure they function upon restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:119
[Fail] Deployment [It] deployment should delete old pods and create new ones
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:166
[Fail] kube-ui [It] should check that the kube-ui instance is alive
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube-ui.go:57
[Fail] DNS [It] should provide DNS for services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:237
[Fail] Reboot [It] each node by ordering unclean reboot and ensure they function upon restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:119
[Fail] Kubectl client Simple pod [It] should support exec through an HTTP proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:207
[Fail] Pods [It] should be restarted with a docker exec "cat /tmp/health" liveness probe [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:92
[Fail] Kubelet experimental resource usage tracking [It] over 30m0s with 50 pods per node.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:66
[Fail] Examples e2e [Example]ClusterDns [It] should create pod that uses dns [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util.go:1138
[Fail] SSH [It] should SSH to all nodes and run commands
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ssh.go:71
[Fail] Reboot [It] each node by dropping all outbound packets for a while and ensure they function afterwards
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:119
[Fail] Horizontal pod autoscaling [It] [Autoscaling Suite] should scale from 1 pod to 3 pods and from 3 to 5 (scale resource: CPU)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:224
[Fail] Services [It] should serve identically named services in different namespaces on different load-balancers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1176
[Fail] Reboot [It] each node by triggering kernel panic and ensure they function upon restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:119
[Fail] ServiceLoadBalancer [It] should support simple GET on Ingress ips
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/serviceloadbalancers.go:276
[Fail] PrivilegedPod [It] should test privileged pod
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:178
[Fail] Services [It] should be able to create a functioning NodePort service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1176
[Fail] Deployment [It] deployment should scale up and down in the right order
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:247
[Fail] Kubectl client Guestbook application [It] should create and stop a working application [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1010
[Fail] Deployment [It] deployment should create new pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:72
[Fail] Reboot [It] each node by switching off the network interface and ensure they function upon switch on
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:119
[Fail] KubeProxy [It] should test kube-proxy
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:255
[Fail] Reboot [It] each node by dropping all inbound packets for a while and ensure they function afterwards
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/reboot.go:119
[Fail] Kubectl client Simple pod [It] should support exec
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/util.go:1138
[Fail] DNS [It] should provide DNS for the cluster
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:199
[Fail] PreStop [It] should call prestop when killing a pod [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:141
Ran 121 of 175 Specs in 9942.374 seconds
FAIL! -- 85 Passed | 36 Failed | 3 Pending | 51 Skipped --- FAIL: TestE2E (9942.40s)
from kubernetes-anywhere.
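Note: any individual failing spec above can normally be re-run in isolation with Ginkgo's focus regex. Assuming the standard hack/e2e.go runner from the main Kubernetes repo is being used (that is an assumption about this setup, not shown in the logs), a focused re-run looks roughly like:

    go run hack/e2e.go -v --test --test_args="--ginkgo.focus=NodePort"

The value passed to --ginkgo.focus is a regular expression matched against the full spec name, so a broader pattern (for example Reboot|Services) re-runs a whole group of the failures listed above.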