
converged-edge-experience-kits's People

Contributors

amr-mokhtar, cezaryxmarczak, cjnolan, cuizhaoyue, damiankopyto, gillukax, groclawski, i-karina, i-kwilk, ipatrykx, jakubrym, jiangzhg, jkossak, kamilpoleszczuk, konradja, lukaszxlesiecki, mariuszszczepanik, mateusz-szelest, mcping, michalxkochel, mmx111, patrykdiak, patrykxmatuszak, sheminx, skonefal, stephenjameson, sunhui2980, sushillakra, tomaszwesolowski, tongtongxiaopeng1979


converged-edge-experience-kits's Issues

Error in cluster join at Edge worker Node

Hi,
My setup is such:

  • Deployer on VM1, Controller Node on VM2 with 2 NICs, Edge Node on a physical server

  • Ansible playbook is successful on the controller node.

  • While running the Ansible playbook for edge node installation, I get the following error:
    I0709 20:10:42.641753 31263 join.go:441] [preflight] Discovering cluster-info
    I0709 20:10:42.641784 31263 token.go:78] [discovery] Created cluster-info discovery client, requesting info from "192.168.122.30:6443"
    I0709 20:10:52.642500 31263 token.go:215] [discovery] Failed to request cluster-info, will try again: Get https://192.168.122.30:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: context deadline exceeded

  • The cause appears to be that 192.168.122.30 is the address of the controller's second NIC; the address of the first NIC, which is the one provided in the inventory.ini file, should be picked instead.

Kindly suggest how the right NIC IP address should be selected on the controller.
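For reference, kubeadm advertises whatever address is selected at kubeadm init time on the controller, so the fix usually belongs there rather than on the worker. A hedged sketch (the flag is standard kubeadm; how the experience kit passes it through its Ansible variables is an assumption to verify in your group_vars):

```shell
# On the controller: pin the API server to the first NIC's address
# (the same address listed for the controller in inventory.ini).
# 192.168.121.10 is a hypothetical first-NIC IP, not taken from this report.
kubeadm init --apiserver-advertise-address=192.168.121.10
```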

nfd-master node not in running state when setup is run on VMs

Hi,
I am new to OpenNESS and trying to create an edge controller & network edge setup using VMs.
I have created another VM to run the deployment script. My problems are:

  1. Can this setup be run using CentOS 7.6 VMs?
  2. On the controller node, I can't log in from the landing_ui page. I have read somewhere that this login page is applicable only for the on-premises deployment. Is this correct?
  3. On the controller node, I ran kubectl describe pods nfd-master -n openness and the output is:
    "Failed to create pod sandbox.... networkPlugin cni failed to setup pod nfd-master.... /run/openvswitch/kube-ovn-daemon.sock: connect: no such file or directory"

I am badly stuck with the above error, any help will be appreciated.
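Not a fix, but a diagnostic sketch for the missing /run/openvswitch/kube-ovn-daemon.sock: the socket is created by the kube-ovn CNI daemon pod on each node, so checking that pod first usually narrows things down (pod names below are placeholders):

```shell
# Is the kube-ovn CNI daemonset healthy on the affected node?
kubectl get pods -n kube-system -o wide | grep -E 'kube-ovn|ovs-ovn'
# Logs of the CNI daemon pod that should create the socket (name is a placeholder)
kubectl logs -n kube-system kube-ovn-cni-xxxxx
# On the node itself: does the socket exist?
ls -l /run/openvswitch/
```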

RMD bootstrapping fails.

default intel-rmd-operator-5656d64798-nv9ql 1/1 Running 0 172m 10.16.0.23 node01
default rmd-node-agent-node01 1/1 Running 0 3h15m 10.16.0.22 node01
default rmd-node01 0/1 CrashLoopBackOff 37 172m 10.16.0.18 node01

Error logs of rmd-node01
[root@controller01 ~]# kubectl logs rmd-node01
Resctrl mount or Mount path failed: false

[root@node01 ~]# ls /sys/fs/
bpf btrfs cgroup ext4 fuse pstore resctrl selinux xfs
[root@node01 ~]#

[root@node01 ~]# cat /proc/filesystems | grep resctrl
nodev resctrl
[root@node01 ~]#

  1. Additionally, a Go version upgrade to go1.13.14 is required in the OpenNESS configuration.
  2. I also see a mismatch between the code and rmd-pod.yaml:
    /etc/rmd/rmd.toml: sysresctrl = "/sys/fs/resctrl"

rmd/src/utils/proc/proc.go

// IsResctrlMounted Checks if ResCtrl is mounted and if the path is valid
func IsResctrlMounted(flag string) bool {
	f, err := os.Open(Mounts)

// check if resctrl is mounted
isresctrlenabled := IsResctrlMounted(ResctrlPath)
if isresctrlenabled != true {
	fmt.Println("Resctrl mount or Mount path failed:", isresctrlenabled)

rmd-pod.yaml

volumeMounts:
- mountPath: /sys/fs
  name: resctrl
volumes:
- name: resctrl
  hostPath:
    path: /sys/fs

Mounting resctrl didn't help.
mount -t resctrl resctrl /sys/fs/resctrl

https://github.com/open-ness/openness-experience-kits/blob/fa5039031d1e639c1695d500d26995aed07d650f/network_edge.yml#L107
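One detail worth noting: RMD's check reads /proc/mounts, not /proc/filesystems, so resctrl appearing as a supported filesystem (nodev resctrl) is not enough; an actual mount entry at the path from rmd.toml must exist. A minimal sketch of the same check, run here against sample data rather than a live node:

```shell
# Mimic proc.go's IsResctrlMounted: look for a resctrl entry in mounts data.
# The sample line stands in for /proc/mounts; on a real node read the file itself.
mounts_sample="resctrl /sys/fs/resctrl resctrl rw,relatime 0 0"
if printf '%s\n' "$mounts_sample" | grep -q ' /sys/fs/resctrl resctrl '; then
  echo "resctrl mounted at /sys/fs/resctrl"
else
  echo "resctrl not mounted"
fi
```

With the sample line above the check passes and prints "resctrl mounted at /sys/fs/resctrl"; on a node where the mount command silently failed, `grep ' /sys/fs/resctrl resctrl ' /proc/mounts` would come back empty.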

When can we get a stable OpenNESS 20.03 code base?

Hi Team,

Can anyone please let us know when we can get a stable OpenNESS 20.03 code base?
As I posted in the forum, we are unable to deploy OpenNESS 20.03. Please look into the linked issue for more details.
#51

Thanks & Regards,
Devika

Error in deploying SampleApp

Hi,
I was trying to deploy a SampleApp and came across the following error in the producer pod:

Reason: UnexpectedAdmissionError

Message: Pod Allocate failed due to failed to write checkpoint file "kubelet_internal_checkpoint": mkdir /var: file exists, which is unexpected
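A workaround often suggested for kubelet checkpoint write failures like this is to remove the stale device-plugin checkpoint and restart kubelet on the affected node. This is a hedged sketch, not an OpenNESS-documented fix, so verify the path on your node before deleting anything:

```shell
# On the affected worker node (path is kubelet's default location):
rm /var/lib/kubelet/device-plugins/kubelet_internal_checkpoint
systemctl restart kubelet
```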

broken tuned_packages links

It was observed that tuned package download links are broken as reported in #13 -- thanks to @ashishsaxenahsc.

The tuned_packages links can be updated in the following files:

  1. openness-experience-kits/group_vars/edgenode_group.yml
  2. openness-experience-kits/host_vars/_example_variables.yml
  3. openness-experience-kits/roles/machine_setup/configure_tuned/defaults/main.yml

Update:

- - http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/tuned-2.11.0-5.el7_7.1.noarch.rpm
- - http://linuxsoft.cern.ch/scientific/7x/x86_64/updates/fastbugs/tuned-profiles-realtime-2.11.0-
+ - http://linuxsoft.cern.ch/scientific/7x/x86_64/os/Packages/tuned-2.11.0-8.el7.noarch.rpm
+ - http://linuxsoft.cern.ch/scientific/7x/x86_64/os/Packages/tuned-profiles-realtime-2.11.0-8.el7.noarch.rpm

Error in running Ansible playbook for the edge node

Hi,
I could install the controller on a VM; however, edge node installation on a physical server has failed with the following error:
TASK [kubernetes/cni/kubeovn/worker : try to get ovs-ovn execution logs] *******************************************************************************************
task path: /root/openness-experience-kits/roles/kubernetes/cni/kubeovn/worker/tasks/main.yml:75
fatal: [node01 -> 30.30.30.22]: FAILED! => {
"changed": false,
"cmd": "set -o pipefail && kubectl logs -n kube-system $(kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name --field-selector spec.nodeName=node01 | grep ovs-ovn)\n",
"delta": "0:00:00.444066",
"end": "2020-07-11 12:03:37.659635",
"rc": 1,
"start": "2020-07-11 12:03:37.215569"
}

STDERR:

Error from server: Get https://30.30.30.11:10250/containerLogs/kube-system/ovs-ovn-645b9/openvswitch: dial tcp 30.30.30.11:10250: connect: connection refused

If there is a known solution or a workaround, please let us know.

ErrImageNeverPull while deploying sample app

Hi,
I am trying to deploy a sampleApp following the steps provided on the OpenNESS site, and I come across an ErrImageNeverPull error. Now that we have deployed OpenNESS successfully, it would help if we could get on a call to discuss a few things. The output of kubectl describe is provided below:

kubectl describe pods producer-685fcbc569-swc8r

Name: producer-685fcbc569-swc8r
Namespace: default
Priority: 0
Node: node01/146.0.237.30
Start Time: Tue, 21 Jul 2020 12:58:43 +0200
Labels: app=producer
pod-template-hash=685fcbc569
Annotations: ovn.kubernetes.io/allocated: true
ovn.kubernetes.io/cidr: 10.16.0.0/16
ovn.kubernetes.io/gateway: 10.16.0.1
ovn.kubernetes.io/ip_address: 10.16.0.16
ovn.kubernetes.io/logical_switch: ovn-default
ovn.kubernetes.io/mac_address: 0e:4f:1d:10:00:11
Status: Pending
IP: 10.16.0.16
IPs:
IP: 10.16.0.16
Controlled By: ReplicaSet/producer-685fcbc569
Containers:
producer:
Container ID:
Image: producer:1.0
Image ID:
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: ErrImageNeverPull
Ready: False
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xqj7r (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-xqj7r:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xqj7r
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node-role.kube-ovn/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Warning ErrImageNeverPull 2m47s (x5088 over 18h) kubelet, node01 Container image "producer:1.0" is not present with pull policy of Never

The output of kubectl get pods -o wide -A is shared below:

[root@controller ~]# kubectl get pods -o wide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cdi cdi-apiserver-885758cc4-f8g74 1/1 Running 0 19h 10.16.0.8 node01
cdi cdi-deployment-5bdcc85d54-f6cs5 1/1 Running 0 19h 10.16.0.24 node01
cdi cdi-operator-76b6694845-dzcmq 1/1 Running 0 20h 10.16.0.9 node01
cdi cdi-uploadproxy-89cf96777-fk296 1/1 Running 0 19h 10.16.0.25 node01
default producer-685fcbc569-swc8r 0/1 ErrImageNeverPull 0 18h 10.16.0.16 node01
kube-system coredns-66bff467f8-snzlh 1/1 Running 0 21h 10.16.0.3 controller
kube-system coredns-66bff467f8-vvtlw 1/1 Running 0 21h 10.16.0.2 controller
kube-system descheduler-cronjob-1595395440-fd2qr 0/1 Completed 0 6m5s 10.16.0.26 node01
kube-system descheduler-cronjob-1595395560-rtksf 0/1 Completed 0 4m5s 10.16.0.34 node01
kube-system descheduler-cronjob-1595395680-cq569 0/1 Completed 0 2m4s 10.16.0.29 node01
kube-system descheduler-cronjob-1595395800-8jg5d 0/1 ContainerCreating 0 4s node01
kube-system etcd-controller 1/1 Running 0 21h 134.119.213.95 controller
kube-system kube-apiserver-controller 1/1 Running 0 21h 134.119.213.95 controller
kube-system kube-controller-manager-controller 1/1 Running 0 21h 134.119.213.95 controller
kube-system kube-ovn-cni-h5p5m 1/1 Running 5 20h 134.119.213.95 controller
kube-system kube-ovn-cni-xjdzl 1/1 Running 0 19h 146.0.237.30 node01
kube-system kube-ovn-controller-96f89c68b-pp75k 1/1 Running 0 20h 134.119.213.95 controller
kube-system kube-ovn-controller-96f89c68b-zzks7 1/1 Running 0 19h 146.0.237.30 node01
kube-system kube-proxy-tlgbm 1/1 Running 0 19h 146.0.237.30 node01
kube-system kube-proxy-w2zqp 1/1 Running 0 21h 134.119.213.95 controller
kube-system kube-scheduler-controller 1/1 Running 0 20h 134.119.213.95 controller
kube-system ovn-central-74986486f9-fvq5z 1/1 Running 0 20h 134.119.213.95 controller
kube-system ovs-ovn-2mm96 1/1 Running 10 20h 134.119.213.95 controller
kube-system ovs-ovn-hpmdd 1/1 Running 0 19h 146.0.237.30 node01
kubevirt virt-api-f94f8b959-6vr6m 1/1 Running 0 19h 10.16.0.28 node01
kubevirt virt-api-f94f8b959-z2j5d 1/1 Running 0 19h 10.16.0.27 node01
kubevirt virt-controller-64766f7cbf-58xmw 1/1 Running 0 19h 10.16.0.30 node01
kubevirt virt-controller-64766f7cbf-c8sfn 1/1 Running 0 19h 10.16.0.31 node01
kubevirt virt-handler-qr7qn 1/1 Running 0 19h 10.16.0.32 node01
kubevirt virt-operator-79c97797-8v7sj 1/1 Running 0 20h 10.16.0.7 node01
kubevirt virt-operator-79c97797-zwnfv 1/1 Running 0 20h 10.16.0.6 node01
openness docker-registry-deployment-54d5bb5c-672z2 1/1 Running 0 20h 134.119.213.95 controller
openness eaa-6f8b94c9d7-kxjlm 1/1 Running 0 20h 10.16.0.4 node01
openness edgedns-ll22s 1/1 Running 0 19h 10.16.0.21 node01
openness interfaceservice-xdbsz 1/1 Running 0 19h 10.16.0.19 node01
openness nfd-release-node-feature-discovery-master-cdbcfd997-lrppv 1/1 Running 0 20h 10.16.0.15 controller
openness nfd-release-node-feature-discovery-worker-5l92k 1/1 Running 0 19h 146.0.237.30 node01
openness syslog-master-dxct9 1/1 Running 0 20h 10.16.0.5 controller
openness syslog-ng-9svpj 1/1 Running 0 19h 10.16.0.22 node01
telemetry cadvisor-cx4z6 2/2 Running 0 19h 10.16.0.20 node01
telemetry collectd-nkj8x 2/2 Running 0 19h 146.0.237.30 node01
telemetry custom-metrics-apiserver-54699b845f-dbsws 1/1 Running 0 20h 10.16.0.13 controller
telemetry grafana-6b79c984b-88mpv 2/2 Running 0 20h 10.16.0.17 controller
telemetry otel-collector-7d5b75bbdf-6jkxb 2/2 Running 0 20h 10.16.0.11 node01
telemetry prometheus-node-exporter-92q8m 1/1 Running 0 19h 10.16.0.23 node01
telemetry prometheus-server-76c96b9497-xkhg6 3/3 Running 0 20h 10.16.0.10 controller
telemetry telemetry-aware-scheduling-68467c4ccd-bxltp 2/2 Running 0 20h 10.16.0.14 controller
telemetry telemetry-collector-certs-vcrqk 0/1 Completed 0 20h 10.16.0.12 node01
telemetry telemetry-node-certs-5xb8j 1/1 Running 0 19h 10.16.0.18 node01
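For context, ErrImageNeverPull means the pod spec uses imagePullPolicy: Never, so kubelet will not pull producer:1.0; the image must already exist in the local image store of the node the pod was scheduled on (node01 here), not just on the machine where it was built. A sketch of moving a locally built image onto the worker (hostnames and paths assumed):

```shell
# On the machine where producer:1.0 was built:
docker save producer:1.0 -o producer.tar
scp producer.tar node01:/tmp/
# On node01: load it into the local image store
ssh node01 docker load -i /tmp/producer.tar
```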

Error while deploying Edge Node

Hi,
I have been able to deploy Edge Controller on a physical machine. While deploying the edge node using the Ansible script, I am coming across the following error. Both my edge controller and node are on separate physical servers. The error below was also seen on a VM setup. I have also manually tried the command that failed but it gives the same error.

TASK [kubernetes/cni/kubeovn/worker : try to get ovs-ovn execution logs] *********************************************************************************

task path: /root/openness-experience-kits/roles/kubernetes/cni/kubeovn/worker/tasks/main.yml:75
fatal: [node01 -> x.x.x.x]: FAILED! => {
"changed": false,
"cmd": "set -o pipefail && kubectl logs -n kube-system $(kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name --field-selector spec.nodeName=node01 | grep ovs-ovn)\n",
"delta": "0:00:00.266311",
"end": "2020-07-15 16:53:57.933688",
"rc": 1,
"start": "2020-07-15 16:53:57.667377"
}

STDERR:

Error from server: Get https://x.x.x.x:10250/containerLogs/kube-system/ovs-ovn-tgfq6/openvswitch: dial tcp x.x.x.x:10250: connect: connection refused

MSG:

non-zero return code
...ignoring

TASK [kubernetes/cni/kubeovn/worker : end the playbook] **************************************************************************************
task path: /root/openness-experience-kits/roles/kubernetes/cni/kubeovn/worker/tasks/main.yml:84
fatal: [node01]: FAILED! => {
"changed": false
}

MSG:

end the playbook: either ovs-ovn pod did not start or the socket was not created
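The underlying symptom is "dial tcp x.x.x.x:10250: connect: connection refused", i.e. the worker's kubelet is unreachable from where kubectl runs. A diagnostic sketch to run on the worker node (the firewalld commands assume CentOS defaults):

```shell
# Is kubelet actually running and listening on 10250?
systemctl status kubelet
ss -tlnp | grep 10250
# Recent kubelet errors
journalctl -u kubelet --no-pager | tail -n 50
# If firewalld is filtering the port, open it
firewall-cmd --permanent --add-port=10250/tcp && firewall-cmd --reload
```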

Issue while running deploy_ne.sh

Hi @amr-mokhtar

  1. I had already run the deploy_ne.sh script and installed everything. Later I tried to clean up and re-deploy the network edge.
  2. I'm facing an error while running the ./deploy_ne.sh controller script, as shown below:

fatal: [controller]: FAILED! => {
"changed": false
}

MSG:

Could not find the requested service kubelet: host


  3. Kindly help me to resolve this.

Issues with OVS on Dell servers

I was trying to install OpenNESS 20.06, which was released recently, on two different kinds of servers. There are two setups, one with server 1 (node) and the other with server 2. In both cases the controller is a VM.
Now, on server 1 the installation is successful, with all the bridges getting created and working properly.
But the second server (Dell PowerEdge 640) is behaving a little strangely. The installation is successful, but the bridges are not staying up: they get created and then destroyed. The OVS bridge is not stable on this one. Most of the time it is unable to find the db.sock, which hampers pod creation.
In the previous versions, too, we have faced similar problems.
The server details are as below.
1st node server details (pizza-box server):
Manufacturer: Boston Supermicro server
Product name: SYS-6019P-MIT
BIOS revision: 5.14
Vendor: American Megatrends Inc.
OS: CentOS 7.6.1810
HDD: 256 GB
RAM: 64 GB

2nd node server details (Dell PowerEdge 640 series):
Dell server
Intel Xeon Gold 6226 processors
2.7 GHz, 12 cores, 24 threads
19.25 MB cache, 3.7 GHz max turbo frequency
64 GB RAM, 1 TB HDD
Quad-port 10G NICs
OS: CentOS 7.6.1810

Could someone please help? If more details or logs are required, let me know. Thanks.

Issue with Telemetry cAdvisor and Collectd

We have deployed the release '20.06-ovn-fix' and are seeing an issue with Telemetry:
the Prometheus targets for cAdvisor and Collectd are showing DOWN.
(screenshot: Telemetry_Issues)

Following is the cluster information and a log snippet. Let me know what other information is required for debugging.

[root@openmaster cdn-transcode]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
openmaster   Ready    master   3h40m   v1.18.4
openworker   Ready    worker   154m    v1.18.4
[root@openmaster cdn-transcode]# kubectl get pods -n telemetry
NAME                                          READY   STATUS      RESTARTS   AGE
cadvisor-s9w77                                2/2     Running     0          135m
collectd-9g4g2                                2/2     Running     0          135m
custom-metrics-apiserver-54699b845f-n96sh     1/1     Running     0          3h8m
grafana-6b79c984b-47snl                       2/2     Running     0          174m
otel-collector-7d5b75bbdf-5t9hb               2/2     Running     0          3h8m
prometheus-node-exporter-j2pn7                1/1     Running     0          135m
prometheus-server-76c96b9497-f48gp            3/3     Running     0          3h9m
telemetry-aware-scheduling-68467c4ccd-s24bj   2/2     Running     0          176m
telemetry-collector-certs-8d6q6               0/1     Completed   0          3h8m
telemetry-node-certs-jf2ct                    1/1     Running     0          135m

kubectl logs -f -n telemetry cadvisor-s9w77 -c cadvisor

2020/09/15 16:00:00 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.httpError (http.go:306)
W0915 16:00:04.666679       1 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-1749.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-1749.scope: no such file or directory
W0915 16:00:04.666804       1 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-1749.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-1749.scope: no such file or directory
W0915 16:00:04.666859       1 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-1749.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-1749.scope: no such file or directory
2020/09/15 16:00:05 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.httpError (http.go:306)
2020/09/15 16:00:10 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.httpError (http.go:306)
2020/09/15 16:00:15 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.httpError (http.go:306)
2020/09/15 16:00:20 http: superfluous response.WriteHeader call from github.com/prometheus/client_golang/prometheus/promhttp.httpError (http.go:306)

kubectl logs -f -n telemetry cadvisor-s9w77 -c cadvisor-proxy

10.16.0.11 - - [15/Sep/2020:16:04:05 +0000] "GET /metrics HTTP/1.1" 200 720701 "-" "Prometheus/2.16.0"
10.16.0.11 - - [15/Sep/2020:16:04:10 +0000] "GET /metrics HTTP/1.1" 200 245565 "-" "Prometheus/2.16.0"
10.16.0.11 - - [15/Sep/2020:16:04:15 +0000] "GET /metrics HTTP/1.1" 200 393021 "-" "Prometheus/2.16.0"
10.16.0.11 - - [15/Sep/2020:16:04:20 +0000] "GET /metrics HTTP/1.1" 200 491325 "-" "Prometheus/2.16.0"
10.16.0.11 - - [15/Sep/2020:16:04:25 +0000] "GET /metrics HTTP/1.1" 200 311101 "-" "Prometheus/2.16.0"
10.16.0.11 - - [15/Sep/2020:16:04:30 +0000] "GET /metrics HTTP/1.1" 200 458557 "-" "Prometheus/2.16.0"
10.16.0.11 - - [15/Sep/2020:16:04:35 +0000] "GET /metrics HTTP/1.1" 499 0 "-" "Prometheus/2.16.0"
10.16.0.11 - - [15/Sep/2020:16:04:40 +0000] "GET /metrics HTTP/1.1" 200 507709 "-" "Prometheus/2.16.0"
10.16.0.11 - - [15/Sep/2020:16:04:45 +0000] "GET /metrics HTTP/1.1" 499 0 "-" "Prometheus/2.16.0"
10.16.0.11 - - [15/Sep/2020:16:04:50 +0000] "GET /metrics HTTP/1.1" 200 327485 "-" "Prometheus/2.16.0"
10.16.0.11 - - [15/Sep/2020:16:04:55 +0000] "GET /metrics HTTP/1.1" 499 0 "-" "Prometheus/2.16.0"

kubectl logs -f -n telemetry collectd-9g4g2 collectd-proxy

10.16.0.11 - - [15/Sep/2020:16:03:41 +0000] "GET /metrics HTTP/1.1" 502 157 "-" "Prometheus/2.16.0"
10.16.0.11 - - [15/Sep/2020:16:03:46 +0000] "GET /metrics HTTP/1.1" 502 157 "-" "Prometheus/2.16.0"
2020/09/15 16:03:46 [error] 29#29: *1516 connect() failed (111: Connection refused) while connecting to upstream, client: 10.16.0.11, server: collectd, request: "GET /metrics HTTP/1.1", upstream: "http://[::1]:9104/metrics", host: "192.168.0.4:9103"
2020/09/15 16:03:51 [error] 29#29: *1516 connect() failed (111: Connection refused) while connecting to upstream, client: 10.16.0.11, server: collectd, request: "GET /metrics HTTP/1.1", upstream: "http://[::1]:9104/metrics", host: "192.168.0.4:9103"
10.16.0.11 - - [15/Sep/2020:16:03:51 +0000] "GET /metrics HTTP/1.1" 502 157 "-" "Prometheus/2.16.0"
2020/09/15 16:03:56 [error] 29#29: *1516 connect() failed (111: Connection refused) while connecting to upstream, client: 10.16.0.11, server: collectd, request: "GET /metrics HTTP/1.1", upstream: "http://[::1]:9104/metrics", host: "192.168.0.4:9103"
10.16.0.11 - - [15/Sep/2020:16:03:56 +0000] "GET /metrics HTTP/1.1" 502 157 "-" "Prometheus/2.16.0"
2020/09/15 16:04:01 [error] 29#29: *1516 connect() failed (111: Connection refused) while connecting to upstream, client: 10.16.0.11, server: collectd, request: "GET /metrics HTTP/1.1", upstream: "http://[::1]:9104/metrics", host: "192.168.0.4:9103"
10.16.0.11 - - [15/Sep/2020:16:04:01 +0000] "GET /metrics HTTP/1.1" 502 157 "-" "Prometheus/2.16.0"
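The proxy's 502s come from "connect() failed ... upstream: http://[::1]:9104/metrics", meaning nothing inside the pod is listening on port 9104, so the next place to look is the collectd container itself rather than the proxy. A sketch (the container name "collectd" is an assumption; confirm it with kubectl describe):

```shell
# Logs of the exporter container behind the proxy (container name assumed)
kubectl logs -n telemetry collectd-9g4g2 -c collectd
# Confirm container names, states, and restart counts inside the pod
kubectl describe pod -n telemetry collectd-9g4g2
```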

Issue installing OpenNESS onPremise

Starting with a CentOS 7 minimal install (on the controller and edge node hosts),
I have cloned the experience kit:
git clone https://github.com/open-ness/openness-experience-kits.git
I modified group_vars/all.yml with my git token.
I edited inventory.ini to match my configuration (controller and one node).
I've set up passwordless SSH and hostnames.

I've run the deploy_onprem_controller.sh script

It fails as follows:

roles/openness/onprem/master/tasks/build.yml has an invalid path to git_repo (occurs twice)
fatal: [mec-controller]: FAILED! => {"reason": "Unable to retrieve file contents\nCould not find or access '/git_repo/tasks/gitconfig_bootstrap.yml'"}
fatal: [mec-controller]: FAILED! => {"reason": "Unable to retrieve file contents\nCould not find or access '/git_repo/tasks/gitconfig_remove.yml'"}

This appears to be a misconfiguration of the relative path to the git_repo directory (in both instances).
Modifying it to use ../../../../git_repo instead of ../../../git_repo (which is an invalid path) appears to fix it.
Perhaps this is misconfigured in build.yml?

Issue while running deploy_ne.sh nodes

Hi @amr-mokhtar ,

  1. While running deploy_ne.sh nodes, I am getting the following error.
    It shows that the edge-node is not found.

TASK [kubernetes/worker : check if already in cluster] *******************************************************************************
task path: /home/calsoft/openness/new/openness-experience-kits/roles/kubernetes/worker/tasks/main.yml:19
fatal: [node01 -> 172.42.42.7]: FAILED! => {
"changed": false,
"cmd": [
"kubectl",
"get",
"node",
"edge-node"
],
"delta": "0:00:00.088958",
"end": "2020-06-23 10:14:00.923573",
"rc": 1,
"start": "2020-06-23 10:14:00.834615"
}

STDERR:

Error from server (NotFound): nodes "edge-node" not found

MSG:

non-zero return code
...ignoring

After ignoring this

TASK [kubernetes/worker : join the cluster] ******************************************************************************************
task path: /home/calsoft/openness/new/openness-experience-kits/roles/kubernetes/worker/tasks/main.yml:39
fatal: [node01]: FAILED! => {
"changed": true,
"cmd": [
"kubeadm",
"join",
"10.0.3.15:6443",
"--token",
"gvptmw.mhrgt6dyidi5lxzt",
"--discovery-token-ca-cert-hash",
"sha256:82b37225e7ce4ce11256189976bdb5c5abf68e5df6bacd855df7ed6c1180b62b",
"--v=2"
],
"delta": "0:05:00.412689",
"end": "2020-06-23 10:19:02.444221",
"rc": 1,
"start": "2020-06-23 10:14:02.031532"
}

STDOUT:

[preflight] Running pre-flight checks

STDERR:

W0623 10:14:02.067315 24075 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
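The eventual join failure is a five-minute timeout reaching 10.0.3.15:6443. A quick reachability sketch from the worker can separate a networking problem from a kubeadm one (10.0.3.15 looks like a VirtualBox NAT address, which is typically not reachable from other machines; that observation is an inference, not a confirmed diagnosis):

```shell
# From the worker node: can we reach the advertised API server at all?
curl -k --max-time 10 https://10.0.3.15:6443/version
# If this times out, the controller advertised a non-routable address;
# the controller needs to be (re)initialized with an IP the worker can reach.
```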

Stuck while deploying controller

Hi,

I am trying to deploy the OpenNESS 20.06 controller by running the command below:
./deploy_ne.sh controller

But it gets stuck in the "ntp time update" state.
(screenshot: issue_ntp)

I did configure the time servers for the controller in group_vars/all/10-default.yml.
(screenshot: ntp-conf)

Can you please help me to check how to resolve this?

Thanks & Regard,
Devika

Unable to start edge nodes (e.g. after a reboot)

I can't seem to find instructions on how to start edge nodes if they are not running.
I have deployed controller and edge nodes via the onpremise deployment scripts.
I have managed to enroll a number of nodes with the controller (to the point where I can edit the network interfaces).
However, if the edge node services stop running for some reason (including a reboot), any attempt to list the interfaces results in a 500 error from the GUI, which doesn't give any clue as to how to address the issue.
If I look at the edge node I can see that no Docker services are running, and I cannot find any documentation or scripts to get them running.
For the controller at least, I see I can use make all-up from the /opt/openness directory.

I'm not sure how to go about debugging/resolving this issue unfortunately.

Single-node Network Edge cluster on VM - installation error

Hi,
I am using the latest OpenNESS 20.06,
trying to set up a single-node Network Edge cluster on a VM (KVM-based).
Every time, my installation gets stuck at:

TASK [kubernetes/cni/kubeovn/master : create temp crd_local.yml] *******************************************************************************************************************************
task path: /home/kit/openness-experience-kits/roles/kubernetes/cni/kubeovn/master/tasks/main.yml:132
fatal: [nodectrl]: FAILED! => {
    "changed": false,
    "checksum": "b52d0515bb11a9a5d1cbe2b2ca2e2ade0287c80b"
}

MSG:

Destination directory /opt/edgenode/network-edge/kube-ovn does not exist

PLAY RECAP *************************************************************************************************************************************************************************************
nodectrl                   : ok=239  changed=67   unreachable=0    failed=1    skipped=107  rescued=0    ignored=4
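The failure message is literal: the destination directory does not exist on the target. Creating it before re-running the deployment is a plausible workaround (whether an earlier, skipped task was supposed to create it is unclear):

```shell
# On the target machine, then re-run deploy_ne.sh
mkdir -p /opt/edgenode/network-edge/kube-ovn
```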


For the pod in Pending status:
(screenshot)

The OVS DPDK related values are:
(screenshot)

Added info:
logs from /var/log/openvswitch/ovn-controller.log
(screenshot)

Any inputs on how to remove this installation error?
Thanks

Working setups on 19.12 must update checkout tags

A problem was observed when having a working setup on 19.12 and trying to re-deploy: the OpenNESS Experience Kits (OEK) will check out the 20.03 tag, which does not match the original configuration, causing the setup to break all of a sudden.

This is happening because, by default, OEK always checks out master for edgecontroller & edgenode.

This has been fixed as of 20.03, and all future setups will not get broken by newer releases. For existing 19.12 setups, the change below must be applied to the following files:

  1. openness-experience-kits/group_vars/controller_group.yml
  2. openness-experience-kits/group_vars/edgenode_group.yml

Change:

- git_repo_branch: master
+ git_repo_branch: openness-19.12.01

NOTE: Apply this change before running deploy_*.sh scripts.

Issue installing OpenNESS onPremise node - copy_configs_to_appliance.yml

Context:
Starting with a CentOS 7 minimal install (on the controller and edge node hosts),
I have cloned the experience kit:
git clone https://github.com/open-ness/openness-experience-kits.git
I modified group_vars/all.yml with my git token.
I edited inventory.ini to match my configuration (controller and one node).
I've set up passwordless SSH and hostnames.

I've run the deploy_onprem_controller.sh script
After some workarounds I've been able to deploy the controller.

When I try to deploy a node, I get an error:
File: roles/openness/onprem/worker/tasks/subtasks/copy_configs_to_appliance.yml

TASK [openness/onprem/worker : copy files to /var/lib/appliance] ***************************************************************************************************
task path: /home/idirect/openness-experience-kits/roles/openness/onprem/worker/tasks/subtasks/copy_configs_to_appliance.yml:6
fatal: [mec-node]: FAILED! => {"changed": false, "msg": "Remote copy does not support recursive copy of directory: /opt/edgenode/configs"}

(Note that I had also previously commented out role: os_kernelrt in onprem_node.yml, and commented out/disabled the includes in roles/dpdk/tasks/main.yml.)

Unable to build TAS when doing a single node minimal deployment

I am using 1b297fb to deploy a single node network edge, using the command ./deploy_ne.sh -f minimal single on a CentOS 7 machine.

After a few trivial issues (which were solved), I am stuck at the task Telemetry Aware Scheduling.

Upon running the task manually, I got the following error.

[root@t-reza tas-repo]# source /etc/profile && make build
CGO_ENABLED=0 GO111MODULE=on go build -ldflags="-s -w" -o ./bin/controller ./cmd/tas-policy-controller
go: finding modernc.org/cc v1.0.0
go: finding modernc.org/mathutil v1.0.0
go: finding modernc.org/xc v1.0.0
go: finding modernc.org/golex v1.0.0
go: finding modernc.org/strutil v1.0.0
go: modernc.org/golex@v1.0.0: git fetch -f https://gitlab.com/cznic/golex refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /root/go/pkg/mod/cache/vcs/9aae2d4c6ee72eb1c6b65f7a51a0482327c927783dea53d4058803094c9d8039: exit status 128:
	error: RPC failed; result=22, HTTP code = 404
	fatal: The remote end hung up unexpectedly
go: modernc.org/mathutil@v1.0.0: git fetch -f https://gitlab.com/cznic/mathutil refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /root/go/pkg/mod/cache/vcs/fb72eb2422fda47ac75ca695d44b06b82f3df3c5308e271486fca5e320879130: exit status 128:
	error: RPC failed; result=22, HTTP code = 404
	fatal: The remote end hung up unexpectedly
go: modernc.org/cc@v1.0.0: git fetch -f https://gitlab.com/cznic/cc refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /root/go/pkg/mod/cache/vcs/3dac616a9d80602010c4792ef9c0e9d9812a1be8e70453e437e9792978075db6: exit status 128:
	error: RPC failed; result=22, HTTP code = 404
	fatal: The remote end hung up unexpectedly
go: modernc.org/strutil@v1.0.0: git fetch -f https://gitlab.com/cznic/strutil refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /root/go/pkg/mod/cache/vcs/f48599000415ab70c2f95dc7528c585820ed37ee15d27040a550487e83a41748: exit status 128:
	error: RPC failed; result=22, HTTP code = 404
	fatal: The remote end hung up unexpectedly
go: modernc.org/xc@v1.0.0: git fetch -f https://gitlab.com/cznic/xc refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /root/go/pkg/mod/cache/vcs/29fc2f846f24ce3630fdd4abfc664927c4ad22f98a3589050facafa0991faada: exit status 128:
	error: RPC failed; result=22, HTTP code = 404
	fatal: The remote end hung up unexpectedly
go: error loading module requirements
make: *** [build] Error 1

People suggested that I change the https-based git protocol to ssh-based. Upon running the command git config --global url.ssh://git@gitlab.com/.insteadOf https://gitlab.com/, it was able to connect to the remote and fetch the code, but now it is not able to find the revision v1.0.0. Logs:

[root@t-reza tas-repo]# source /etc/profile && make build
CGO_ENABLED=0 GO111MODULE=on go build -ldflags="-s -w" -o ./bin/controller ./cmd/tas-policy-controller
go: finding modernc.org/strutil v1.0.0
go: finding modernc.org/xc v1.0.0
go: finding modernc.org/golex v1.0.0
go: finding modernc.org/mathutil v1.0.0
go: finding modernc.org/cc v1.0.0
go: modernc.org/strutil@v1.0.0: unknown revision v1.0.0
go: modernc.org/xc@v1.0.0: unknown revision v1.0.0
go: modernc.org/golex@v1.0.0: unknown revision v1.0.0
go: modernc.org/mathutil@v1.0.0: unknown revision v1.0.0
go: modernc.org/cc@v1.0.0: unknown revision v1.0.0
go: error loading module requirements
make: *** [build] Error 1

Question: Does this still work (and I am doing something wrong), or does something need to be patched?

Ref: https://github.com/open-ness/openness-experience-kits/blob/1b297fbb2a3e35ffdcd2182d80f839ff1789f4c2/roles/telemetry/tas/tasks/main.yml#L155
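If the direct git fetch from gitlab.com keeps failing, one possible workaround (an untested suggestion, not the kit's documented procedure) is to let the public Go module proxy serve the modernc.org modules so the gitlab.com clone is never attempted:

```shell
# Sketch: route module downloads through the public Go module proxy.
export GO111MODULE=on
export GOPROXY="https://proxy.golang.org,direct"

# Undo the earlier URL rewrite, otherwise the "direct" fallback would still
# hit gitlab.com over ssh; ignore the error if the rewrite was never set.
git config --global --unset url.ssh://[email protected]/.insteadof || true

echo "GOPROXY=$GOPROXY"
```

After this, re-running `make build` should resolve the modules from the proxy cache instead of cloning the GitLab repositories.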

While deploying OpenNESS 20.06, the ovs-ovn/ovn-central pods are not running

Hi,

Can anyone please help me solve the issue below? The ovs-ovn/ovn-central pods are not running.

group_vars/all/10-default.yml


TASK [kubernetes/cni/kubeovn/master : wait for running ovs-ovn & ovn-central pods] *******************************************************************
task path: /home/sysadmin/Downloads/openness-experience-kits-master/roles/kubernetes/cni/kubeovn/master/tasks/main.yml:149
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (30 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (29 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (28 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (27 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (26 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (25 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (24 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (23 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (22 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (21 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (20 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (19 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (18 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (17 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (16 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (15 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (14 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (13 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (12 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (11 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (10 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (9 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (8 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (7 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (6 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (5 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (4 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (3 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (2 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (1 retries left).
fatal: [controller]: FAILED! => {
"attempts": 30,
"changed": false,
"cmd": "set -o pipefail && kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name,STATUS:.status.phase --no-headers --field-selector spec.nodeName=controller | grep -E "ovs-ovn|ovn-central"\n",
"delta": "0:00:00.071730",
"end": "2020-09-07 17:51:59.310148",
"rc": 0,
"start": "2020-09-07 17:51:59.238418"
}

STDOUT:

ovn-central-74986486f9-5vc4t Pending
ovs-ovn-h7r99 Running

TASK [kubernetes/cni/kubeovn/master : events of ovs-ovn & ovn-central pods] **************************************************************************
task path: /home/sysadmin/Downloads/openness-experience-kits-master/roles/kubernetes/cni/kubeovn/master/tasks/main.yml:169
ok: [controller] => (item=ovs-ovn) => {
"ansible_loop_var": "item",
"changed": false,
"cmd": "set -o pipefail && kubectl describe pod -n kube-system $(kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name | grep ovs-ovn) | sed -n '/^Events:/,//p'\n",
"delta": "0:00:00.163775",
"end": "2020-09-07 17:51:59.641209",
"item": "ovs-ovn",
"rc": 0,
"start": "2020-09-07 17:51:59.477434"
}

STDOUT:

Events:
Type Reason Age From Message


Normal Scheduled 88m default-scheduler Successfully assigned kube-system/ovs-ovn-h7r99 to controller
Normal Pulled 88m kubelet, controller Container image "ovs-dpdk" already present on machine
Normal Created 88m kubelet, controller Created container openvswitch
Normal Started 88m kubelet, controller Started container openvswitch
Warning Unhealthy 87m (x4 over 87m) kubelet, controller Liveness probe failed: ovsdb-server is not running
ovs-vswitchd is not running
Normal Killing 87m kubelet, controller Container openvswitch failed liveness probe, will be restarted
Warning Unhealthy 53m (x204 over 88m) kubelet, controller Readiness probe failed: ovsdb-server is not running
ovs-vswitchd is not running
Warning BackOff 48m (x99 over 78m) kubelet, controller Back-off restarting failed container
Warning FailedMount 45m kubelet, controller MountVolume.SetUp failed for volume "hugepage" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage --scope -- mount -t hugetlbfs -o pagesize=2Mi nodev /var/lib/kubelet/pods/a3c95800-5f5d-4678-94b3-7c7f87a6d9db/volumes/kubernetes.io~empty-dir/hugepage
Output: Running scope as unit run-16622.scope.
mount: wrong fs type, bad option, bad superblock on nodev,
missing codepage or helper program, or other error

   In some cases useful info is found in syslog - try
   dmesg | tail or so.

[the same "FailedMount ... hugepage ... mount failed: exit status 32" warning repeats seven more times with different transient scope units]

Warning FailedMount 43m kubelet, controller Unable to attach or mount volumes: unmounted volumes=[hugepage], unattached volumes=[dev ovn-token-jf69d host-modules host-run-ovs host-sys host-config-openvswitch host-log hugepage]: timed out waiting for the condition
Warning FailedMount 29m (x15 over 43m) kubelet, controller (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[hugepage], unattached volumes=[hugepage dev ovn-token-jf69d host-modules host-run-ovs host-sys host-config-openvswitch host-log]: timed out waiting for the condition
Normal SandboxChanged 25m kubelet, controller Pod sandbox changed, it will be killed and re-created.
Normal Pulled 25m (x2 over 25m) kubelet, controller Container image "ovs-dpdk" already present on machine
Normal Created 25m (x2 over 25m) kubelet, controller Created container openvswitch
Normal Started 25m (x2 over 25m) kubelet, controller Started container openvswitch
Warning Unhealthy 24m kubelet, controller Liveness probe failed: ovsdb-server is not running
ovs-vswitchd is not running
Warning BackOff 5m16s (x54 over 25m) kubelet, controller Back-off restarting failed container
Warning Unhealthy 22s (x138 over 25m) kubelet, controller Readiness probe failed: ovsdb-server is not running
ovs-vswitchd is not running
ok: [controller] => (item=ovn-central) => {
"ansible_loop_var": "item",
"changed": false,
"cmd": "set -o pipefail && kubectl describe pod -n kube-system $(kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name | grep ovn-central) | sed -n '/^Events:/,//p'\n",
"delta": "0:00:00.150694",
"end": "2020-09-07 17:51:59.921285",
"item": "ovn-central",
"rc": 0,
"start": "2020-09-07 17:51:59.770591"
}

STDOUT:

Events:
Type Reason Age From Message


Normal Scheduled 88m default-scheduler Successfully assigned kube-system/ovn-central-74986486f9-5vc4t to controller
Warning Failed 88m kubelet, controller Failed to pull image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0": rpc error: code = Unknown desc = error pulling image configuration: Get https://alauda-cn-registry-v2.s3.cn-north-1.amazonaws.com.cn/registry/docker/registry/v2/blobs/sha256/c3/c3f28efb699d33d4dcc77cca1e4a48485603a8c90919ea823a441c038438317d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAO3XAUUL6OZF662RQ%2F20200907%2Fcn-north-1%2Fs3%2Faws4_request&X-Amz-Date=20200907T105335Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=611876d46e3fa7ed410a9d112f63397cec758c86c1655416f6eeb6c8770dfada: net/http: TLS handshake timeout
[similar "Failed to pull image ... TLS handshake timeout", ErrImagePull, and ImagePullBackOff events repeat for roughly the next hour]
Normal BackOff 5m15s (x74 over 25m) kubelet, controller Back-off pulling image "index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0"
Warning Failed 10s (x95 over 25m) kubelet, controller Error: ImagePullBackOff

TASK [kubernetes/cni/kubeovn/master : try to get ovs-ovn execution logs] *****************************************************************************
task path: /home/sysadmin/Downloads/openness-experience-kits-master/roles/kubernetes/cni/kubeovn/master/tasks/main.yml:179
ok: [controller] => (item=ovs-ovn) => {
"ansible_loop_var": "item",
"changed": false,
"cmd": "set -o pipefail && kubectl logs -n kube-system $(kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name | grep ovs-ovn)\n",
"delta": "0:00:00.145724",
"end": "2020-09-07 17:52:00.233999",
"item": "ovs-ovn",
"rc": 0,
"start": "2020-09-07 17:52:00.088275"
}

STDOUT:

sleep 10 seconds, waiting for ovn-sb 10.102.126.171:6642 ready
sleep 10 seconds, waiting for ovn-sb 10.102.126.171:6642 ready
sleep 10 seconds, waiting for ovn-sb 10.102.126.171:6642 ready
sleep 10 seconds, waiting for ovn-sb 10.102.126.171:6642 ready
failed: [controller] (item=ovn-central) => {
"ansible_loop_var": "item",
"changed": false,
"cmd": "set -o pipefail && kubectl logs -n kube-system $(kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name | grep ovn-central)\n",
"delta": "0:00:00.133634",
"end": "2020-09-07 17:52:00.494118",
"item": "ovn-central",
"rc": 1,
"start": "2020-09-07 17:52:00.360484"
}

STDERR:

Error from server (BadRequest): container "ovn-central" in pod "ovn-central-74986486f9-5vc4t" is waiting to start: trying and failing to pull image

MSG:

non-zero return code
...ignoring

TASK [kubernetes/cni/kubeovn/master : end the playbook] **********************************************************************************************
task path: /home/sysadmin/Downloads/openness-experience-kits-master/roles/kubernetes/cni/kubeovn/master/tasks/main.yml:188
fatal: [controller]: FAILED! => {
"changed": false
}

MSG:

end the playbook: either ovs-ovn or ovn-central pod did not start or the socket was not created

PLAY RECAP *******************************************************************************************************************************************
controller : ok=212 changed=71 unreachable=0 failed=1 skipped=107 rescued=1 ignored=5

[root@controller openness-experience-kits-master]#
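The events above point at two separate failures: the ovs-ovn pod's hugetlbfs mount fails because the node has no 2MiB hugepages reserved, and the ovn-central image cannot be pulled from index.alauda.cn. A rough first check for both (assuming docker as the container runtime; the page count below is an arbitrary example, not a recommended value):

```shell
# 1) The hugetlbfs mount can only succeed if the kernel has 2MiB hugepages
#    reserved; HugePages_Total of 0 explains the "wrong fs type" mount error.
grep -E 'HugePages_Total|Hugepagesize' /proc/meminfo

# Reserving pages needs root; persist via /etc/sysctl.conf or the kernel cmdline.
if [ "$(id -u)" -eq 0 ]; then
  sysctl -w vm.nr_hugepages=1024
else
  echo "run as root: sysctl -w vm.nr_hugepages=1024"
fi

# 2) Pulling the ovn-central image manually shows whether index.alauda.cn is
#    reachable from this network at all; if not, mirroring the image into a
#    reachable registry is one way around it.
if command -v docker >/dev/null 2>&1; then
  docker pull index.alauda.cn/alaudak8s/kube-ovn-db:v1.0.0 \
    || echo "pull failed; consider mirroring the image to a reachable registry"
else
  echo "docker not available on this host"
fi
```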


Thanks & Regards,
Devika

Errors in deploying Intel OpenNESS platform on VMs

Hi,
I have created a controller and edge node setup using two VMs. I am unable to get a properly running setup due to the following issues:

1. CrashLoopBackOff errors: the following pods keep failing.

[root@controller01 ~]# kubectl get pod -A -o wide| grep Crash

cdi cdi-operator-76b6694845-hvcvw 0/1 CrashLoopBackOff 23 9h 10.16.0.4 node01
kubernetes-dashboard kubernetes-dashboard-7bfbb48676-6g7l4 0/1 CrashLoopBackOff 15 57m 10.16.0.8 node01
kubevirt virt-operator-79c97797-qzctm 0/1 CrashLoopBackOff 11 23m 10.16.0.17 node01

2. Some of the warning/error messages I see are:
Warning BackOff 14m (x31 over 23m) kubelet, node01 Back-off restarting failed container
Warning Unhealthy 9m22s (x20 over 24m) kubelet, node01 Readiness probe failed: Get https://10.16.0.17:8443/metrics: dial tcp 10.16.0.17:8443: connect: connection refused
Warning FailedMount 5m17s (x2 over 5m19s) kubelet, node01 MountVolume.SetUp failed for volume "kubevirt-operator-token-9f87j" : failed to sync secret cache: timed out waiting for the condition

Warning FailedMount 12m (x2 over 12m) kubelet, node01 MountVolume.SetUp failed for volume "kubernetes-dashboard-certs" : failed to sync secret cache: timed out waiting for the condition
Warning FailedMount 12m (x2 over 12m) kubelet, node01 MountVolume.SetUp failed for volume "kubernetes-dashboard-token-2jj6c" : failed to sync secret cache: timed out waiting for the condition
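When several operators crash-loop together like this, the logs from each container's previous attempt usually name the real failure (often API-server connectivity from the pod network). A generic first pass, assuming kubectl access from the controller:

```shell
# Find crash-looping pods and dump the logs of their last failed attempt.
if command -v kubectl >/dev/null 2>&1; then
  for p in $(kubectl get pods -A --no-headers \
      | awk '$4 == "CrashLoopBackOff" {print $1"/"$2}'); do
    ns=${p%%/*}; pod=${p#*/}
    echo "=== $ns/$pod ==="
    kubectl logs -n "$ns" "$pod" --previous --tail=20
  done
  # The readiness probes fail with "connection refused", so also verify the
  # API server endpoint is what the pods expect to reach.
  kubectl get endpoints kubernetes -n default
else
  echo "kubectl not available on this host"
fi
```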

My goal is to deploy VMs with OpenNESS, but at present I can't even deploy a SampleApp. I would appreciate any pointers.

Thank you.
Pavan

Issue while deploying OpenNESS worker node

Hi,

I ran ./deploy_ne.sh nodes, but it failed with the error below.
The worker node is pingable and reachable over SSH.

(screenshots attached: connection_refused, connection_refused_0)

Can you please help me resolve this issue?

Thanks & Regards,
Devika

Issue installing OpenNESS onPremise node

Context:
Starting with a CentOS 7 minimal install (on the controller and edge node hosts),
I have cloned the experience kit.
git clone https://github.com/open-ness/openness-experience-kits.git
I modified group_vars/all.yml with my git token
I edited inventory.ini to match my configurations (controller and one node)
I've set up passwordless SSH and hostnames

I've run the deploy_onprem_controller.sh script
After some workarounds I've been able to deploy the controller.

When I try to deploy a node, I get an error in roles/dpdk/tasks/main.yml:
fatal: [mec-node]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (yum) module: disable_excludes Supported parameters include: allow_downgrade,conf_file,disable_gpg_check,disablerepo,enablerepo,exclude,install_repoquery,installroot,list,name,security,skip_broken,state,update_cache,validate_certs"}

The line that seems to be the problem is:
disable_excludes: all

(Note that I had also previously commented out "role: os_kernelrt" in onprem_node.yml.)
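The yum module only gained the disable_excludes parameter in a later Ansible release (2.7, going by the module documentation), so this error usually just means the host's Ansible is older than what the kit expects. A quick check and upgrade sketch (the pip upgrade path is an assumption; the kit may prescribe its own):

```shell
# Print the installed Ansible version; anything older than the release that
# introduced yum's disable_excludes fails exactly as shown above.
if command -v ansible >/dev/null 2>&1; then
  ansible --version | head -1
else
  echo "ansible not installed"
fi

# One possible upgrade path past the CentOS 7 base package (assumes pip):
# pip install --upgrade 'ansible>=2.7'
```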

Error while deploying Single-node Network Edge cluster

Hello,

I am trying to deploy a single-node Network Edge cluster (using the command ./deploy_ne.sh -f minimal single). It fails while waiting for the Grafana pod.

TASK [telemetry/grafana : wait for Grafana pod to be ready] ****************************************************************************
task path: /home/centos/openness-experience-kits/roles/telemetry/grafana/tasks/main.yml:205
fatal: [controller]: FAILED! => {
    "changed": false,
    "cmd": [
        "kubectl",
        "wait",
        "--selector=app.kubernetes.io/instance=grafana",
        "--namespace=telemetry",
        "--for=condition=Ready",
        "pods",
        "--timeout=600s"
    ],
    "delta": "0:10:00.283865",
    "end": "2020-07-22 14:22:32.815846",
    "rc": 1,
    "start": "2020-07-22 14:12:32.531981"
}

STDERR:
error: timed out waiting for the condition on pods/grafana-6b79c984b-tmqk4

MSG:
non-zero return code

The following pods are in error states:


...
openness      eaa-6f8b94c9d7-jcp9v                                        0/1     ErrImageNeverPull        0          50m
openness      edgedns-kl4qh                                               0/1     ErrImageNeverPull        0          50m
openness      interfaceservice-zcgbb                                      0/1     ErrImageNeverPull        0          45m
..
telemetry     grafana-6b79c984b-tmqk4                                     0/2     Init:RunContainerError   0          35m
...

For the Grafana pod I see the following error:


Normal   Created     30m (x2 over 32m)    kubelet, openness  Created container grafana-sc-datasources
Warning  Failed      28m (x2 over 30m)    kubelet, openness  Error: context deadline exceeded
Warning  FailedSync  5m48s (x9 over 26m)  kubelet, openness  error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
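ErrImageNeverPull means those pods have imagePullPolicy: Never but the image is missing from the node's local cache, which suggests the image build/preload step failed earlier in the deployment. A quick check, assuming docker as the runtime and that the pod specs name the eaa/edgedns/interfaceservice images:

```shell
# Compare the image each failing pod asks for with what is actually present
# in the node's local docker cache.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n openness \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
fi
if command -v docker >/dev/null 2>&1; then
  docker images | grep -E 'eaa|edgedns|interfaceservice' \
    || echo "images missing locally; re-run the image build step of the kit"
else
  echo "docker not available on this host"
fi
```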

Issue installing OpenNESS onPremise controller - unsupported parameter for file module

Starting with a CentOS 7 minimal install (on the controller and edge node hosts),
I have cloned the experience kit.
git clone https://github.com/open-ness/openness-experience-kits.git
I modified group_vars/all.yml with my git token
I edited inventory.ini to match my configurations (controller and one node)
I've set up passwordless SSH and hostnames

I've run the deploy_onprem_controller.sh script

It fails as follows:
file: roles/git_repo/tasks/gitconfig_bootstrap.yml
Error:
fatal: [mec-controller]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (file) module: access_time,modification_time Supported parameters include: attributes,backup,content,delimiter,diff_peek,directory_mode,follow,force,group,mode,original_basename,owner,path,recurse,regexp,remote_src,selevel,serole,setype,seuser,src,state,unsafe_writes,validate"}

It doesn't seem to accept the configured options:
modification_time: preserve
access_time: preserve
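The file module's modification_time/access_time parameters likewise only exist in newer Ansible releases, so an Ansible installed from the CentOS 7 base repos will reject them. A minimal check (the exact version floor is an assumption from the module docs):

```shell
# Print the Ansible version the playbook runner actually uses; the file
# module rejects modification_time/access_time on old releases.
if command -v ansible-playbook >/dev/null 2>&1; then
  ansible-playbook --version | head -1
else
  echo "ansible-playbook not found"
fi
```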

Deployment fails while building ovs-dpdk image

OpenNESS version: 20.06

Deployment fails with the following error while building the ovs-dpdk image.

RUN rpm -ivh ~/ovs-${OVS_VERSION}-${OVS_SUBVERSION}/rpm/rpmbuild/RPMS/x86_64/openvswitch-${OVS_VERSION}-${OVS_SUBVERSION}.el7.x86_64.rpm && rpm -ivh ~/ovs-${OVS_VERSION}-${OVS_SUBVERSION}/rpm/rpmbuild/RPMS/x86_64/openvswitch-devel-${OVS_VERSION}-${OVS_SUBVERSION}.el7.x86_64.rpm && rpm -ivh https://github.com/alauda/ovs/releases/download/${OVS_VERSION}-${OVS_SUBVERSION}/ovn-${OVS_VERSION}-${OVS_SUBVERSION}.el7.x86_64.rpm && rpm -ivh https://github.com/alauda/ovs/releases/download/${OVS_VERSION}-${OVS_SUBVERSION}/ovn-vtep-${OVS_VERSION}-${OVS_SUBVERSION}.el7.x86_64.rpm && rpm -ivh https://github.com/alauda/ovs/releases/download/${OVS_VERSION}-${OVS_SUBVERSION}/ovn-central-${OVS_VERSION}-${OVS_SUBVERSION}.el7.x86_64.rpm && rpm -ivh https://github.com/alauda/ovs/releases/download/${OVS_VERSION}-${OVS_SUBVERSION}/ovn-host-${OVS_VERSION}-${OVS_SUBVERSION}.el7.x86_64.rpm
 ---> Running in b21506f1bad2
error: open of /root/ovs-2.12.0-5/rpm/rpmbuild/RPMS/x86_64/openvswitch-2.12.0-5.el7.x86_64.rpm failed: No such file or directory

It seems related to a broken OVN dependency:
https://github.com/open-ness/edgenode/releases/tag/openness-20.06-ovn-fix

The complete Ansible log is attached.

Some pods in completed state after deploying the controller and edge node

Hi,
I see that some pods are in the Completed state after deploying the controller and the edge node. Is this fine?

[root@controller ~]# kubectl get pods -o wide -A | grep Completed
kube-system descheduler-cronjob-1594902000-v5w9x 0/1 Completed 0 4m52s 10.16.0.26 node01
kube-system descheduler-cronjob-1594902120-tv5cz 0/1 Completed 0 2m52s 10.16.0.32 node01
kube-system descheduler-cronjob-1594902240-wgrqt 0/1 Completed 0 51s 10.16.0.33 node01
telemetry telemetry-collector-certs-hrf96 0/1 Completed 0 150m 10.16.0.12 node01
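Pods created by CronJobs or Jobs (descheduler-cronjob-*, telemetry-collector-certs-*) are one-shot workloads, so Completed is their normal terminal state. One way to confirm each pod is owned by a Job rather than a long-running controller, assuming kubectl access:

```shell
# A Completed pod is expected when its owner is a Job (or a CronJob-created Job).
if command -v kubectl >/dev/null 2>&1; then
  kubectl get jobs,cronjobs -A
  kubectl get pod telemetry-collector-certs-hrf96 -n telemetry \
    -o jsonpath='{.metadata.ownerReferences[0].kind}{"\n"}'   # a Job-owned pod prints: Job
else
  echo "kubectl not available on this host"
fi
```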

For the telemetry collector pod, I ran the pod describe command and the output shows a taint:

[root@controller ~]# kubectl describe pods telemetry-collector-certs-hrf96 -n telemetry
Name: telemetry-collector-certs-hrf96
Namespace: telemetry
Priority: 0
Node: node01/134.119.205.185
Start Time: Thu, 16 Jul 2020 14:03:30 +0200
Labels: controller-uid=287b90cf-5d41-42a7-9b65-b7dab0069d71
job-name=telemetry-collector-certs
name=telemetry-collector-certs
Annotations: ovn.kubernetes.io/allocated: true
ovn.kubernetes.io/cidr: 10.16.0.0/16
ovn.kubernetes.io/gateway: 10.16.0.1
ovn.kubernetes.io/ip_address: 10.16.0.12
ovn.kubernetes.io/logical_switch: ovn-default
ovn.kubernetes.io/mac_address: de:fd:f8:10:00:0d
Status: Succeeded
IP: 10.16.0.12
IPs:
IP: 10.16.0.12
Controlled By: Job/telemetry-collector-certs
Containers:
openssl:
Container ID: docker://517f8b221c82b5d1969c3e4e5648fdefde2c8bda70359314040bdfede98a3aea
Image: emberstack/openssl:latest
Image ID: docker-pullable://emberstack/openssl@sha256:1fad327428e28ac1138444fca06000c2bf04b5efb56e85440f5cbfb25e40a122
Port:
Host Port:
Command:
/bin/sh
-c
Args:
rm -Rf /root/certs/otel_collector && mkdir /root/certs/otel_collector && /root/certgen/entrypoint_tls.sh otel_collector /root/certs/otel_collector /root/CA && chmod 644 /root/certs/otel_collector/cert.pem /root/certs/otel_collector/key.pem && rm -Rf /root/certs/otel-collector.telemetry.svc && rm -rf /root/ca && mkdir /root/certs/otel-collector.telemetry.svc && /root/certgen/entrypoint_tls.sh otel-collector.telemetry.svc /root/certs/otel-collector.telemetry.svc /root/CA && chmod 644 /root/certs/otel-collector.telemetry.svc/cert.pem /root/certs/otel-collector.telemetry.svc/key.pem
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 16 Jul 2020 14:04:25 +0200
Finished: Thu, 16 Jul 2020 14:04:26 +0200
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 128Mi
Requests:
cpu: 100m
memory: 128Mi
Environment:
Mounts:
/root/CA from ca (rw)
/root/certgen from certgen (rw)
/root/certs from cert-vol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5vv4m (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
cert-vol:
Type: HostPath (bare host directory volume)
Path: /etc/openness/certs/telemetry
HostPathType: DirectoryOrCreate
ca:
Type: Secret (a volume populated by a Secret)
SecretName: root-ca
Optional: false
certgen:
Type: Secret (a volume populated by a Secret)
SecretName: certgen
Optional: false
default-token-5vv4m:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5vv4m
Optional: false
QoS Class: Guaranteed
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Warning FailedScheduling default-scheduler 0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
Warning FailedScheduling default-scheduler 0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
Normal Scheduled default-scheduler Successfully assigned telemetry/telemetry-collector-certs-hrf96 to node01
Normal Pulling 29m kubelet, node01 Pulling image "emberstack/openssl:latest"
Normal Pulled 28m kubelet, node01 Successfully pulled image "emberstack/openssl:latest"
Normal Created 28m kubelet, node01 Created container openssl
Normal Started 28m kubelet, node01 Started container openssl
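Pods in Completed state that are owned by Jobs or CronJobs (the describe output above shows `Controlled By: Job/telemetry-collector-certs`) normally mean the workload ran once and exited successfully. A sketch for confirming this, using the names taken from the output above:

```shell
# Sketch: confirm Job-owned pods finished successfully rather than failed.
# Job name and namespace are assumed from the describe output above.
kubectl get job telemetry-collector-certs -n telemetry \
  -o jsonpath='{.status.succeeded}'   # a value of 1 means the job completed

# The descheduler pods are owned by a CronJob and behave the same way:
kubectl get cronjob -n kube-system
```

If `.status.succeeded` is non-zero and `.status.failed` is empty, the Completed pods are expected leftovers, not a problem.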

Is there a way to switch back to the older stable version of OpenNESS 20.06

Hi,

  1. I understand that the current version of OpenNESS 20.06 (released on Jun 30; six commits to master since the release) is not stable and has build issues.
  2. I have tried OpenNESS v20.06 on Jul 29, 2020, when there were only 3 commits and I was able to deploy it successfully.
  3. Are the following three commits responsible for the issue?
  • Sep 01, 2020 - Update build tag for release
  • Sep 02, 2020 - Change the way of building and installing OVS-DPDK (#492)
  • Sep 02, 2020 - Merge pull request #52 from open-ness/openness_rel_2006_ovn_fix

Could you please let us know how we can switch back to the stable version of OpenNESS 20.06 (Jul 29, 2020)?
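One general way to pin a deployment to a known-good state is to clone the experience kits and check out a release tag or the last commit known to work. The tag name below is an assumption and the commit placeholder is hypothetical; the exact ref would need to be confirmed against the repository history:

```shell
# Sketch: pin the experience kits to a known-good ref before deploying.
# The tag below is an assumption -- verify it with `git tag` first.
git clone https://github.com/open-ness/openness-experience-kits.git
cd openness-experience-kits
git checkout openness-20.06
# ...or check out the last commit known to work (placeholder shown):
# git checkout <commit-sha-from-jul-29>
```

After checking out the ref, re-run the deploy script as usual.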

Below are the different ways that we tried to bring up OpenNESS:

  1. We removed the above three commits and tried to install OpenNESS 20.06 but got the below error.
TASK [kubernetes/cni/kubeovn/master : wait for running ovs-ovn & ovn-central pods] *******************************************************************
task path: /home/sysadmin/Downloads/openness-experience-kits-master/roles/kubernetes/cni/kubeovn/master/tasks/main.yml:149
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (30 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (29 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (28 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (27 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (26 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (25 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (24 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (23 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (22 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (21 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (20 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (19 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (18 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (17 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (16 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (15 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (14 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (13 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (12 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (11 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (10 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (9 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (8 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (7 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (6 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (5 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (4 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (3 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (2 retries left).
FAILED - RETRYING: wait for running ovs-ovn & ovn-central pods (1 retries left).
fatal: [controller]: FAILED! => {
"attempts": 30,
"changed": false,
"cmd": "set -o pipefail && kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name,STATUS:.status.phase --no-headers --field-selector spec.nodeName=controller | grep -E "ovs-ovn|ovn-central"\n",
"delta": "0:00:00.071730",
"end": "2020-09-07 17:51:59.310148",
"rc": 0,
"start": "2020-09-07 17:51:59.238418"
}

STDOUT:

ovn-central-74986486f9-5vc4t Pending
ovs-ovn-h7r99 Running

TASK [kubernetes/cni/kubeovn/master : events of ovs-ovn & ovn-central pods] **************************************************************************
task path: /home/sysadmin/Downloads/openness-experience-kits-master/roles/kubernetes/cni/kubeovn/master/tasks/main.yml:169
ok: [controller] => (item=ovs-ovn) => {
"ansible_loop_var": "item",
"changed": false,
"cmd": "set -o pipefail && kubectl describe pod -n kube-system $(kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name | grep ovs-ovn) | sed -n '/^Events:/,//p'\n",
"delta": "0:00:00.163775",
"end": "2020-09-07 17:51:59.641209",
"item": "ovs-ovn",
"rc": 0,
"start": "2020-09-07 17:51:59.477434"
}

STDOUT:

Events:
Type Reason Age From Message

[OpenNESS 2003] Controller's IP address becomes invalid after deploying the controller and edge nodes in the OpenNESS network edge mode using kubeovn

I am trying to deploy the OpenNESS controller and edge nodes in network edge mode using kubeovn as the CNI. After both deploy successfully, the controller's IP address and the Kubernetes API server are no longer reachable. I suspect this is due to OVS-DPDK's default settings; I presume there should be an option to assign the desired physical network interface to DPDK. Please help me resolve this.

Controller/Edge Nodes: CentOS Linux release 7.6.1810 (Core), Kernel version: 3.10.0-957.el7.x86_64

Status of pods on the Controller after deploying the OVS-DPDK and nfd-worker on the edge node

Not able to login Controller GUI on-premises openness edge controller

After installing the on-premises edge controller, I am able to access the GUI but I am not able to log in.

The browser shows the error "Login Failed Try again Later".

I am using "admin" as the default username and "pass" as the password, as mentioned in the group_vars/all.yaml file.

Please help me with this.

Edge Node Deployment with ovs-dpdk failing

I have been attempting a minimal network edge deployment using the openness experience kit.
I have modified it to use ovncni rather than nts.
I have also disabled any customisations for the controller and edge nodes (e.g. no real-time kernel), since I just want to verify end-to-end connectivity from my core network through an edge node to an edge client behind it.

Edge Node deployment fails with the following error
Error building ovs-dpdk - code: None, message: COPY failed: stat /var/lib/docker/tmp/docker-builder687389694/ovs-healthcheck.sh: no such file or directory

In attempting to debug the ovs docker issue I have tried to build the docker image directly/manually on the target edge node (I find the ansible logs nearly impossible to read, let alone debug)

It also gave the same outcome:

[root@mec-n86 dpdk-18.11.2]# cd /opt/dpdk-18-112.2
[root@mec-n86 dpdk-18.11.2]# docker build -f Dockerfile.dpdk -t ovs-dpdk .
        :                                :                                   :
Step 14/20 : RUN rpm -ivh ~/ovs-${OVS_VERSION}-${OVS_SUBVERSION}/rpm/rpmbuild/RPMS/x86_64/openvswitch-${OVS_VERSION}-${OVS_SUBVERSION}.el7.x86_64.rpm &&     rpm -ivh ~/ovs-${OVS_VERSION}-${OVS_SUBVERSION}/rpm/rpmbuild/RPMS/x86_64/openvswitch-devel-${OVS_VERSION}-${OVS_SUBVERSION}.el7.x86_64.rpm &&     rpm -ivh https://github.com/alauda/ovs/releases/download/${OVS_VERSION}-${OVS_SUBVERSION}/ovn-${OVS_VERSION}-${OVS_SUBVERSION}.el7.x86_64.rpm &&     rpm -ivh https://github.com/alauda/ovs/releases/download/${OVS_VERSION}-${OVS_SUBVERSION}/ovn-vtep-${OVS_VERSION}-${OVS_SUBVERSION}.el7.x86_64.rpm &&     rpm -ivh https://github.com/alauda/ovs/releases/download/${OVS_VERSION}-${OVS_SUBVERSION}/ovn-central-${OVS_VERSION}-${OVS_SUBVERSION}.el7.x86_64.rpm &&     rpm -ivh https://github.com/alauda/ovs/releases/download/${OVS_VERSION}-${OVS_SUBVERSION}/ovn-host-${OVS_VERSION}-${OVS_SUBVERSION}.el7.x86_64.rpm
---> Running in f9d673a8a752
Preparing...                          ########################################
Updating / installing...
openvswitch-2.12.0-4.el7              ########################################
Preparing...                          ########################################
Updating / installing...
openvswitch-devel-2.12.0-4.el7        ########################################
Retrieving https://github.com/alauda/ovs/releases/download/2.12.0-4/ovn-2.12.0-4.el7.x86_64.rpm
Preparing...                          ########################################
Updating / installing...
ovn-2.12.0-4.el7                      ########################################
Retrieving https://github.com/alauda/ovs/releases/download/2.12.0-4/ovn-vtep-2.12.0-4.el7.x86_64.rpm
Preparing...                          ########################################
Failed to get D-Bus connection: Operation not permitted
Updating / installing...
ovn-vtep-2.12.0-4.el7                 ########################################
Retrieving https://github.com/alauda/ovs/releases/download/2.12.0-4/ovn-central-2.12.0-4.el7.x86_64.rpm
Preparing...                          ########################################
Updating / installing...
ovn-central-2.12.0-4.el7              ###############Failed to get D-Bus connection: Operation not permitted
#########################
Retrieving https://github.com/alauda/ovs/releases/download/2.12.0-4/ovn-host-2.12.0-4.el7.x86_64.rpm
Preparing...                          ########################################
Failed to get D-Bus connection: Operation not permitted
Updating / installing...
ovn-host-2.12.0-4.el7                 ########################################
Removing intermediate container f9d673a8a752
---> dd56bd92b4cf
Step 15/20 : RUN mkdir -p /var/run/openvswitch &&     mkdir -p /etc/cni/net.d &&     mkdir -p /opt/cni/bin
---> Running in 5b9f42f93e73
Removing intermediate container 5b9f42f93e73
---> f3b6f97bbf66
Step 16/20 : COPY ovs-healthcheck.sh /root/ovs-healthcheck.sh
COPY failed: stat /var/lib/docker/tmp/docker-builder826022717/ovs-healthcheck.sh: no such file or directory
[root@mec-n86 dpdk-18.11.2]#

Digging a little deeper, it now seems probable that the .dockerignore file is misconfigured.
As a crude workaround I manually modified it and added entries for the two files causing issues (the wildcard entry masks two required files):

# Add everything to the ignored
*
# Add following to whitelist:
!lib
!drivers
!x86_64-native-linuxapp-gcc
!configure_ovn_net.sh
!start_ovs_ovn.sh
#2020_05_25 higginse debugging build fail
!ovs-healthcheck.sh
!start-ovs-dpdk.sh

with these changes, the manual build succeeds.
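A quick way to verify a workaround like this, assuming a Docker version that accepts a Dockerfile on stdin (17.05+), is to build a throwaway image that simply copies and lists the build context, so you can see exactly which files the .dockerignore whitelist admits:

```shell
# Sketch: list exactly what enters the Docker build context, to confirm the
# .dockerignore whitelist now admits the two scripts.
docker build -t ctx-check -f - . <<'EOF'
FROM busybox
COPY . /ctx
RUN find /ctx -maxdepth 1
EOF
```

The `find` output during the build should now include ovs-healthcheck.sh and start-ovs-dpdk.sh.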
Then to 'patch' the experience kit config temporarily:

  1. On the openness experience kit host I modified roles/kubernetes/cni/kubeovn/common/defaults/main.yml to also expect a (local) .dockerignore file
------------8<---    roles/kubernetes/cni/kubeovn/common/defaults/main.yml   ----8<----
kubeovn_download_files:
- "{{ kubeovn_raw_file_repo }}/{{ kubeovn_version }}/dist/images/Dockerfile.node"
- "{{ kubeovn_raw_file_repo }}/{{ kubeovn_version }}/dist/images/start-ovs.sh"
- "{{ kubeovn_raw_file_repo }}/{{ kubeovn_version }}/dist/images/ovs-healthcheck.sh"
- file:///opt/openness/ehiggins/.dockerignore

kubeovn_dockerimage_files_to_cp:
- Dockerfile.dpdk
- start-ovs-dpdk.sh
- ovs-healthcheck.sh
- .dockerignore
------------8<-------------------------------------------------------------------8<----
  2. Next, I manually modified the .dockerignore as previously mentioned to whitelist ovs-healthcheck.sh and start-ovs-dpdk.sh
  3. I (again manually) copied the modified .dockerignore file to my target edge host (at the path I chose above - /opt/openness/ehiggins/)
  4. Finally I re-ran the deploy_ne.sh script with the nodes argument.

This time the script ran to completion.

Error while installing telemetry/tas

When installing in single node cluster, it throws the error on task to build TAS

TASK [telemetry/tas : build TAS] *************************************************************************************************
task path: /root/openness-experience-kits/roles/telemetry/tas/tasks/main.yml:154
fatal: [controller]: FAILED! => {
"changed": true,
"cmd": "source /etc/profile && make build",
"delta": "0:00:02.121611",
"end": "2020-07-02 00:04:40.110584",
"rc": 2,
"start": "2020-07-02 00:04:37.988973"
}

STDOUT:

CGO_ENABLED=0 GO111MODULE=on go build -ldflags="-s -w" -o ./bin/controller ./cmd/tas-policy-controller

STDERR:

go: finding modernc.org/golex v1.0.0
go: finding modernc.org/mathutil v1.0.0
go: finding modernc.org/xc v1.0.0
go: finding modernc.org/cc v1.0.0
go: finding modernc.org/strutil v1.0.0
go: modernc.org/[email protected]: git fetch -f https://gitlab.com/cznic/golex refs/heads/:refs/heads/ refs/tags/:refs/tags/ in /root/go/pkg/mod/cache/vcs/9aae2d4c6ee72eb1c6b65f7a51a0482327c927783dea53d4058803094c9d8039: exit status 128:
error: RPC failed; result=22, HTTP code = 404
fatal: The remote end hung up unexpectedly
go: modernc.org/[email protected]: git fetch -f https://gitlab.com/cznic/mathutil refs/heads/:refs/heads/ refs/tags/:refs/tags/ in /root/go/pkg/mod/cache/vcs/fb72eb2422fda47ac75ca695d44b06b82f3df3c5308e271486fca5e320879130: exit status 128:
error: RPC failed; result=22, HTTP code = 404
fatal: The remote end hung up unexpectedly
go: modernc.org/[email protected]: git fetch -f https://gitlab.com/cznic/xc refs/heads/:refs/heads/ refs/tags/:refs/tags/ in /root/go/pkg/mod/cache/vcs/29fc2f846f24ce3630fdd4abfc664927c4ad22f98a3589050facafa0991faada: exit status 128:
error: RPC failed; result=22, HTTP code = 404
fatal: The remote end hung up unexpectedly
go: modernc.org/[email protected]: git fetch -f https://gitlab.com/cznic/strutil refs/heads/:refs/heads/ refs/tags/:refs/tags/ in /root/go/pkg/mod/cache/vcs/f48599000415ab70c2f95dc7528c585820ed37ee15d27040a550487e83a41748: exit status 128:
error: RPC failed; result=22, HTTP code = 404
fatal: The remote end hung up unexpectedly
go: modernc.org/[email protected]: git fetch -f https://gitlab.com/cznic/cc refs/heads/:refs/heads/ refs/tags/:refs/tags/ in /root/go/pkg/mod/cache/vcs/3dac616a9d80602010c4792ef9c0e9d9812a1be8e70453e437e9792978075db6: exit status 128:
error: RPC failed; result=22, HTTP code = 404
fatal: The remote end hung up unexpectedly
go: error loading module requirements
make: *** [build] Error 1

MSG:

non-zero return code
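The modernc.org fetch failures above come from direct git fetches against gitlab.com/cznic returning HTTP 404. A commonly used mitigation, assuming the build host has outbound HTTPS and a Go toolchain that honors GOPROXY (1.12+), is to fetch modules through the public module proxy instead of direct VCS access:

```shell
# Sketch: route module downloads through the Go module proxy instead of
# cloning gitlab.com/cznic repositories directly.
export GO111MODULE=on
export GOPROXY=https://proxy.golang.org,direct
# or persist it (Go 1.13+):
# go env -w GOPROXY=https://proxy.golang.org,direct
make build
```

With the proxy in place, `go build` no longer needs the upstream git remotes to be reachable.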

RMD: Clients are not working with the new version of RMD

RMD 0.3 was released. It is not compatible with previous versions, and upgrading to it breaks old clients. Since OpenNESS uses the master branch, all new deployments of OpenNESS with RMD enabled break automatically, as the latest public RMD operator is not compatible. We are working on a fix.

Error while building OVS-DPDK image

Got an error while deploying openNESS on the controller machine.
Please help me resolve the issue. Thanks in advance.

How to reproduce the issue:

  1. Clone the latest version of openNESS with the command "git clone https://github.com/open-ness/openness-experience-kits/"
  2. Configure the inventory.ini file
  3. Run the deploy script with the command "sh deploy_ne.sh controller"

Error Message:

TASK [kubernetes/cni/kubeovn/common : build OVS-DPDK image (this may take some time...)] *************************************************************
task path: /home/sysadmin/Devika/openness-experience-kits/roles/kubernetes/cni/kubeovn/common/tasks/main.yml:89
fatal: [controller]: FAILED! => {
"changed": false
}

MSG:

Error building ovs-dpdk - code: 2, message: The command '/bin/sh -c cd ~ && curl -OL https://github.com/alauda/ovs/archive/$OVS_VERSION-$OVS_SUBVERSION.tar.gz && tar xf $OVS_VERSION-$OVS_SUBVERSION.tar.gz && rm -f $OVS_VERSION-$OVS_SUBVERSION.tar.gz && cd ovs-$OVS_VERSION-$OVS_SUBVERSION && sed -e 's/@Version@/0.0.1/' rhel/openvswitch-fedora.spec.in > /tmp/tmp_ovs.spec && yum-builddep -y /tmp/tmp_ovs.spec && ./boot.sh && ./configure --prefix=/usr/ --with-dpdk=$DPDK_BUILD && make -j$(nproc) && make rpm-fedora RPMBUILD_OPT="--with dpdk --without check" && make install' returned a non-zero code: 2, logs: [u'Step 1/18 : FROM centos:7', u'\n', u' ---> 7e6257c9f8d8\n', u'Step 2/18 : ENV PYTHONDONTWRITEBYTECODE yes', u'\n', u' ---> Running in d65667c98e92\n', u'Removing intermediate container d65667c98e92\n', u' ---> 843fe3be0b35\n', u'Step 3/18 : RUN yum install -y gcc gcc-c++ make autoconf automake libtool rpm-build PyYAML bind-utils openssl numactl-libs numactl-devel firewalld-filesystem libpcap hostname iproute strace socat nc unbound unbound-devel libpcap-devel libmnl-devel libibumad libibverbs-devel libibverbs libmlx5 libibverbs-utils dpdk-devel', u'\n', u' ---> Running in 3b40d57cf6f2\n', u'Loaded plugins: fastestmirror, ovl\n', u'Determining fastest mirrors\n', u' * base: mirror.myfahim.com\n', u' * extras: mirror.myfahim.com\n', u' * updates: mirror.myfahim.com\n', u'Package hostname-3.13-3.el7_7.1.x86_64 already installed and latest version\n', u'Resolving Dependencies\n', u'--> Running transaction check\n', u'---> Package PyYAML.x86_64 0:3.10-11.el7 will be installed\n', u'--> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-11.el7.x86_64\n', u'---> Package autoconf.noarch 0:2.69-11.el7 will be installed\n', u'--> Processing Dependency: perl >= 5.006 for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: m4 >= 1.4.14 for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(warnings) for package: 
autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(vars) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(strict) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(constant) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(Text::ParseWords) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(POSIX) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(IO::File) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(Getopt::Long) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(File::stat) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(File::Spec) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(File::Path) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(File::Find) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(File::Copy) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(File::Compare) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(File::Basename) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(Exporter) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(Errno) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(DynaLoader) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(Data::Dumper) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(Cwd) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(Class::Struct) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: perl(Carp) for package: autoconf-2.69-11.el7.noarch\n', u'--> Processing Dependency: /usr/bin/perl for package: 
autoconf-2.69-11.el7.noarch\n', u'---> Package automake.noarch 0:1.13.4-3.el7 will be installed\n', u'--> Processing Dependency: perl(threads) for package: automake-1.13.4-3.el7.noarch\n', u'--> Processing Dependency: perl(Thread::Queue) for package: automake-1.13.4-3.el7.noarch\n', u'--> Processing Dependency: perl(TAP::Parser) for package: automake-1.13.4-3.el7.noarch\n', u'---> Package bind-utils.x86_64 32:9.11.4-16.P2.el7_8.6 will be installed\n', u'--> Processing Dependency: bind-libs-lite(x86-64) = 32:9.11.4-16.P2.el7_8.6 for package: 32:bind-utils-9.11.4-16.P2.el7_8.6.x86_64\n', u'--> Processing Dependency: bind-libs(x86-64) = 32:9.11.4-16.P2.el7_8.6 for package: 32:bind-utils-9.11.4-16.P2.el7_8.6.x86_64\n', u'--> Processing Dependency: liblwres.so.160()(64bit) for package: 32:bind-utils-9.11.4-16.P2.el7_8.6.x86_64\n', u'--> Processing Dependency: libisccfg.so.160()(64bit) for package: 32:bind-utils-9.11.4-16.P2.el7_8.6.x86_64\n', u'--> Processing Dependency: libisc.so.169()(64bit) for package: 32:bind-utils-9.11.4-16.P2.el7_8.6.x86_64\n', u'--> Processing Dependency: libirs.so.160()(64bit) for package: 32:bind-utils-9.11.4-16.P2.el7_8.6.x86_64\n--> Processing Dependency: libdns.so.1102()(64bit) for package: 32:bind-utils-9.11.4-16.P2.el7_8.6.x86_64\n--> Processing Dependency: libbind9.so.160()(64bit) for package: 32:bind-utils-9.11.4-16.P2.el7_8.6.x86_64\n', u'--> Processing Dependency: libGeoIP.so.1()(64bit) for package: 32:bind-utils-9.11.4-16.P2.el7_8.6.x86_64\n', u'---> Package dpdk-devel.x86_64 0:18.11.8-1.el7_8 will be installed\n', u'--> Processing Dependency: dpdk(x86-64) = 18.11.8-1.el7_8 for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_vhost.so.4()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_ring.so.2()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_virtio.so.1()(64bit) for package: 
dpdk-devel-18.11.8-1.el7_8.x86_64\n--> Processing Dependency: librte_pmd_vhost.so.2()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_vdev_netvsc.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_tap.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_ring.so.2()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_qede.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_nfp.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_netvsc.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_mlx5.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_mlx4.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_ixgbe.so.2()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_i40e.so.2()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_failsafe.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_enic.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_e1000.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pmd_bnxt.so.2()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pdump.so.2()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_pci.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_net.so.1()(64bit) for package: 
dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_metrics.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_meter.so.2()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_mempool_stack.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_mempool_ring.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_mempool_bucket.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_mempool.so.5()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_member.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_mbuf.so.4()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_latencystats.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_kvargs.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_ip_frag.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_hash.so.2()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_gso.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_gro.so.1()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_ethdev.so.11()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_eal.so.9()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_cmdline.so.2()(64bit) for package: dpdk-devel-18.11.8-1.el7_8.x86_64\n', u'--> Processing Dependency: librte_bus_vmbus.so.2()(64bit) for package: 
dpdk-devel-18.11.8-1.el7_8.x86_64

[... yum dependency-resolution output trimmed ...]

--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package               Arch    Version                  Repository        Size
================================================================================
Installing:
 PyYAML                x86_64  3.10-11.el7              base             153 k
 autoconf              noarch  2.69-11.el7              base             701 k
 automake              noarch  1.13.4-3.el7             base             679 k
 bind-utils            x86_64  32:9.11.4-16.P2.el7_8.6  updates          259 k
 dpdk-devel            x86_64  18.11.8-1.el7_8          extras           347 k
 firewalld-filesystem  noarch  0.6.3-8.el7_8.1          updates           51 k
 gcc                   x86_64  4.8.5-39.el7             base              16 M
 gcc-c++               x86_64  4.8.5-39.el7             base             7.2 M
 iproute               x86_64  4.11.0-25.el7_7.2        base             803 k
 libibumad             x86_64  22.4-4.el7_8             updates           24 k
 libibverbs            x86_64  22.4-4.el7_8             updates          268 k
 libibverbs-utils      x86_64  22.4-4.el7_8             updates           61 k
 libmnl-devel          x86_64  1.0.3-7.el7              base              32 k
 libpcap               x86_64  14:1.5.3-12.el7          base             139 k
 libpcap-devel         x86_64  14:1.5.3-12.el7          base             118 k
 libtool               x86_64  2.4.2-22.el7_3           base             588 k
 make                  x86_64  1:3.82-24.el7            base             421 k
 nmap-ncat             x86_64  2:6.40-19.el7            base             206 k
 numactl-devel         x86_64  2.0.12-5.el7             base              24 k
 numactl-libs          x86_64  2.0.12-5.el7             base              30 k
 openssl               x86_64  1:1.0.2k-19.el7          base             493 k
 rdma-core-devel       x86_64  22.4-4.el7_8             updates          254 k
 rpm-build             x86_64  4.11.3-43.el7            base             149 k
 socat                 x86_64  1.7.3.2-2.el7            base             290 k
 strace                x86_64  4.24-4.el7               base             901 k
 unbound               x86_64  1.6.6-5.el7_8            updates          674 k
 unbound-devel         x86_64  1.6.6-5.el7_8            updates           45 k
Installing for dependencies:
 [... 86 dependent packages trimmed ...]

Transaction Summary
================================================================================
Install  27 Packages (+86 Dependent packages)

Total download size: 78 M
Installed size: 196 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/base/packages/automake-1.13.4-3.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for automake-1.13.4-3.el7.noarch.rpm is not installed
Public key for bind-libs-9.11.4-16.P2.el7_8.6.x86_64.rpm is not installed
Public key for dpdk-devel-18.11.8-1.el7_8.x86_64.rpm is not installed
http://mirror.xeonbd.com/centos/7.8.2003/os/x86_64/Packages/cpp-4.8.5-39.el7.x86_64.rpm: [Errno 12] Timeout: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://mirror.vanehost.com/centos/7.8.2003/extras/x86_64/Packages/dpdk-18.11.8-1.el7_8.x86_64.rpm: [Errno 12] Timeout: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://mirror.vanehost.com/centos/7.8.2003/os/x86_64/Packages/groff-base-1.22.2-8.el7.x86_64.rpm: [Errno 12] Timeout: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://mirror.dhakacom.com/centos/7.8.2003/os/x86_64/Packages/sysvinit-tools-2.88-14.dsf.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirror.dhakacom.com; Unknown error"
Trying other mirror.
http://mirror.vanehost.com/centos/7.8.2003/os/x86_64/Packages/GeoIP-1.5.0-14.el7.x86_64.rpm: [Errno 12] Timeout: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
--------------------------------------------------------------------------------
Total                                              573 kB/s |  78 MB  02:20
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) [email protected]"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-8.2003.0.el7.centos.x86_64 (@CentOS)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : mpfr-3.1.1-4.el7.x86_64                                    1/113
  [... per-package install progress trimmed; all 113 packages installed.
       Three non-fatal warnings were emitted:
       install-info: No such file or directory for /usr/share/info/autoconf.info
       install-info: No such file or directory for /usr/share/info/automake.info.gz
       install-info: No such file or directory for /usr/share/info/libtool.info.gz ...]
  Installing : firewalld-filesystem-0.6.3-8.el7_8.1.noarch              113/113
  Verifying  : perl-HTTP-Tiny-0.033-3.el7.noarch                          1/113
  [... per-package verification progress trimmed; all 113 packages verified ...]
  Verifying  : redhat-rpm-config-9.1.0-88.el7.centos.noarch             113/113

Installed:
  PyYAML.x86_64 0:3.10-11.el7
  autoconf.noarch 0:2.69-11.el7
  automake.noarch 0:1.13.4-3.el7
  bind-utils.x86_64 32:9.11.4-16.P2.el7_8.6
  dpdk-devel.x86_64 0:18.11.8-1.el7_8
  firewalld-filesystem.noarch 0:0.6.3-8.el7_8.1
  gcc.x86_64 0:4.8.5-39.el7
  gcc-c++.x86_64 0:4.8.5-39.el7
  iproute.x86_64 0:4.11.0-25.el7_7.2
  libibumad.x86_64 0:22.4-4.el7_8
  libibverbs.x86_64 0:22.4-4.el7_8
  libibverbs-utils.x86_64 0:22.4-4.el7_8
  libmnl-devel.x86_64 0:1.0.3-7.el7
\n libpcap.x86_64 14:1.5.3-12.el7 \n libpcap-devel.x86_64 14:1.5.3-12.el7 \n libtool.x86_64 0:2.4.2-22.el7_3 \n make.x86_64 1:3.82-24.el7 \n nmap-ncat.x86_64 2:6.40-19.el7 \n numactl-devel.x86_64 0:2.0.12-5.el7 \n numactl-libs.x86_64 0:2.0.12-5.el7 \n openssl.x86_64 1:1.0.2k-19.el7 \n rdma-core-devel.x86_64 0:22.4-4.el7_8 \n rpm-build.x86_64 0:4.11.3-43.el7 \n socat.x86_64 0:1.7.3.2-2.el7 \n strace.x86_64 0:4.24-4.el7 \n unbound.x86_64 0:1.6.6-5.el7_8 \n unbound-devel.x86_64 0:1.6.6-5.el7_8 \n\nDependency Installed:\n GeoIP.x86_64 0:1.5.0-14.el7 \n bind-libs.x86_64 32:9.11.4-16.P2.el7_8.6 \n bind-libs-lite.x86_64 32:9.11.4-16.P2.el7_8.6 \n bzip2.x86_64 0:1.0.6-13.el7 \n cpp.x86_64 0:4.8.5-39.el7 \n dpdk.x86_64 0:18.11.8-1.el7_8 \n dwz.x86_64 0:0.11-3.el7 \n elfutils.x86_64 0:0.176-4.el7 \n file.x86_64 0:5.11-36.el7 \n gdb.x86_64 0:7.6.1-119.el7 \n glibc-devel.x86_64 0:2.17-307.el7.1 \n glibc-headers.x86_64 0:2.17-307.el7.1 \n groff-base.x86_64 0:1.22.2-8.el7 \n hwdata.x86_64 0:0.252-9.5.el7 \n ibacm.x86_64 0:22.4-4.el7_8 \n initscripts.x86_64 0:9.49.49-1.el7 \n iptables.x86_64 0:1.4.21-34.el7 \n kernel-headers.x86_64 0:3.10.0-1127.19.1.el7 \n keyutils-libs-devel.x86_64 0:1.5.8-3.el7 \n krb5-devel.x86_64 0:1.15.1-46.el7 \n libcom_err-devel.x86_64 0:1.42.9-17.el7 \n libevent.x86_64 0:2.0.21-4.el7 \n libevent-devel.x86_64 0:2.0.21-4.el7 \n libgomp.x86_64 0:4.8.5-39.el7 \n libkadm5.x86_64 0:1.15.1-46.el7 \n libmnl.x86_64 0:1.0.3-7.el7 \n libmpc.x86_64 0:1.0.1-3.el7 \n libnetfilter_conntrack.x86_64 0:1.0.6-1.el7_3 \n libnfnetlink.x86_64 0:1.0.1-4.el7 \n libnl3.x86_64 0:3.2.28-4.el7 \n librdmacm.x86_64 0:22.4-4.el7_8 \n libselinux-devel.x86_64 0:2.5-15.el7 \n libsepol-devel.x86_64 0:2.5-10.el7 \n libstdc++-devel.x86_64 0:4.8.5-39.el7 \n libverto-devel.x86_64 0:0.2.5-4.el7 \n libyaml.x86_64 0:0.1.4-11.el7_0 \n m4.x86_64 0:1.4.16-10.el7 \n mpfr.x86_64 0:3.1.1-4.el7 \n openssl-devel.x86_64 1:1.0.2k-19.el7 \n patch.x86_64 0:2.7.1-12.el7_7 \n pciutils.x86_64 0:3.5.1-3.el7 \n 
pciutils-libs.x86_64 0:3.5.1-3.el7 \n pcre-devel.x86_64 0:8.32-17.el7 \n perl.x86_64 4:5.16.3-295.el7 \n perl-Carp.noarch 0:1.26-244.el7 \n perl-Data-Dumper.x86_64 0:2.145-3.el7 \n perl-Encode.x86_64 0:2.51-7.el7 \n perl-Exporter.noarch 0:5.68-3.el7 \n perl-File-Path.noarch 0:2.09-2.el7 \n perl-File-Temp.noarch 0:0.23.01-3.el7 \n perl-Filter.x86_64 0:1.49-3.el7 \n perl-Getopt-Long.noarch 0:2.40-3.el7 \n perl-HTTP-Tiny.noarch 0:0.033-3.el7 \n perl-PathTools.x86_64 0:3.40-5.el7 \n perl-Pod-Escapes.noarch 1:1.04-295.el7 \n perl-Pod-Perldoc.noarch 0:3.20-4.el7 \n perl-Pod-Simple.noarch 1:3.28-4.el7 \n perl-Pod-Usage.noarch 0:1.63-3.el7 \n perl-Scalar-List-Utils.x86_64 0:1.27-248.el7 \n perl-Socket.x86_64 0:2.010-5.el7 \n perl-Storable.x86_64 0:2.45-3.el7 \n perl-Test-Harness.noarch 0:3.28-3.el7 \n perl-Text-ParseWords.noarch 0:3.29-4.el7 \n perl-Thread-Queue.noarch 0:3.02-2.el7 \n perl-Time-HiRes.x86_64 4:1.9725-3.el7 \n perl-Time-Local.noarch 0:1.2300-2.el7 \n perl-constant.noarch 0:1.27-2.el7 \n perl-libs.x86_64 4:5.16.3-295.el7 \n perl-macros.x86_64 4:5.16.3-295.el7 \n perl-parent.noarch 1:0.225-244.el7 \n perl-podlators.noarch 0:2.5.1-3.el7 \n perl-srpm-macros.noarch 0:1-8.el7 \n perl-threads.x86_64 0:1.87-4.el7 \n perl-threads-shared.x86_64 0:1.43-6.el7 \n python-devel.x86_64 0:2.7.5-88.el7 \n python-rpm-macros.noarch 0:3-32.el7 \n python-srpm-macros.noarch 0:3-32.el7 \n python2-rpm-macros.noarch 0:3-32.el7 \n rdma-core.x86_64 0:22.4-4.el7_8 \n redhat-rpm-config.noarch 0:9.1.0-88.el7.centos \n sysvinit-tools.x86_64 0:2.88-14.dsf.el7 \n tcp_wrappers-libs.x86_64 0:7.6-77.el7 \n unbound-libs.x86_64 0:1.6.6-5.el7_8 \n unzip.x86_64 0:6.0-21.el7 \n zip.x86_64 0:3.0-11.el7 \n zlib-devel.x86_64 0:1.2.7-18.el7 \n\n', u'Complete!\n', u'Removing intermediate container 3b40d57cf6f2\n', u' ---> 43dd91a8fe63\n', u'Step 4/18 : ENV OVS_VERSION=2.12.0', u'\n', u' ---> Running in dad5ff08a408\n', u'Removing intermediate container dad5ff08a408\n', u' ---> 77ee3b335675\n', u'Step 
5/18 : ENV OVS_SUBVERSION=5', u'\n', u' ---> Running in a8c006b0a372\n', u'Removing intermediate container a8c006b0a372\n', u' ---> 33b4ed89d8fa\n', u'Step 6/18 : ENV DPDK_VERSION=18.11.6', u'\n', u' ---> Running in 74b2becec6bd\n', u'Removing intermediate container 74b2becec6bd\n', u' ---> 8f82113d72f4\n', u'Step 7/18 : ENV DPDK_DIR=/opt/dpdk-$DPDK_VERSION', u'\n', u' ---> Running in 3bdb2d9fef6f\n', u'Removing intermediate container 3bdb2d9fef6f\n', u' ---> a10e2408c332\n', u'Step 8/18 : ENV DPDK_TARGET=x86_64-native-linuxapp-gcc', u'\n', u' ---> Running in dd7c3b902d68\n', u'Removing intermediate container dd7c3b902d68\n', u' ---> 29db1f836555\n', u'Step 9/18 : ENV DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET', u'\n', u' ---> Running in eaf336d21cb7\n', u'Removing intermediate container eaf336d21cb7\n', u' ---> c349c1c69729\n', u'Step 10/18 : COPY . $DPDK_DIR', u'\n', u' ---> 853e34067c33\n', u'Step 11/18 : RUN cd ~ && curl -OL https://github.com/alauda/ovs/archive/$OVS_VERSION-$OVS_SUBVERSION.tar.gz && tar xf $OVS_VERSION-$OVS_SUBVERSION.tar.gz && rm -f $OVS_VERSION-$OVS_SUBVERSION.tar.gz && cd ovs-$OVS_VERSION-$OVS_SUBVERSION && sed -e 's/@Version@/0.0.1/' rhel/openvswitch-fedora.spec.in > /tmp/tmp_ovs.spec && yum-builddep -y /tmp/tmp_ovs.spec && ./boot.sh && ./configure --prefix=/usr/ --with-dpdk=$DPDK_BUILD && make -j$(nproc) && make rpm-fedora RPMBUILD_OPT="--with dpdk --without check" && make install', u'\n', u' ---> Running in 3db6e1039289\n', u'\x1b[91m % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\x1b[0m', u'\x1b[91m\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- \x1b[0m', u'\x1b[91m 0\x1b[0m', u'\x1b[91m\r100 120 100 120 0 0 144 0 --:--:-- --:--:-- --:--:-- 14\x1b[0m', u'\x1b[91m4\n\x1b[0m', u'\x1b[91m\r100 14 100 14 0 0 10 0 0:00:01 0:00:01 --:--:-- 10\n\x1b[0m', u'\x1b[91mtar: This does not look like a tar archive\n\x1b[0m', u'\x1b[91m\ngzip: stdin: not in 
gzip format\n\x1b[0m', u'\x1b[91mtar: Child returned status 1\ntar: Error is not recoverable: exiting now\n\x1b[0m', u'Removing intermediate container 3db6e1039289\n']

Build issue with OpenNESS 20.06 when using kube-ovn

The https://github.com/alauda/ovs project recently removed all of its previously published RPM release assets. This impacts the 20.06 OEK when kube-ovn is selected as the CNI. We are working with alauda, who have confirmed the assets were deleted accidentally; 4 of the 6 RPMs have been restored, but the two below are still missing. We are working on a fix with the team.

https://github.com/alauda/ovs/releases/download/2.12.0-5/openvswitch-2.12.0-5.el7.x86_64.rpm
https://github.com/alauda/ovs/releases/download/2.12.0-5/openvswitch-devel-2.12.0-5.el7.x86_64.rpm

Please follow the progress here - #48
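Until the missing assets are restored, the failure mode above is worth guarding against: `curl -OL` saves whatever body GitHub returns (here only 120 and then 14 bytes), and `tar` then dies with "This does not look like a tar archive". A minimal sketch that checks the gzip magic bytes before extracting — the file names below are hypothetical stand-ins for the release tarball:

```shell
# Sketch: verify a download really is a gzip archive before extracting it.
# `curl -OL` saves whatever the server returns, including a short error body,
# which later fails with "tar: This does not look like a tar archive".
is_gzip() {
    # gzip files start with the magic bytes 1f 8b
    [ "$(head -c 2 "$1" | od -An -tx1 | tr -d ' ')" = "1f8b" ]
}

# hypothetical demo files standing in for the GitHub release tarball
printf 'Not Found' > /tmp/fake.tar.gz        # what a deleted asset yields
printf '\037\213\010' > /tmp/real.tar.gz     # a genuine gzip header

for f in /tmp/fake.tar.gz /tmp/real.tar.gz; do
    if is_gzip "$f"; then echo "$f: looks like gzip"; else echo "$f: NOT gzip, aborting"; fi
done
```

Checking the magic bytes (or simply using `curl -f`) turns the cryptic tar error into an immediate, explicit download failure.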

Error while deploying OpenNESS 20.03

Hi Team,

I am trying to deploy OpenNESS 20.03 on CentOS VMs, but while running the controller deployment script I got the error below.
vm_3

Can anyone please advise how to proceed on this error?
Thanks & Regards,
Devika

Downloading of packages on edge node failed

While deploying the edge node in On-Premises mode, I am getting the following error:

task path: /root/openness-experience-kits/roles/machine_setup/configure_tuned/tasks/configure_tuned.yml:6
fatal: [edgenode]: FAILED! => {
"changed": false
}

MSG:

Failure downloading http://linuxsoft.cern.ch/scientific/7x/x86_64/updates/fastbugs/tuned-profiles-realtime-2.11.0-5.el7_7.1.noarch.rpm, HTTP Error 404: Not Found

The URL "http://linuxsoft.cern.ch/scientific/7x/x86_64/updates/fastbugs/tuned-profiles-realtime-2.11.0-5.el7_7.1.noarch.rpm" is no longer reachable.

Can you please provide a way of fixing this issue, or an alternative URL for the same package?
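One way around a dead mirror is to probe alternatives for the same RPM before re-running the playbook. The sketch below is an illustration only — the second mirror in the list is a hypothetical candidate, not a verified source for this package:

```shell
# Probe candidate mirrors for the missing RPM (sketch; mirror list is illustrative)
rpm=tuned-profiles-realtime-2.11.0-5.el7_7.1.noarch.rpm
for base in \
    http://linuxsoft.cern.ch/scientific/7x/x86_64/updates/fastbugs \
    http://ftp.scientificlinux.org/linux/scientific/7x/x86_64/updates/fastbugs
do
    # HEAD request; print 000 if curl is unavailable or the host is unreachable
    code=$(curl -s -o /dev/null -I -w '%{http_code}' "$base/$rpm" 2>/dev/null)
    echo "$base -> HTTP ${code:-000}"
done
```

Any mirror answering 200 can then be substituted for the broken URL in the role's package list.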

Error in Downloading OVN tools Task

It looks like the following links to download OVN and Open vSwitch are no longer available:
https://github.com/alauda/ovs/releases/download/2.12.0-5/openvswitch-2.12.0-5.el7.x86_64.rpm
https://github.com/alauda/ovs/releases/download/2.12.0-5/ovn-2.12.0-5.el7.x86_64.rpm

Here is the error log from the OpenNESS Experience Kit:

TASK [kubernetes/cni/kubeovn/master : download OVN tools] **************************************************************************************************
task path: /root/netowrking_export28.08/esb_modules/OpenNESS/openness-experience-kits/roles/kubernetes/cni/kubeovn/master/tasks/main.yml:26
FAILED - RETRYING: download OVN tools (10 retries left).
FAILED - RETRYING: download OVN tools (9 retries left).
FAILED - RETRYING: download OVN tools (8 retries left).
FAILED - RETRYING: download OVN tools (7 retries left).
FAILED - RETRYING: download OVN tools (6 retries left).
FAILED - RETRYING: download OVN tools (5 retries left).
FAILED - RETRYING: download OVN tools (4 retries left).
FAILED - RETRYING: download OVN tools (3 retries left).
FAILED - RETRYING: download OVN tools (2 retries left).
FAILED - RETRYING: download OVN tools (1 retries left).
fatal: [controller]: FAILED! => {
"attempts": 10,
"changed": true,
"cmd": [
"yum",
"install",
"--downloadonly",
"-y",
"https://github.com/alauda/ovs/releases/download/2.12.0-5/openvswitch-2.12.0-5.el7.x86_64.rpm",
"https://github.com/alauda/ovs/releases/download/2.12.0-5/ovn-2.12.0-5.el7.x86_64.rpm"
],
"delta": "0:00:00.493374",
"end": "2020-08-31 07:08:27.686831",
"rc": 1,
"start": "2020-08-31 07:08:27.193457"
}

STDERR:

Cannot open: https://github.com/alauda/ovs/releases/download/2.12.0-5/openvswitch-2.12.0-5.el7.x86_64.rpm. Skipping.
Cannot open: https://github.com/alauda/ovs/releases/download/2.12.0-5/ovn-2.12.0-5.el7.x86_64.rpm. Skipping.
Error: Nothing to do
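As a stopgap while the assets are gone, the RPMs can be fetched explicitly with `curl -f`, which fails on an HTTP error instead of saving the error page, and the local files can then be handed to the same `yum install --downloadonly` step. The demo below uses local `file://` stand-ins for the GitHub release URLs so both the success and the failure path are visible without the network:

```shell
# fetch_rpm: download with -f so an HTTP error fails loudly instead of
# leaving a bogus file behind (sketch; all paths are illustrative)
fetch_rpm() {
    if curl -sfLo "$2" "$1"; then
        echo "fetched $2"
    else
        echo "FAILED $1" >&2
        return 1
    fi
}

# demo with file:// stand-ins for the GitHub release URLs
echo dummy > /tmp/openvswitch.rpm
fetch_rpm file:///tmp/openvswitch.rpm /tmp/dl-ovs.rpm
fetch_rpm file:///tmp/no-such.rpm     /tmp/dl-ovn.rpm || echo "asset still missing"

# once both downloads succeed, the playbook's step becomes:
#   yum install --downloadonly -y /tmp/dl-ovs.rpm /tmp/dl-ovn.rpm
```

The `-f` flag is the important part: without it, curl saves GitHub's error body and the failure only surfaces later, inside yum or tar.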

Error msg in deploying controller with Ansible script

I have come across this error a couple of times. I have tried different things based on various user forums, but I'm not sure why this is occurring.

go: modernc.org/[email protected]: git fetch -f https://gitlab.com/cznic/strutil refs/heads/:refs/heads/ refs/tags/:refs/tags/ in /root/go/pkg/mod/cache/vcs/f48599000415ab70c2f95dc7528c585820ed37ee15d27040a550487e83a41748: exit status 128:
error: RPC failed; result=22, HTTP code = 404
fatal: The remote end hung up unexpectedly

I have a VM with the OpenNESS Experience Kit installed, and I am installing the OpenNESS controller on the same VM.

Error when deploying edge node

I got the following when running the playbook (./deploy_ne.sh single):

go: modernc.org/[email protected]: git fetch -f origin refs/heads/:refs/heads/ refs/tags/:refs/tags/ in /root/go/pkg/mod/cache/vcs/f48599000415ab70c2f95dc7528c585820ed37ee15d27040a550487e83a41748: exit status 128:
error: RPC failed; result=22, HTTP code = 404
fatal: The remote end hung up unexpectedly

It seems to be related to a bug in Gitlab based on this link: https://gitlab.com/gitlab-org/gitlab-foss/-/issues/63512
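Both this report and the controller one above fail inside the same cached clone under /root/go/pkg/mod/cache/vcs. A workaround sketch that has helped with similar gitlab.com fetch errors (it assumes Go 1.13 or newer, which honors GOPROXY): remove the poisoned cache entry and route module downloads through the public Go module proxy, so `go build` no longer does a direct `git fetch` against gitlab.com:

```shell
# Route Go module fetches through the public module proxy instead of a
# direct `git fetch` against gitlab.com (workaround sketch, Go >= 1.13)
export GOPROXY="https://proxy.golang.org,direct"

# drop the half-fetched clone that keeps poisoning retries
vcs_cache="${GOPATH:-$HOME/go}/pkg/mod/cache/vcs"
rm -rf "$vcs_cache/f48599000415ab70c2f95dc7528c585820ed37ee15d27040a550487e83a41748"

echo "GOPROXY=$GOPROXY"
```

After this, re-running the deployment script should fetch modernc.org/strutil from the proxy's cache rather than from the broken GitLab endpoint.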

CrashLoopBackOff error for openvino-cons-app pod

Hi,
I am trying to run the OpenVINO application. I could run the producer pod, but the consumer pod fails. The output of the kubectl describe command is shown below. While I debug this, please let me know if this is a known issue.

[root@controller consumer]# kubectl describe pods openvino-cons-app

Name: openvino-cons-app
Namespace: default
Priority: 0
Node: node01/146.0.237.30
Start Time: Thu, 23 Jul 2020 18:46:51 +0200
Labels: name=openvino-cons-app
Annotations: ovn.kubernetes.io/allocated: true
ovn.kubernetes.io/cidr: 10.16.0.0/16
ovn.kubernetes.io/gateway: 10.16.0.1
ovn.kubernetes.io/ip_address: 10.16.0.9
ovn.kubernetes.io/logical_switch: ovn-default
ovn.kubernetes.io/mac_address: 0e:4f:1d:10:00:0a
Status: Running
IP: 10.16.0.9
IPs:
IP: 10.16.0.9
Containers:
openvino-cons-app:
Container ID: docker://f2b1d86372c70dbb939453176cc84051a9ca005e6ad3d1cc114245219026ff84
Image: openvino-cons-app:1.0
Image ID: docker://sha256:6cf9eefe7638b01a1700d99c3cd0cfe90539cf359dd09f6f8dfe7ad00aafa11a
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 24 Jul 2020 17:28:10 +0200
Finished: Fri, 24 Jul 2020 17:28:11 +0200
Ready: False
Restart Count: 266
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xqj7r (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-xqj7r:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xqj7r
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type     Reason    Age                   From             Message
----     ------    ----                  ----             -------
Normal Created 27m (x262 over 22h) kubelet, node01 Created container openvino-cons-app
Normal Pulled 22m (x263 over 22h) kubelet, node01 Container image "openvino-cons-app:1.0" already present on machine
Warning BackOff 2m31s (x6259 over 22h) kubelet, node01 Back-off restarting failed container
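Since the container exits with code 1 about a second after starting, it is crashing on its own rather than being evicted, and the previous run's log usually names the cause. A small debugging sketch — the `KUBECTL="echo kubectl"` stub makes this a dry run that only prints the commands it would issue; on a real controller set `KUBECTL=kubectl` instead:

```shell
# Collect the evidence kubelet already has about the crashing container
debug_pod() {
    pod=$1
    $KUBECTL logs "$pod" --previous                              # output of the crashed run
    $KUBECTL get events --field-selector "involvedObject.name=$pod"
}

KUBECTL="echo kubectl"   # stub for the dry run; use KUBECTL=kubectl on the cluster
debug_pod openvino-cons-app
```

`kubectl logs --previous` is the key command here: with a restart count of 266, the live container is usually gone before `kubectl logs` alone can reach it.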
