
multus-dynamic-networks-controller's People

Contributors

0xfelix, dependabot[bot], kmabda, lioneljouin, maiqueb, oshoval, phoracek


multus-dynamic-networks-controller's Issues

Add `crio` support

Mount the crio socket into the controller pods, and read the netns of the mutated pod via the crio API.

[BUG] Missing network-status annotation

Describe the bug

The k8s.v1.cni.cncf.io/network-status annotation is missing the newly added entry when network attachment elements are added and removed at the same time.

The reconcile loop first adds the new network attachment, then updates the k8s.v1.cni.cncf.io/network-status annotation based on the pod object fetched at the beginning of the reconcile loop. After that, it deletes the no-longer-existing network attachment and again updates the k8s.v1.cni.cncf.io/network-status annotation based on that same pod object fetched at the beginning of the loop (so without the newly added network status).

Expected behavior

To solve the problem, I see 2 solutions:

  1. After each pod network status annotation update, the controller should get the pod again (which will contain the newly added network status).
  2. Multus should handle the annotation by itself instead of leaving the responsibility to the caller.
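
A minimal sketch of option 1, assuming a client-go clientset is available to the controller (the helper name is illustrative, not the controller's actual code):

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// refreshPod re-reads the pod from the API server so the next network-status
// update starts from an annotation that already contains the attachment added
// (or removed) in the previous step.
func refreshPod(ctx context.Context, client kubernetes.Interface, namespace, name string) (*corev1.Pod, error) {
	return client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
}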

To Reproduce
Steps to reproduce the behavior:

  1. Have a running pod with a network attachment (so the annotations k8s.v1.cni.cncf.io/networks + k8s.v1.cni.cncf.io/network-status), let's say interface A.
  2. Delete the element for interface A in k8s.v1.cni.cncf.io/networks and add a new, different one, let's say interface B.
  3. Check the network status annotation. There will be no status for interface B.

Environment:

  • multus-dynamic-networks-controller version: latest-amd64
  • Kubernetes version (use kubectl version): N/A
  • Network-attachment-definition: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Controller configuration (criSocketPath / multusSocketPath): N/A
  • Kernel (e.g. uname -a): N/A
  • Others: N/A

Additional info / context
/

Interface name is now mandatory for dynamic network attachments

In Multus, a network interface name in a pod is optional: if the user doesn't provide an interface name in the pod's network annotation, the interface will be named net1, net2, etc.

To be able to hot-plug/unplug a network interface in a running pod, the user must provide an interface name (as it should be) in the network-selection-elements.

If a pod is configured with a Multus network annotation without an interface name, it's not possible to unplug an interface from the running pod, unless one first adds an interface name to the desired network-selection-element (which would fail since the interface is already plugged) and then unplugs it (not a good user experience).

see also #48

To Reproduce
Steps to reproduce the behavior:

  1. use the example in the README.md https://github.com/maiqueb/multus-dynamic-networks-controller#adding--removing-network-interfaces
  2. remove the only network annotation element present
  3. the request would be rejected since the selected network element doesn't have an interface name.

Environment:

  • multus-dynamic-networks-controller version: N/A
  • Kubernetes version (use kubectl version): N/A
  • Network-attachment-definition: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Controller configuration (criSocketPath / multusSocketPath): N/A
  • Kernel (e.g. uname -a): N/A
  • Others: N/A

Additional info / context
Not sure how this should be resolved, since we can't change the current Multus behaviour of having the interface name optional, but thought this should be tracked somehow!

[RFE] refactor the pod controller as level driven

Is your feature request related to a problem? Please describe.
Currently, if the user edits the pod's network-selection-elements and the controller fails to apply the new configuration,
it will retry up to 2 times (hard-coded); if those retries fail, that's pretty much it.

Nevertheless, the network-selection-elements for the pod are already updated, and follow-up attempts to plug
a new interface into the pod will be confusing, since for all the controller knows, the previously attempted hotplug succeeded.

This is bad.

Describe the solution you'd like
The controller should be level driven: we should compute the difference between the "desired state" - i.e. the network-selection-elements - and the current state - i.e. the network-status annotation on the pod.

For every retry operation, we should check which interfaces present in the desired state are not reflected in the current state, and attempt to apply those.
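
A minimal sketch of that diff, using simplified stand-in structs rather than the actual k8s.cni.cncf.io API types (field names are illustrative):

package sketch

// Simplified stand-ins for a network-selection-element and a network-status entry.
type selectionElement struct {
	Name      string // net-attach-def name
	Namespace string
	Interface string // requested pod interface name
}

type networkStatus struct {
	Name      string // "<namespace>/<net-attach-def name>"
	Interface string
}

// attachmentsToAdd returns the desired attachments not yet reflected in the
// pod's current network-status; on every retry only these are (re)applied.
func attachmentsToAdd(desired []selectionElement, current []networkStatus) []selectionElement {
	present := map[string]bool{}
	for _, status := range current {
		present[status.Name+"/"+status.Interface] = true
	}
	var missing []selectionElement
	for _, element := range desired {
		key := element.Namespace + "/" + element.Name + "/" + element.Interface
		if !present[key] {
			missing = append(missing, element)
		}
	}
	return missing
}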

Additional context
This feature / RFE consists of hardening the controller - i.e. it makes it more resilient.

It is not a part of the MVP.

[BUG] Failing e2e tests

Describe the bug

1. address already in use

I am not sure, but I believe the macvlan interface (the CNI type used in the e2e tests) is not fully deleted, and since the MAC address used is always the same for every e2e test, the kernel refuses to create a new macvlan interface with an already existing MAC address.

I would propose removing the hard-coded MAC address and generating a random MAC address for each test.

// https://stackoverflow.com/questions/21018729/generate-mac-address-in-go
import (
	"crypto/rand"
	"net"
)

func generateMacAddress() (net.HardwareAddr, error) {
	buf := make([]byte, 6)
	_, err := rand.Read(buf)
	if err != nil {
		return nil, err
	}

	buf[0] = (buf[0] | 2) & 0xfe // set the locally administered bit, ensure a unicast address

	return buf, nil
}
2. timed out waiting for the condition

This issue is related to Multus and requires at least 2 tests running.
The pod UID retrieved by Multus when adding a network belongs to a pod that was running in the previous test. That pod was deleted in the previous test, and a new one with exactly the same name and namespace was created by the current test.

It could probably be solved with a different way of handling the cache in Multus. Otherwise, having the "watch" permission also solves the problem (a PR exists in Multus: k8snetworkplumbingwg/multus-cni#1171).

logs:

2023-12-02T22:47:33.740071666Z stderr F 2023-12-02T22:47:33Z [verbose] ADD finished CNI request ContainerID:"8eadc68bc8957fc260db7c320af98a8694136a1f266d0736f8266fee529606ae" Netns:"/var/run/netns/cni-7b2611d8-ada2-07ff-1b08-a6b13b6198fc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=ns1;K8S_POD_NAME=tiny-winy-pod;K8S_POD_INFRA_CONTAINER_ID=8eadc68bc8957fc260db7c320af98a8694136a1f266d0736f8266fee529606ae;K8S_POD_UID=c01880d4-5c38-4cf8-b7b4-a7e8dee315e7" Path:"", result: "", err: error configuring pod [ns1/tiny-winy-pod] networking: Multus: [ns1/tiny-winy-pod/c01880d4-5c38-4cf8-b7b4-a7e8dee315e7]: expected pod UID "c01880d4-5c38-4cf8-b7b4-a7e8dee315e7" but got "f164f508-c820-4741-98d7-f1ebe1f93a1c" from Kube API
3. a provisioned pod whose network selection elements do not feature the interface name

This test creates a pod with a network for which the interface name is not included; this interface is added successfully at pod creation since Multus assigns an interface name (e.g. net1).
When a new, correct interface is added to the pod annotation, it should not be reconciled, since the first interface cannot be handled due to its missing name.

I found a few problems regarding this issue:

(3.1.) Sometimes the order in which the interfaces are handled changes, so the interface is still added. The order is random because the loop that builds the attachmentsToAdd slice iterates over a map (see the sketch after this list).
https://github.com/k8snetworkplumbingwg/multus-dynamic-networks-controller/blob/v0.3.2/pkg/controller/pod.go#L213

(3.2.) If 2 interfaces have to be added and the first one is added correctly but the second one is not, the first one will not be added to the network status, although the interface is still added inside the pod.
This happens because the attachmentResults must always be returned, even if an error is returned.
https://github.com/k8snetworkplumbingwg/multus-dynamic-networks-controller/blob/v0.3.2/pkg/controller/pod.go#L337

(3.3.) This statement may not catch anything, since there is no wait for reconciliation.
https://github.com/k8snetworkplumbingwg/multus-dynamic-networks-controller/blob/v0.3.2/e2e/e2e_test.go#L297

(3.4.) Is it a valid test knowing we updated the network annotation on the fly?
#63
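
Regarding (3.1.), a minimal sketch of one common way to make the order deterministic, assuming the attachments are kept in a map keyed by interface name (names are illustrative, not the controller's actual code):

package sketch

import "sort"

// orderedInterfaceNames returns the map keys in a stable order; Go map
// iteration order is deliberately randomized, which is what makes the
// attachmentsToAdd slice non-deterministic.
func orderedInterfaceNames(attachmentsByIface map[string]interface{}) []string {
	names := make([]string, 0, len(attachmentsByIface))
	for name := range attachmentsByIface {
		names = append(names, name)
	}
	sort.Strings(names)
	return names
}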

Expected behavior

/

To Reproduce

  1. kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/e2e/templates/cni-install.yml.j2
  2. kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick.yml
  3. kubectl apply -f manifests/dynamic-networks-controller.yaml
  4. make e2e/test

Environment:

  • multus-dynamic-networks-controller version: 21b6dee (latest)

Additional info / context
/

[BUG] updates to an existing attachment should be rejected

Describe the bug
An update to an existing attachment is currently being accepted, and treated as the following sequence of operations (in this order):

  • add new interface
  • remove old interface

Expected behavior
The controller should reject this operation and log it.

To Reproduce
Steps to reproduce the behavior:

  1. provision a net-attach-def + pod with network selection elements using it. The network selection elements should not specify the interface name.
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan1-config
spec: 
  config: '{
            "cniVersion": "0.4.0",
            "plugins": [
                {
                    "type": "macvlan",
                    "capabilities": { "ips": true },
                    "master": "eth1",
                    "mode": "bridge",
                    "ipam": {
                        "type": "static"
                    }
                }, {
                    "type": "tuning"
                } ]
        }'
---
apiVersion: v1
kind: Pod
metadata:
  name: macvlan1-worker1
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
            { "name": "macvlan1-config",
              "ips": [ "10.1.1.11/24" ] }
    ]'
  labels:
    app: macvlan
spec:
  containers:
  - name: macvlan-worker1
    image: centos:8
    command: ["/bin/sleep", "10000"]
    securityContext:
      privileged: true
  2. edit the pod's network-selection-elements, trying to define the name of the interface; i.e. turn the networks annotation into:
{
    "name": "macvlan1-config",
    "ips": [ "10.1.1.11/24" ],
    "interface": "net1"
}

Environment:

  • multus-dynamic-networks-controller version: N/A
  • Kubernetes version (use kubectl version): N/A
  • Network-attachment-definition: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Controller configuration (criSocketPath / multusSocketPath): N/A
  • Kernel (e.g. uname -a): N/A
  • Others: N/A

Additional info / context
I wonder if an event should be thrown for this vs only logging the error.

[BUG] cannot hotplug when the "old" pod does not have network-selection-elements

Describe the bug
It currently is impossible to hotplug interfaces into a pod that does not feature any network selection elements - k8s.v1.cni.cncf.io/networks.

Expected behavior
It should be possible to hotplug interfaces into a pod without network-selection-elements.

To Reproduce
Steps to reproduce the behavior:

  1. Provision the following pod:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: tiny-winy-pod
  name: tiny-winy-pod
spec:
  containers:
  - command:
    - /bin/ash
    - -c
    - 'trap : TERM INT; sleep infinity & wait'
    image: k8s.gcr.io/e2e-test-images/agnhost:2.26
    imagePullPolicy: IfNotPresent
    name: samplepod
  2. Hot-plug an interface into it - add the following annotation:
k8s.v1.cni.cncf.io/networks: '[{"name":"tenant-network","namespace":"ns1","mac":"02:03:04:05:06:07","interface":"ens58"}]'
  3. It'll fail to add the interface, and the network-status will not feature this new attachment.

[BUG] e2e tests failing on kubevirtci providers

Describe the bug
The e2e tests are deterministically failing on kubevirtci providers. This often happens on the tests that request the pod interfaces to have a statically set MAC address.

The following error can be seen in the logs:

E0426 07:55:43.773202       1 pod.go:222] error adding attachments: failed to ADD delegate: unexpected CNI response status 400: '&{ContainerID:d700ae73b86d6ffd50053bcc961ae74a6b391cef93d94051902132801c00d5e5 Netns:/var/run/netns/b45beb46-6399-4480-8b0a-66acc5f8fc13 IfName:ens58 Args:K8S_POD_NAMESPACE=ns1;K8S_POD_NAME=tiny-winy-pod;K8S_POD_UID=5217fe2f-e1a8-4907-bd13-5198a70d3720 Path: StdinData:[123 34 99 110 105 86 101 114 115 105 111 110 34 58 34 48 46 51 46 48 34 44 34 100 105 115 97 98 108 101 67 104 101 99 107 34 58 116 114 117 101 44 34 110 97 109 101 34 58 34 116 101 110 97 110 116 45 110 101 116 119 111 114 107 34 44 34 112 108 117 103 105 110 115 34 58 91 123 34 109 97 115 116 101 114 34 58 34 101 116 104 48 34 44 34 109 111 100 101 34 58 34 98 114 105 100 103 101 34 44 34 116 121 112 101 34 58 34 109 97 99 118 108 97 110 34 125 93 44 10 32 32 32 32 34 99 104 114 111 111 116 68 105 114 34 58 32 34 47 104 111 115 116 114 111 111 116 34 44 10 32 32 32 32 34 99 111 110 102 68 105 114 34 58 32 34 47 104 111 115 116 47 101 116 99 47 99 110 105 47 110 101 116 46 100 34 44 10 32 32 32 32 34 108 111 103 76 101 118 101 108 34 58 32 34 118 101 114 98 111 115 101 34 44 10 32 32 32 32 34 115 111 99 107 101 116 68 105 114 34 58 32 34 47 104 111 115 116 47 114 117 110 47 109 117 108 116 117 115 47 34 44 10 32 32 32 32 34 99 110 105 86 101 114 115 105 111 110 34 58 32 34 48 46 51 46 49 34 44 10 32 32 32 32 34 99 110 105 67 111 110 102 105 103 68 105 114 34 58 32 34 47 104 111 115 116 47 101 116 99 47 99 110 105 47 110 101 116 46 100 34 44 10 32 32 32 32 34 109 117 108 116 117 115 67 111 110 102 105 103 70 105 108 101 34 58 32 34 97 117 116 111 34 44 10 32 32 32 32 34 109 117 108 116 117 115 65 117 116 111 99 111 110 102 105 103 68 105 114 34 58 32 34 47 104 111 115 116 47 101 116 99 47 99 110 105 47 110 101 116 46 100 34 10 125 10]} {ContainerID:d700ae73b86d6ffd50053bcc961ae74a6b391cef93d94051902132801c00d5e5 Netns:/var/run/netns/b45beb46-6399-4480-8b0a-66acc5f8fc13 IfName:ens58 Args:K8S_POD_NAMESPACE=ns1;K8S_POD_NAME=tiny-winy-pod;K8S_POD_UID=5217fe2f-e1a8-4907-bd13-5198a70d3720 Path: StdinData:[123 34 99 110 105 86 101 114 115 105 111 110 34 58 34 48 46 51 46 48 34 44 34 100 105 115 97 98 108 101 67 104 101 99 107 34 58 116 114 117 101 44 34 110 97 109 101 34 58 34 116 101 110 97 110 116 45 110 101 116 119 111 114 107 34 44 34 112 108 117 103 105 110 115 34 58 91 123 34 109 97 115 116 101 114 34 58 34 101 116 104 48 34 44 34 109 111 100 101 34 58 34 98 114 105 100 103 101 34 44 34 116 121 112 101 34 58 34 109 97 99 118 108 97 110 34 125 93 44 10 32 32 32 32 34 99 104 114 111 111 116 68 105 114 34 58 32 34 47 104 111 115 116 114 111 111 116 34 44 10 32 32 32 32 34 99 111 110 102 68 105 114 34 58 32 34 47 104 111 115 116 47 101 116 99 47 99 110 105 47 110 101 116 46 100 34 44 10 32 32 32 32 34 108 111 103 76 101 118 101 108 34 58 32 34 118 101 114 98 111 115 101 34 44 10 32 32 32 32 34 115 111 99 107 101 116 68 105 114 34 58 32 34 47 104 111 115 116 47 114 117 110 47 109 117 108 116 117 115 47 34 44 10 32 32 32 32 34 99 110 105 86 101 114 115 105 111 110 34 58 32 34 48 46 51 46 49 34 44 10 32 32 32 32 34 99 110 105 67 111 110 102 105 103 68 105 114 34 58 32 34 47 104 111 115 116 47 101 116 99 47 99 110 105 47 110 101 116 46 100 34 44 10 32 32 32 32 34 109 117 108 116 117 115 67 111 110 102 105 103 70 105 108 101 34 58 32 34 97 117 116 111 34 44 10 32 32 32 32 34 109 117 108 116 117 115 65 117 116 111 99 111 110 102 105 103 68 105 114 34 58 32 34 47 104 111 115 116 47 101 116 99 47 99 110 105 47 
110 101 116 46 100 34 10 125 10]} ERRORED: error configuring pod [ns1/tiny-winy-pod] networking: plugin type="macvlan" failed (add): failed to set "ens58" UP: address already in use

Expected behavior
The tests should pass.

To Reproduce
Steps to reproduce the behavior:

  1. deploy the CNAO project; execute the following command from the project's root:
make cluster-up && make cluster-operator-push && make cluster-operator-install
  2. provision the following CR
  3. run the dynamic-networks-controller e2e tests: make e2e/test

Environment:

  • multus-dynamic-networks-controller version: N/A
  • Kubernetes version (use kubectl version): N/A
  • Network-attachment-definition: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Controller configuration (criSocketPath / multusSocketPath): N/A
  • Kernel (e.g. uname -a): N/A
  • Others: N/A

Additional info / context
Add any other information / context about the problem here.

[RFE] map the container image to the actual git commit

Is your feature request related to a problem? Please describe.
In order to properly debug user issues, we need an easy way to map a container image to the actual code running in it.

Describe the solution you'd like
Embedding the git commit SHA into the image's metadata would be the cheapest way to achieve this.

Describe alternatives you've considered
Using LD flags and exposing a --version command would be an alternative, but it has the following drawbacks:

  • requires code changes
  • requires pulling the image
  • requires running the image

[BUG] e2e test "manages to add a new interface to a running pod once the desired state features the interface names"

Describe the bug
This test flakes.

Possibly the timeout should be a bit less strict, but the root cause should still be found and fixed.

Expected behavior
The test to consistently pass.

To Reproduce
Steps to reproduce the behavior:

  1. Run the test

Additional info / context
Failed builds:

[BUG] Can't hot-unplug a pod interface that was not hot-plugged and is not the primary pod interface (eth0)

Describe the bug
Unable to hot-unplug a pod interface that was not hot-plugged and is not the primary pod interface (eth0), because the add request body contains the pod container ID, while the kubelet uses the sandbox ID.

Expected behavior
Honestly, I think being unable to hot-unplug a pod interface that was not hot-plugged is acceptable. Still, I am curious why multus-dynamic-networks-controller sends an add command different from the kubelet's (using the container ID instead of the sandbox ID?).

To Reproduce
Steps to reproduce the behavior:

  1. create a pod with two interfaces
  2. hot-unplug the second interface

Environment:

  • multus-dynamic-networks-controller version: N/A
  • Kubernetes version (use kubectl version): N/A
  • Network-attachment-definition: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Controller configuration (criSocketPath / multusSocketPath): N/A
  • Kernel (e.g. uname -a): N/A
  • Others: N/A

Additional info / context
Add any other information / context about the problem here.

[RFE] SCC support

Is your feature request related to a problem? Please describe.
When running the project on OpenShift, the deployment fails due to insufficient rights. That is caused by the project mounting the host file system without the required SCC.

Describe the solution you'd like
I would like the project to provide a definition of the SCC manifest required on OpenShift. This manifest can live in its own file, so people who don't need it don't have to bother with it. An example of that is macvtap-cni: https://github.com/kubevirt/macvtap-cni/blob/main/templates/scc.yaml.in

Describe alternatives you've considered
The SCC can also be defined by the consumer. The only issue is that the SCC can then easily get out of sync: the listed SCCs depend on the attributes set in the component's DaemonSet spec. This approach was taken with OVS CNI in CNAO: https://github.com/kubevirt/cluster-network-addons-operator/blob/main/hack/components/bump-ovs-cni.sh#L73

[BUG] Packages page provides images for an unknown arch

Describe the bug
I cannot understand the second docker pull command shown on the packages page (screenshot omitted). The command also fails when pulling the docker image (screenshot omitted).

Expected behavior
Maybe the unknown/unknown could be linux/arm or something, so the image can be pulled successfully.

To Reproduce
Steps to reproduce the behavior:

Environment:

  • multus-dynamic-networks-controller version: N/A
  • Kubernetes version (use kubectl version): N/A
  • Network-attachment-definition: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Controller configuration (criSocketPath / multusSocketPath): N/A
  • Kernel (e.g. uname -a): N/A
  • Others: N/A

Additional info / context
Add any other information / context about the problem here.

Add `containerd` support

Mount the containerd socket into the controller pods, and read the netns of the mutated pod via the containerd API.

[BUG] ignore host networked pods

Describe the bug
We are handling updates to pods running on the host network. There is nothing we can do for host networked pods.

Expected behavior
Updates to host networked pods should be ignored and logged by the controller.

To Reproduce
Steps to reproduce the behavior:

  1. Provision a host networked pod
  2. Attach a network interface to it

Environment:

  • multus-dynamic-networks-controller version: N/A
  • Kubernetes version (use kubectl version): N/A
  • Network-attachment-definition: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Controller configuration (criSocketPath / multusSocketPath): N/A
  • Kernel (e.g. uname -a): N/A
  • Others: N/A

Additional info / context
Add any other information / context about the problem here.

[BUG] the net-attach-def used for hotplug should not require the `name` attribute

Describe the bug
According to the multi-net spec - section 3.4.2 - the name attribute inside the net-attach-def's .Spec.Config could be inferred from the object's metadata.

It is currently a required attribute.

Expected behavior
Infer the spec.config.name attribute from the metadata.name.
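
A minimal sketch of that defaulting, shown as an illustration rather than the controller's actual code:

package sketch

import "encoding/json"

// ensureNetworkName injects the net-attach-def's metadata.name into the CNI
// config when spec.config does not carry a "name" of its own.
func ensureNetworkName(specConfig, metadataName string) (string, error) {
	conf := map[string]interface{}{}
	if err := json.Unmarshal([]byte(specConfig), &conf); err != nil {
		return "", err
	}
	if name, ok := conf["name"].(string); !ok || name == "" {
		conf["name"] = metadataName
	}
	patched, err := json.Marshal(conf)
	if err != nil {
		return "", err
	}
	return string(patched), nil
}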

To Reproduce
Steps to reproduce the behavior:

  1. provision a net-attach-def without a name attribute in its spec.config
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan1-config
spec:
  config: '{
            "cniVersion": "0.4.0",
            "plugins": [
                {
                    "type": "macvlan",
                    "master": "eth1",
                    "mode": "bridge",
                }
            ]
        }'
  2. attempt to hotplug an interface into a running pod which uses the aforementioned net-attach-def
  3. it will fail with the following error:
ERRORED: error configuring pod [default/virt-launcher-vmi-a-cccsh] networking: missing network name:

Environment:

  • multus-dynamic-networks-controller version: N/A
  • Kubernetes version (use kubectl version): N/A
  • Network-attachment-definition: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Controller configuration (criSocketPath / multusSocketPath): N/A
  • Kernel (e.g. uname -a): N/A
  • Others: N/A

Additional info / context
Add any other information / context about the problem here.

[BUG] should the default container namespace "k8s.io" be changed to "moby"?

Describe the bug
Failed to attach an interface.

Here are the error logs:

I1108 07:47:08.265374       1 pod.go:202] pod [default/virt-launcher-case1-singlexbjrg-ltftr] updated
I1108 07:47:08.265419       1 network-selection-elements.go:34] parsePodNetworkAnnotation: [{"interface":"net1","name":"vnop2","namespace":"default"}], default
I1108 07:47:08.265474       1 pod.go:217] 1 attachments to add to pod default/virt-launcher-case1-singlexbjrg-ltftr
E1108 07:47:08.267290       1 pod.go:221] failed to figure out the pod's network namespace: failed to get netns for container [903d061e03dee7ef975706352eed3111a1eccdb9185f056196b014f89b79148b]: container "903d061e03dee7ef975706352eed3111a1eccdb9185f056196b014f89b79148b" in namespace "k8s.io": not found

Such a container ID seems to only exist in the "moby" namespace when Docker is the container runtime.
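
A minimal sketch of the namespace-dependent lookup, assuming the containerd Go client (the helper is illustrative; the controller's actual code may differ):

package sketch

import (
	"context"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// containerExists looks the container up in the given containerd namespace,
// e.g. "k8s.io" for the containerd CRI plugin or "moby" when Docker manages
// the containers.
func containerExists(ctx context.Context, socketPath, containerID, criNamespace string) (bool, error) {
	client, err := containerd.New(socketPath)
	if err != nil {
		return false, err
	}
	defer client.Close()
	ctx = namespaces.WithNamespace(ctx, criNamespace)
	if _, err := client.LoadContainer(ctx, containerID); err != nil {
		return false, err
	}
	return true, nil
}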

Expected behavior
The pod should get the second interface added dynamically.

To Reproduce
Steps to reproduce the behavior:

  1. create a VMI
  2. kubectl edit the VMI's pod, add k8s.v1.cni.cncf.io/networks: '[{"interface":"net1","name":"flannel","namespace":"default"}]'
  3. kubectl exec into the pod: no interface was added

Environment:

  • multus-dynamic-networks-controller version: N/A
  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:42:41Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
  • Network-attachment-definition: N/A
  • OS (e.g. from /etc/os-release): CentOS Linux release 7.9.2009
  • Controller configuration (criSocketPath / multusSocketPath): /run/multus/multus.sock
  • Kernel (e.g. uname -a): Linux stack1.cpp.zll.qianxin-inc.cn 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • Others: N/A

Additional info / context

docker version

[root@stack1 ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:45:41 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.12
  Git commit:       459d0df
  Built:            Mon Dec 13 11:44:05 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.13
  GitCommit:        9cc61520f4cd876b86e77edfeb88fbcd536d1f9d
 runc:
  Version:          1.0.3
  GitCommit:        v1.0.3-0-gf46b6ba
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Set up e2e testing framework for `cri-o` runtime

  • deploy a K8S cluster using a crio container runtime
  • deploy multus on the cluster
    • deploying via CNAO is not an option - at least until CNAO tracks the thick plugin option. This is tracked in this PR.
  • add multus daemon socket to the controller
  • successfully provision the manifests on the examples directory

NOTES: kubevirtci runs the nodes as VMs within containers. This means we'd need to set up the GitHub actions to have (at least) qemu. Not sure whether emulating the kvm device would give enough performance to run the tests or not. More work is required...

[BUG] e2e kind cluster deployment fails

Describe the bug
The e2e kind cluster fails to deploy.

The e2e tests are broken since multus merged k8snetworkplumbingwg/multus-cni#1054.

Expected behavior
The e2e kind cluster should deploy successfully.

To Reproduce
Steps to reproduce the behavior:

  1. Run hack/e2e-kind-cluster-setup.sh

Environment:

  • multus-dynamic-networks-controller version: N/A
  • Kubernetes version (use kubectl version): N/A
  • Network-attachment-definition: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Controller configuration (criSocketPath / multusSocketPath): N/A
  • Kernel (e.g. uname -a): N/A
  • Others: N/A

Additional info / context
Add any other information / context about the problem here.

[RFE] assure the multus-cni server is reachable

Is your feature request related to a problem? Please describe.
Whenever the multus-cni pod restarts it mounts a new multus socket; a running controller will be unaware of this, and will not be able to use the multus server API as a result.

Describe the solution you'd like
Adding a liveness probe to check the liveness of the multus server pod.
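
A minimal sketch of such a check, simply dialling the multus unix socket from the controller's configured multusSocketPath (names are illustrative, not a committed design):

package sketch

import (
	"net"
	"time"
)

// multusSocketReachable reports whether the multus server socket accepts
// connections; a probe wired to this would fail after multus-cni restarts
// and re-creates its socket.
func multusSocketReachable(socketPath string) bool {
	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}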

Describe alternatives you've considered
None.

Additional context
None.

[BUG] container images are built without the commit SHA

Describe the bug
When I check the commit SHA of a container image provided by this repo's packages, I don't see the commit SHA from which the container image was built.

The NONE shown below is the default value specified in the container image.

Expected behavior
The command below should output the commit ID from which the container image was built.

To Reproduce
Steps to reproduce the behavior:

skopeo inspect docker://ghcr.io/k8snetworkplumbingwg/multus-dynamic-networks-controller:latest-amd64 -f '{{ index .Labels "multi.GIT_SHA" }}'
NONE

Environment:

  • multus-dynamic-networks-controller version: N/A
  • Kubernetes version (use kubectl version): N/A
  • Network-attachment-definition: N/A
  • OS (e.g. from /etc/os-release): N/A
  • Controller configuration (criSocketPath / multusSocketPath): N/A
  • Kernel (e.g. uname -a): N/A
  • Others: N/A

Additional info / context
Add any other information / context about the problem here.

CNI interaction

  • mount the multus daemon socket into the dynamic networks controller pod
  • for networks to add / remove:
    • trigger delegate ADD / REMOVE from the multus server, over the multus unix socket (see the sketch after this list)
    • update the dynamic annotations on the pod (still to be decided)
  • unit tests
  • e2e tests (simple scenario)
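
A minimal sketch of reaching the multus server over its unix socket with a standard http.Client; the endpoint path in the usage comment is a hypothetical placeholder, not the real multus API:

package sketch

import (
	"context"
	"net"
	"net/http"
)

// multusClient returns an http.Client whose transport always dials the multus
// unix socket; the host part of request URLs is then ignored.
func multusClient(socketPath string) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var dialer net.Dialer
				return dialer.DialContext(ctx, "unix", socketPath)
			},
		},
	}
}

// Example use (hypothetical endpoint):
//   resp, err := multusClient("/run/multus/multus.sock").Post("http://multus/delegate", "application/json", body)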

e2e test MVP

Add e2e tests that show the MVP functionality:

  • hotplug an interface to a running pod
  • remove an interface from a running pod

These tests should be golang based.

[RFE] Drop the `privileged` security context from the controller pods

Is your feature request related to a problem? Please describe.
The controller pods are currently scheduled using the privileged security context.

I don't think there is any need for it, since the controller just needs to invoke the multus delegate endpoint,
and translate the result back to the user.

Describe the solution you'd like
Drop the privileged security context from the controller pods
