
macvtap-cni's Introduction

macvtap CNI

This plugin allows users to define Kubernetes networks on top of existing host interfaces. With the macvtap plugin, a pod can be connected directly to a host interface and consume it through a tap device.

The main use case is virtualization workloads inside the pod driven by KubeVirt, but the plugin can also be used directly with QEMU/libvirt and may be suitable in combination with other virtualization backends.

macvtap CNI includes a device plugin that exposes the macvtap interfaces to pods. A meta-plugin such as Multus obtains the name of the interface allocated by the device plugin and is responsible for invoking the CNI plugin with that name as the deviceID.

Deployment

The device plugin is configured through the DP_MACVTAP_CONF environment variable. Its value is a JSON array in which each element describes a separate resource to be made available:

  • name (string, required): the name of the resource
  • lowerDevice (string, required): the name of the macvtap lower link
  • mode (string, optional, default=bridge): the macvtap operating mode
  • capacity (uint, optional, default=100): the capacity of the resource

In the default deployment, this configuration is provided through a ConfigMap, for example:

kind: ConfigMap
apiVersion: v1
metadata:
  name: macvtap-deviceplugin-config
data:
  DP_MACVTAP_CONF: |
    [ {
        "name" : "dataplane",
        "lowerDevice" : "eth0",
        "mode": "bridge",
        "capacity" : 50
    } ]
$ kubectl apply -f https://raw.githubusercontent.com/kubevirt/macvtap-cni/main/examples/macvtap-deviceplugin-config.yaml
configmap "macvtap-deviceplugin-config" created

This configuration will result in up to 50 macvtap interfaces being offered for consumption, using eth0 as the lower device, in bridge mode, and under resource name macvtap.network.kubevirt.io/dataplane.
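
To check that the resource is actually advertised by a node, you can inspect the node description; <node-name> is a placeholder:

$ kubectl describe node <node-name> | grep macvtap.network.kubevirt.io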

A configuration consisting of an empty JSON array, as proposed in the default example, causes the device plugin to expose one resource for every physical link or bond on each node. For example, if a node has a physical link called eth0, a resource named macvtap.network.kubevirt.io/eth0 would be made available to use macvtap interfaces with eth0 as the lower device.
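
Such a configuration is simply an empty array in the config map (this is what the default example ships):

kind: ConfigMap
apiVersion: v1
metadata:
  name: macvtap-deviceplugin-config
data:
  DP_MACVTAP_CONF: '[]'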

The macvtap CNI can be deployed using the proposed daemon set:

$ kubectl apply -f https://raw.githubusercontent.com/kubevirt/macvtap-cni/main/manifests/macvtap.yaml
daemonset "macvtap-cni" created

$ kubectl get pods
NAME                                 READY     STATUS    RESTARTS   AGE
macvtap-cni-745x4                    1/1       Running   0          5m

This results in the CNI plugin being installed and the device plugin running on all nodes.
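
As an additional sanity check, you can verify on a node that the plugin binary landed in the CNI binary directory; /opt/cni/bin is the conventional path and an assumption here:

$ ls /opt/cni/bin/macvtap
/opt/cni/bin/macvtap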

There is also a template available to parameterize the deployment with different configuration options.

Usage

macvtap CNI is best used with Multus by defining a NetworkAttachmentDefinition:

kind: NetworkAttachmentDefinition
apiVersion: k8s.cni.cncf.io/v1
metadata:
  name: dataplane
  annotations:
    k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/dataplane
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "dataplane",
      "type": "macvtap",
      "mtu": 1500
    }'

The CNI config json allows the following parameters:

  • name (string, required): the name of the network. Optional when used within a NetworkAttachmentDefinition, as Multus provides the name in that case.
  • type (string, required): "macvtap".
  • mac (string, optional): MAC address to assign to the macvtap interface.
  • mtu (integer, optional): MTU to set on the macvtap interface.
  • deviceID (string, required): name of an existing macvtap host interface, which will be moved to the correct network namespace and configured. Optional when used within a NetworkAttachmentDefinition, as Multus provides the deviceID in that case.
  • promiscMode (bool, optional): enable promiscuous mode on the macvtap interface in the pod. Defaults to false.
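
For illustration (the NAD name below is arbitrary), a NetworkAttachmentDefinition that exercises the optional parameters might look like this:

kind: NetworkAttachmentDefinition
apiVersion: k8s.cni.cncf.io/v1
metadata:
  name: dataplane-fixed-mac
  annotations:
    k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/dataplane
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "dataplane-fixed-mac",
      "type": "macvtap",
      "mac": "02:23:45:67:89:01",
      "mtu": 1500,
      "promiscMode": true
    }'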

A pod can then be attached to that network, which results in the pod having the corresponding macvtap interface:

apiVersion: v1
kind: Pod
metadata:
  name: pod
  annotations:
    k8s.v1.cni.cncf.io/networks: dataplane
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sleep", "1800"]
    resources:
      limits:
        macvtap.network.kubevirt.io/dataplane: 1 

A MAC address can also be assigned to the interface through the network annotation:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-mac
  annotations:
    k8s.v1.cni.cncf.io/networks: |
      [
        {
          "name":"dataplane",
          "mac": "02:23:45:67:89:01"
        }
      ]
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sleep", "1800"]
    resources:
      limits:
        macvtap.network.kubevirt.io/dataplane: 1 

Note: The resource limit can be omitted from the pod definition if network-resources-injector is deployed in the cluster.
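
With the injector deployed, a minimal sketch of the same pod without the explicit limit (the name pod-injected is arbitrary; the injector is assumed to patch in the resource request based on the network annotation):

apiVersion: v1
kind: Pod
metadata:
  name: pod-injected
  annotations:
    k8s.v1.cni.cncf.io/networks: dataplane
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sleep", "1800"]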

The device plugin can also be used by itself, in case you only need the tap device in the pod and not the interface:

apiVersion: v1
kind: Pod
metadata:
  name: macvtap-consumer
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sleep", "123"]
    resources:
      limits:
        macvtap.network.kubevirt.io/dataplane: 1 
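
Inside such a pod, the allocated tap device should appear under /dev; a quick way to check (the exact tap<N> name depends on the allocated interface index):

$ kubectl exec macvtap-consumer -- ls /dev | grep tap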

macvtap-cni's People

Contributors

davidcarrera, dependabot[bot], jcaamano, kubevirt-bot, maiqueb, oshoval, phoracek, ramlavi, rhrazdil, xieyanker, zhuchenwang

macvtap-cni's Issues

macvtap-cni's pod in CrashLoopBackOff when using k8s v1.25

What happened:
macvtap-cni's pod is in CrashLoopBackOff after deploying
calico and then CNAO in a k8s 1.25 cluster.

What you expected to happen:
To see all CNAO pods in Running status.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:
Got the following message after running:
kubectl logs macvtap-cni-jbrkl -ncluster-network-addons

Defaulted container "macvtap-cni" out of: macvtap-cni, install-cni (init)
I0921 08:04:13.894732 1095820 manager.go:42] Starting device plugin manager
I0921 08:04:13.894819 1095820 manager.go:46] Registering for system signal notifications
I0921 08:04:13.895728 1095820 manager.go:52] Registering for notifications of filesystem changes in device plugin directory
I0921 08:04:13.895987 1095820 manager.go:60] Starting Discovery on new plugins
I0921 08:04:13.896019 1095820 manager.go:66] Handling incoming signals
I0921 08:04:13.896089 1095820 lister.go:67] Read configuration map[]
I0921 08:04:13.897369 1095820 manager.go:71] Received new list of plugins: [eno1 eno2 eno3 eno4]
I0921 08:04:13.897470 1095820 manager.go:110] Adding a new plugin "eno4"
I0921 08:04:13.897485 1095820 lister.go:167] Creating device plugin with config {Name:eno4 LowerDevice:eno4 Mode:bridge Capacity:100}
I0921 08:04:13.897559 1095820 plugin.go:64] eno4: Starting plugin server
I0921 08:04:13.897568 1095820 plugin.go:95] eno4: Starting the DPI gRPC server
I0921 08:04:13.897752 1095820 manager.go:110] Adding a new plugin "eno2"
I0921 08:04:13.897785 1095820 lister.go:167] Creating device plugin with config {Name:eno2 LowerDevice:eno2 Mode:bridge Capacity:100}
I0921 08:04:13.897815 1095820 plugin.go:64] eno2: Starting plugin server
I0921 08:04:13.897825 1095820 plugin.go:95] eno2: Starting the DPI gRPC server
I0921 08:04:13.897907 1095820 manager.go:110] Adding a new plugin "eno1"
I0921 08:04:13.897947 1095820 lister.go:167] Creating device plugin with config {Name:eno1 LowerDevice:eno1 Mode:bridge Capacity:100}
I0921 08:04:13.897947 1095820 manager.go:110] Adding a new plugin "eno3"
I0921 08:04:13.897982 1095820 plugin.go:64] eno1: Starting plugin server
I0921 08:04:13.898002 1095820 plugin.go:95] eno1: Starting the DPI gRPC server
I0921 08:04:13.898003 1095820 lister.go:167] Creating device plugin with config {Name:eno3 LowerDevice:eno3 Mode:bridge Capacity:100}
I0921 08:04:13.898037 1095820 plugin.go:64] eno3: Starting plugin server
I0921 08:04:13.898045 1095820 plugin.go:95] eno3: Starting the DPI gRPC server
I0921 08:04:13.898405 1095820 plugin.go:113] eno3: Serving requests...
I0921 08:04:13.898405 1095820 plugin.go:113] eno4: Serving requests...
I0921 08:04:13.898494 1095820 plugin.go:113] eno1: Serving requests...
I0921 08:04:13.898569 1095820 plugin.go:113] eno2: Serving requests...
I0921 08:04:23.900005 1095820 plugin.go:129] eno4: Registering the DPI with Kubelet
I0921 08:04:23.900089 1095820 plugin.go:129] eno3: Registering the DPI with Kubelet
I0921 08:04:23.900175 1095820 plugin.go:129] eno1: Registering the DPI with Kubelet
I0921 08:04:23.900282 1095820 plugin.go:129] eno2: Registering the DPI with Kubelet
I0921 08:04:23.900412 1095820 plugin.go:141] eno1: Registration for endpoint macvtap.network.kubevirt.io_eno1
I0921 08:04:23.900427 1095820 plugin.go:141] eno3: Registration for endpoint macvtap.network.kubevirt.io_eno3
I0921 08:04:23.900436 1095820 plugin.go:141] eno4: Registration for endpoint macvtap.network.kubevirt.io_eno4
I0921 08:04:23.900460 1095820 plugin.go:141] eno2: Registration for endpoint macvtap.network.kubevirt.io_eno2
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x1 pc=0x7fd2ab]

goroutine 145 [running]:
k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1.(*DevicePluginOptions).MarshalToSizedBuffer(...)
	/go/src/github.com/kubevirt/macvtap-cni/vendor/k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1/api.pb.go:1546
k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1.(*DevicePluginOptions).Marshal(0x0)
	/go/src/github.com/kubevirt/macvtap-cni/vendor/k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1/api.pb.go:1529 +0x6b
google.golang.org/protobuf/internal/impl.legacyMarshal({{}, {0x9e4858, 0xc00031c300}, {0x0, 0x0, 0x0}, 0x0})
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/protobuf/internal/impl/legacy_message.go:404 +0xa2
google.golang.org/protobuf/proto.MarshalOptions.marshal({{}, 0x80, 0x0, 0x0}, {0x0, 0x90ae20, 0x0}, {0x9e4858, 0xc00031c300})
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/protobuf/proto/encode.go:163 +0x27b
google.golang.org/protobuf/proto.MarshalOptions.MarshalAppend({{}, 0x20, 0xae, 0x90}, {0x0, 0x0, 0x0}, {0x9cba80, 0xc00031c300})
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/protobuf/proto/encode.go:122 +0x79
github.com/golang/protobuf/proto.marshalAppend({0x0, 0x0, 0x0}, {0x7f93a1bce928, 0x0}, 0x58)
	/go/src/github.com/kubevirt/macvtap-cni/vendor/github.com/golang/protobuf/proto/wire.go:40 +0xa5
github.com/golang/protobuf/proto.Marshal(...)
	/go/src/github.com/kubevirt/macvtap-cni/vendor/github.com/golang/protobuf/proto/wire.go:23
google.golang.org/grpc/encoding/proto.codec.Marshal({}, {0x90ae20, 0x0})
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/encoding/proto/proto.go:45 +0x4e
google.golang.org/grpc.encode({0x7f93a1c99068, 0xd7c718}, {0x90ae20, 0x0})
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/rpc_util.go:594 +0x44
google.golang.org/grpc.(*Server).sendResponse(0xc000140000, {0x9e0e60, 0xc00024c000}, 0xc000254000, {0x90ae20, 0x0}, {0x0, 0x0}, 0x0, {0x0, ...})
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:1082 +0x18e
google.golang.org/grpc.(*Server).processUnaryRPC(0xc000140000, {0x9e0e60, 0xc00024c000}, 0xc000254000, 0xc00013c120, 0xd3f860, 0x0)
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:1332 +0xd93
google.golang.org/grpc.(*Server).handleStream(0xc000140000, {0x9e0e60, 0xc00024c000}, 0xc000254000, 0x0)
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:1626 +0xa2a
google.golang.org/grpc.(*Server).serveStreams.func1.2()
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:941 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:939 +0x294
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x1 pc=0x7fd2ab]

goroutine 135 [running]:
k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1.(*DevicePluginOptions).MarshalToSizedBuffer(...)
	/go/src/github.com/kubevirt/macvtap-cni/vendor/k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1/api.pb.go:1546
k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1.(*DevicePluginOptions).Marshal(0x0)
	/go/src/github.com/kubevirt/macvtap-cni/vendor/k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1/api.pb.go:1529 +0x6b
google.golang.org/protobuf/internal/impl.legacyMarshal({{}, {0x9e4858, 0xc00031c300}, {0x0, 0x0, 0x0}, 0x0})
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/protobuf/internal/impl/legacy_message.go:404 +0xa2
google.golang.org/protobuf/proto.MarshalOptions.marshal({{}, 0x80, 0x0, 0x0}, {0x0, 0x90ae20, 0x0}, {0x9e4858, 0xc00031c300})
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/protobuf/proto/encode.go:163 +0x27b
google.golang.org/protobuf/proto.MarshalOptions.MarshalAppend({{}, 0x20, 0xae, 0x90}, {0x0, 0x0, 0x0}, {0x9cba80, 0xc00031c300})
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/protobuf/proto/encode.go:122 +0x79
github.com/golang/protobuf/proto.marshalAppend({0x0, 0x0, 0x0}, {0x7f93a1bce928, 0x0}, 0x58)
	/go/src/github.com/kubevirt/macvtap-cni/vendor/github.com/golang/protobuf/proto/wire.go:40 +0xa5
github.com/golang/protobuf/proto.Marshal(...)
	/go/src/github.com/kubevirt/macvtap-cni/vendor/github.com/golang/protobuf/proto/wire.go:23
google.golang.org/grpc/encoding/proto.codec.Marshal({}, {0x90ae20, 0x0})
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/encoding/proto/proto.go:45 +0x4e
google.golang.org/grpc.encode({0x7f93a1c99068, 0xd7c718}, {0x90ae20, 0x0})
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/rpc_util.go:594 +0x44
google.golang.org/grpc.(*Server).sendResponse(0xc000346000, {0x9e0e60, 0xc000800180}, 0xc0002fa000, {0x90ae20, 0x0}, {0x0, 0x0}, 0x0, {0x0, ...})
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:1082 +0x18e
google.golang.org/grpc.(*Server).processUnaryRPC(0xc000346000, {0x9e0e60, 0xc000800180}, 0xc0002fa000, 0xc000342120, 0xd3f860, 0x0)
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:1332 +0xd93
google.golang.org/grpc.(*Server).handleStream(0xc000346000, {0x9e0e60, 0xc000800180}, 0xc0002fa000, 0x0)
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:1626 +0xa2a
google.golang.org/grpc.(*Server).serveStreams.func1.2()
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:941 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
	/go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:939 +0x294

cluster-up fails due to docker.io rate limits

What happened:
Tried to cluster-up locally; it fails due to docker.io rate limits.

What you expected to happen:
Should pass

How to reproduce it (as minimally and precisely as possible):
Just cluster-up when you hit the limit of docker.io

Anything else we need to know?:
Tried to bump kubevirtci, which did fix this issue,
but then cluster-sync was always broken; didn't investigate deeper.
Just change the KUBEVIRTCI_TAG and see this behavior.

macvtap-cni pods panicking - signal SIGSEGV: segmentation violation code=0x1 addr=0x1 pc=0x88945c

What happened:
Daemonset pods on all nodes crashing with error signal SIGSEGV: segmentation violation code=0x1 addr=0x1 pc=0x88945c

What you expected to happen:
Pods to run without crashing.

How to reproduce it (as minimally and precisely as possible):
Deploy macvtap-cni to cluster via cluster-network-addons operator.

Anything else we need to know?:
bare metal talos cluster on intel nucs
k8s: 1.27.4
linux: 6.1.61-talos
kernel module macvtap loaded

pod logs

I1226 06:05:23.905858   21412 manager.go:42] Starting device plugin manager
I1226 06:05:23.905904   21412 manager.go:46] Registering for system signal notifications
I1226 06:05:23.906078   21412 manager.go:52] Registering for notifications of filesystem changes in device plugin directory
I1226 06:05:23.906128   21412 manager.go:60] Starting Discovery on new plugins
I1226 06:05:23.906146   21412 manager.go:66] Handling incoming signals
I1226 06:05:23.906180   21412 lister.go:67] Read configuration map[]
I1226 06:05:23.908043   21412 manager.go:71] Received new list of plugins: [bond0 teql0 eth0 eth1 thunderbolt0 thunderbolt1]
I1226 06:05:23.908108   21412 manager.go:110] Adding a new plugin "thunderbolt1"
I1226 06:05:23.908115   21412 lister.go:167] Creating device plugin with config {Name:thunderbolt1 LowerDevice:thunderbolt1 Mode:bridge Capacity:100}
I1226 06:05:23.908164   21412 plugin.go:64] thunderbolt1: Starting plugin server
I1226 06:05:23.908169   21412 plugin.go:95] thunderbolt1: Starting the DPI gRPC server
I1226 06:05:23.908213   21412 manager.go:110] Adding a new plugin "eth0"
I1226 06:05:23.908231   21412 lister.go:167] Creating device plugin with config {Name:eth0 LowerDevice:eth0 Mode:bridge Capacity:100}
I1226 06:05:23.908280   21412 plugin.go:64] eth0: Starting plugin server
I1226 06:05:23.908288   21412 plugin.go:95] eth0: Starting the DPI gRPC server
I1226 06:05:23.908238   21412 manager.go:110] Adding a new plugin "bond0"
I1226 06:05:23.908341   21412 manager.go:110] Adding a new plugin "eth1"
I1226 06:05:23.908368   21412 lister.go:167] Creating device plugin with config {Name:bond0 LowerDevice:bond0 Mode:bridge Capacity:100}
I1226 06:05:23.908391   21412 plugin.go:64] bond0: Starting plugin server
I1226 06:05:23.908398   21412 plugin.go:95] bond0: Starting the DPI gRPC server
I1226 06:05:23.908390   21412 lister.go:167] Creating device plugin with config {Name:eth1 LowerDevice:eth1 Mode:bridge Capacity:100}
I1226 06:05:23.908401   21412 manager.go:110] Adding a new plugin "teql0"
I1226 06:05:23.908441   21412 plugin.go:64] eth1: Starting plugin server
I1226 06:05:23.908436   21412 lister.go:167] Creating device plugin with config {Name:teql0 LowerDevice:teql0 Mode:bridge Capacity:100}
I1226 06:05:23.908451   21412 plugin.go:95] eth1: Starting the DPI gRPC server
I1226 06:05:23.908480   21412 plugin.go:64] teql0: Starting plugin server
I1226 06:05:23.908491   21412 plugin.go:95] teql0: Starting the DPI gRPC server
I1226 06:05:23.908492   21412 plugin.go:113] thunderbolt1: Serving requests...
I1226 06:05:23.908489   21412 manager.go:110] Adding a new plugin "thunderbolt0"
I1226 06:05:23.908537   21412 plugin.go:113] bond0: Serving requests...
I1226 06:05:23.908526   21412 lister.go:167] Creating device plugin with config {Name:thunderbolt0 LowerDevice:thunderbolt0 Mode:bridge Capacity:100}
I1226 06:05:23.908584   21412 plugin.go:64] thunderbolt0: Starting plugin server
I1226 06:05:23.908606   21412 plugin.go:95] thunderbolt0: Starting the DPI gRPC server
I1226 06:05:23.908637   21412 plugin.go:113] eth1: Serving requests...
I1226 06:05:23.908644   21412 plugin.go:113] eth0: Serving requests...
I1226 06:05:23.908683   21412 plugin.go:113] teql0: Serving requests...
I1226 06:05:23.908752   21412 plugin.go:113] thunderbolt0: Serving requests...
I1226 06:05:33.910356   21412 plugin.go:129] eth0: Registering the DPI with Kubelet
I1226 06:05:33.910376   21412 plugin.go:129] thunderbolt1: Registering the DPI with Kubelet
I1226 06:05:33.910448   21412 plugin.go:129] bond0: Registering the DPI with Kubelet
I1226 06:05:33.910504   21412 plugin.go:129] eth1: Registering the DPI with Kubelet
I1226 06:05:33.910567   21412 plugin.go:129] thunderbolt0: Registering the DPI with Kubelet
I1226 06:05:33.910703   21412 plugin.go:129] teql0: Registering the DPI with Kubelet
I1226 06:05:33.910892   21412 plugin.go:141] eth0: Registration for endpoint macvtap.network.kubevirt.io_eth0
I1226 06:05:33.910969   21412 plugin.go:141] thunderbolt1: Registration for endpoint macvtap.network.kubevirt.io_thunderbolt1
I1226 06:05:33.911010   21412 plugin.go:141] bond0: Registration for endpoint macvtap.network.kubevirt.io_bond0
I1226 06:05:33.911012   21412 plugin.go:141] thunderbolt0: Registration for endpoint macvtap.network.kubevirt.io_thunderbolt0
I1226 06:05:33.911159   21412 plugin.go:141] teql0: Registration for endpoint macvtap.network.kubevirt.io_teql0
I1226 06:05:33.911228   21412 plugin.go:141] eth1: Registration for endpoint macvtap.network.kubevirt.io_eth1
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x1 pc=0x88945c]

goroutine 105 [running]:
k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1.(*DevicePluginOptions).MarshalToSizedBuffer(...)
    /go/src/github.com/kubevirt/macvtap-cni/vendor/k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1/api.pb.go:1545
k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1.(*DevicePluginOptions).Marshal(0x0, 0x9b7860, 0x0, 0x7f28929f9b20, 0x0, 0x9ae501)
    /go/src/github.com/kubevirt/macvtap-cni/vendor/k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1/api.pb.go:1528 +0x7c
google.golang.org/grpc/encoding/proto.codec.Marshal(0x9b7860, 0x0, 0x967660, 0x7f28929f6918, 0x40a49f, 0xc000010000, 0x94f140)
    /go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/encoding/proto/proto.go:70 +0x199
google.golang.org/grpc.encode(0x7f28929f6918, 0xdb5858, 0x9b7860, 0x0, 0xdb5858, 0xc000010000, 0x963240, 0x98b800, 0x7f2892bc57d0)
    /go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/rpc_util.go:538 +0x52
google.golang.org/grpc.(*Server).sendResponse(0xc000532000, 0xa8a018, 0xc0002d8780, 0xc0005ec000, 0x9b7860, 0x0, 0x0, 0x0, 0xc00012011c, 0x0, ...)
    /go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:983 +0x91
google.golang.org/grpc.(*Server).processUnaryRPC(0xc000532000, 0xa8a018, 0xc0002d8780, 0xc0005ec000, 0xc00052c120, 0xd7a620, 0x0, 0x0, 0x0)
    /go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:1229 +0x625
google.golang.org/grpc.(*Server).handleStream(0xc000532000, 0xa8a018, 0xc0002d8780, 0xc0005ec000, 0x0)
    /go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:1517 +0xd0c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000026060, 0xc000532000, 0xa8a018, 0xc0002d8780, 0xc0005ec000)
    /go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:859 +0xab
created by google.golang.org/grpc.(*Server).serveStreams.func1
    /go/src/github.com/kubevirt/macvtap-cni/vendor/google.golang.org/grpc/server.go:857 +0x1fd
Stream closed EOF for cluster-network-addons/macvtap-cni-q2xl8 (macvtap-cni)

`name` property of `DP_MACVTAP_CONF` can't exceed 10 characters

What happened:
The name property of DP_MACVTAP_CONF appears to have a limit of 10 characters. I'm not sure if this is due to the name itself or to the annotation that has to be set on the NetworkAttachmentDefinition.

What you expected to happen:
I didn't expect this character limit.

How to reproduce it (as minimally and precisely as possible):

  1. Create macvtap device plugin configuration.
    NOTE: If you make the name field dataplanea and update NetworkAttachmentDefinition to be k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/dataplanea it will work.
kind: ConfigMap
apiVersion: v1
metadata:
  name: macvtap-deviceplugin-config
data:
  DP_MACVTAP_CONF: |
    [
      {
        "name" : "dataplaneab",
        "lowerDevice" : "isol",
        "mode": "bridge",
        "capacity" : 50
      }
    ]
  2. Deploy macvtap DaemonSet using: https://github.com/kubevirt/macvtap-cni/blob/main/manifests/macvtap.yaml
  3. Deploy NetworkAttachmentDefinition
kind: NetworkAttachmentDefinition
apiVersion: k8s.cni.cncf.io/v1
metadata:
  name: isolated-net
  annotations:
    k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/dataplaneab
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "isolated-net",
      "type": "macvtap",
      "ipam": {
              "type": "host-local",
              "subnet": "172.31.0.0/20",
              "rangeStart": "172.31.12.1",
              "rangeEnd": "172.31.15.254",
              "routes": [
                { "dst": "0.0.0.0/0" }
              ],
              "gateway": "172.31.0.1"
            }
    }'
  4. Deploy VM
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-test
spec:
  domain:
    resources:
      requests:
        memory: 64M
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
      - name: cloudinitdisk
        disk:
          bus: virtio
      interfaces:
        - name: isolated-network
          macvtap: {}
  networks:
    - name: isolated-network
      multus:
        networkName: isolated-net
  volumes:
    - name: containerdisk
      containerDisk:
        image: kubevirt/cirros-container-disk-demo:latest
    - name: cloudinitdisk
      cloudInitNoCloud:
        userData: |
            #!/bin/sh

            echo 'printed from cloud-init userdata'

kubectl describe prints out

Status:           Failed
Reason:           UnexpectedAdmissionError
Message:          Pod Allocate failed due to rpc error: code = Unknown desc = numerical result out of range, which is unexpected

Looking at the node where it's scheduled, it looks like the macvtap resource wasn't allocated.

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                                 Requests    Limits
  --------                                 --------    ------
  cpu                                      702m (1%)   770m (1%)
  memory                                   815Mi (0%)  320Mi (0%)
  ephemeral-storage                        0 (0%)      0 (0%)
  hugepages-1Gi                            0 (0%)      0 (0%)
  hugepages-2Mi                            0 (0%)      0 (0%)
  devices.kubevirt.io/kvm                  0           0
  macvtap.network.kubevirt.io/dataplane    0           0
  macvtap.network.kubevirt.io/dataplanea   0           0
  macvtap.network.kubevirt.io/dataplaneab  0           0

NOTE: It also looks like it leaves macvtap.network.kubevirt.io/ resources from previous runs. How does one remove these?
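
A possible explanation, inferred rather than confirmed: as a later issue on this page notes, the device plugin names its interfaces <Name>Mvp<index>, and Linux limits interface names to 15 characters (IFNAMSIZ). "dataplaneab" + "Mvp" + a two-digit index is 16 characters, which would produce exactly the "numerical result out of range" (ERANGE) error above, while "dataplanea" + "Mvp" + two digits is 15 and works.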

Environment:

  • KubeVirt version (use virtctl version): 1.1.0
  • Kubernetes version (use kubectl version): 1.23.9
  • VM or VMI specifications: N/A
  • Cloud provider or hardware configuration: Baremetal
  • OS (e.g. from /etc/os-release): CentOS 7
  • Kernel (e.g. uname -a): 3.10.0-1160.11.1.el7.x86_64
  • Other Tools: Multus Thick client(4.0.2)

macvtap-cni to connect on layer 2 with other cnis

What happened:

We would like to establish Layer 2 connectivity between different Kubernetes pods and VMs running on a Kubernetes cluster. This would allow us to deploy networking devices such as routers and switches into Kubernetes, which is especially useful for networking labs.
It would be fantastic if macvtap-cni allowed connecting to other CNIs as well (like kube-ovn) instead of just physical interfaces.

What you expected to happen:

Allow the NetworkAttachmentDefinition to use other NetworkAttachmentDefinitions instead of a physical interface.

I am no expert, but something like this would be my vision...

--- 
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: other-cni # This is the CNI I would like to use
  namespace: default
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "kube-ovn",
      "server_socket": "/run/openvswitch/kube-ovn-daemon.sock",
      "provider": "other-cni.default.ovn"
    }'
---
kind: NetworkAttachmentDefinition
apiVersion: k8s.cni.cncf.io/v1
metadata:
  name: net1 
  annotations: # I want to use another CNI here instead of a physical interface
    k8s.v1.cni.cncf.io/networkName: other-cni.default.ovn 
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "net1",
      "type": "macvtap",
      "mtu": 1500,
      "promiscMode": true
    }'
---
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vm-ubuntu-1
spec:
  running: true
  template:
    metadata:
      labels:
        special: vmi-macvtap
    spec:
      nodeSelector:
        kubernetes.io/hostname: node1
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
          - name: l2-network
            macvtap: {} # macvtap as usual, to allow L2 through the virt-launcher
        machine:
          type: ""
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      - name: l2-network
        multus: # Secondary multus network
          networkName: net1

      terminationGracePeriodSeconds: 0
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04
        - name: cloudinitdisk
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                enp1s0:
                  dhcp4: true
                enp2s0:
                  addresses:
                    - 10.0.1.2/24
            userData: |-
              #cloud-config
              password: ubuntu
              chpasswd: { expire: False }
              ssh_authorized_keys:
                - ssh-rsa 
              packages: 
                - qemu-guest-agent
                - lldpd
                - nmap
              runcmd:
                - [ systemctl, start, qemu-guest-agent]

Anything else we need to know?:

Thanks in advance for any help!
and thanks already for your help in #97

MacVtap L2 Network connectivity (LLDP) only working while running tcpdump

What happened:
We are trying to establish L2 connectivity between KubeVirt VMs. MacVtap seems like a promising option for this, as it eliminates the bridge in the virt-launcher. When the VMs are successfully started, they can ping each other without a problem, both on the same node and when the VMs are on different nodes.

Initially, the VMs do not see any LLDP neighbors, while the underlying hypervisor or network switch sees both VMs. This can be seen in the screenshot below, which is from the Proxmox (sentinel) host that hosts the Kubernetes nodes; it can see both the VM (vm-ubuntu-1) and the Kubernetes node (node1).

Proxmox (or core switch):
[screenshot: macvtap-proxmox-lldpd]

What you expected to happen:

Now comes the interesting part. To debug this behavior, a tcpdump was started on the virt-launcher's net1 interface, using the network namespace of that container. As soon as this tcpdump is running, the VMs discover the Proxmox via LLDP, and each other, as long as they run on the same node.

[screenshot: macvtap-node1-tcpdump]

For both VMs to discover each other, two tcpdumps need to run, one for each VM's net1 interface.

[screenshot: macvtap-vm-ubuntu-1-lldpd]

How to reproduce it (as minimally and precisely as possible):

Enable Feature Gate
---
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration: 
      featureGates:
        - Macvtap
     
Install macvtap-cni
kubectl apply -f https://github.com/kubevirt/cluster-network-addons-operator/releases/download/v0.85.0/namespace.yaml
kubectl apply -f https://github.com/kubevirt/cluster-network-addons-operator/releases/download/v0.85.0/network-addons-config.crd.yaml
kubectl apply -f https://github.com/kubevirt/cluster-network-addons-operator/releases/download/v0.85.0/operator.yaml
---
apiVersion: networkaddonsoperator.network.kubevirt.io/v1
kind: NetworkAddonsConfig
metadata:
  name: cluster
spec:
  macvtap: {}
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: macvtap-deviceplugin-config
data:
  DP_MACVTAP_CONF: '[]'
  
Deploy Test VMs
---
kind: NetworkAttachmentDefinition
apiVersion: k8s.cni.cncf.io/v1
metadata:
  name: net1 # it needs to be named net1, otherwise the VM doesn't start. 
  annotations:
    k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/ens18
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "net1",
      "type": "macvtap",
      "mtu": 1500
    }'
---
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vm-ubuntu-1
spec:
  running: true
  template:
    metadata:
      labels:
        special: vmi-macvtap
    spec:
      nodeSelector:
        kubernetes.io/hostname: node1
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
          - name: l2-network
            macvtap: {}
        machine:
          type: ""
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      - name: l2-network
        multus: # Secondary multus network
          networkName: net1

      terminationGracePeriodSeconds: 0
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04
        - name: cloudinitdisk
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                enp1s0:
                  dhcp4: true
                enp2s0:
                  addresses:
                    - 10.0.1.2/24
            userData: |-
              #cloud-config
              password: ubuntu
              chpasswd: { expire: False }
              ssh_authorized_keys:
                - ssh-rsa <key>
              packages: 
                - qemu-guest-agent
                - lldpd
                - nmap
              runcmd:
                - [ systemctl, start, qemu-guest-agent]
---
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vm-ubuntu-2
spec:
  running: true
  template:
    metadata:
      labels:
        special: vmi-macvtap
    spec:
      nodeSelector:
        kubernetes.io/hostname: node1
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
          - name: l2-network
            macvtap: {}
        machine:
          type: ""
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      - name: l2-network
        multus: # Secondary multus network
          networkName: net1

      terminationGracePeriodSeconds: 0
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04
        - name: cloudinitdisk
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                enp1s0:
                  dhcp4: true
                enp2s0:
                  addresses:
                    - 10.0.1.3/24
            userData: |-
              #cloud-config
              password: ubuntu
              chpasswd: { expire: False }
              ssh_authorized_keys:
                - ssh-rsa <key>
              packages: 
                - qemu-guest-agent
                - lldpd
                - nmap
              runcmd:
                - [ systemctl, start, qemu-guest-agent]

Start tcpdump to enable L2 LLDP connectivity.

ip link show; ip -all netns exec ip link show
ip netns exec cni-<id> tcpdump -i net1 

Anything else we need to know?:
I have already posted this issue on Kubevirt but did not yet get a reply: kubevirt/kubevirt#9464
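
A possible lead, not verified: tcpdump puts the interface into promiscuous mode unless invoked with -p, which could explain why LLDP frames only flow while it runs. The plugin's promiscMode CNI parameter (documented in the README above) might achieve the same effect without tcpdump, e.g.:

kind: NetworkAttachmentDefinition
apiVersion: k8s.cni.cncf.io/v1
metadata:
  name: net1
  annotations:
    k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/ens18
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "net1",
      "type": "macvtap",
      "mtu": 1500,
      "promiscMode": true
    }'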

Unexpected name for interface

I wanted to follow up on this comment. I'm experiencing the same problem and can resolve it in the same way, but I'm assuming there's a better way.

What happened:

When trying to start a VM using macvtap, something is looking for an interface with a name that matches the name of the multus definition. However, the interface name that exists is in fact net1. So, I have to name the multus definition net1 for it to work.

What you expected to happen:

Well, I'm not absolutely sure what's supposed to happen, but I guess I'm expecting the interface to be created with a name that matches what will be expected. I'm guessing that there may be some renaming involved somewhere? In fact, if I look at the dmesg output on the host, I see this:

[418841.802545] net1: renamed from vlan4Mvp46

I'm guessing that's part of the problem, but the error message is actually looking for the name of the multus definition, not vlan4Mvp46, so I'm not sure.

How to reproduce it (as minimally and precisely as possible):

Here's the NetworkAttachmentDefinition that fails:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: bobnet
  namespace: windows
  annotations:
    k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/vlan4
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "bobnet",
      "type": "macvtap"
    }

Note this works if I change the name from "bobnet" to "net1". The message I get from virt-handler if I don't change the name to "net1" is this:

lstat /proc/3500641/root/sys/class/net/bobnet: no such file or directory

So, I think it's looking for an interface named "bobnet", and not finding one, it complains. But, if I change the name to "net1" it works (and everything works).

Here's the relevant part of the VirtualMachine definition:

networks:
- name: extra
  multus:
    networkName: bobnet
domain:
  devices:
    interfaces:
    - name: extra
      macvtap: {}
  resources:
    limits:
      macvtap.network.kubevirt.io/vlan4: 1

When I change "bobnet" to "net1" in the NAD, I do it here as well, at which point it works.

And here's my macvtap config:

DP_MACVTAP_CONF: >-
    [ {
        "name" : "vlan1",
        "lowerDevice" : "vlan1",
        "mode": "bridge",
        "capacity" : 50
     },
     {
        "name" : "vlan4",
        "lowerDevice" : "vlan4",
        "mode": "bridge",
        "capacity" : 50
     },
     {
        "name" : "bond0",
        "lowerDevice" : "bond0",
        "mode": "bridge",
        "capacity" : 50
     } ]

Anything else we need to know?:

Note that it's still problematic if I leave the name out of the "config" -- it looks as though virt-handler is expecting an interface named according to the name of the NetworkAttachmentDefinition. But, something appears to be re-naming it net1, so it only works if that's actually the name of the NetworkAttachmentDefinition. Or, something like that -- I'm not sure how it's supposed to work.

This is on Kubernetes 1.23, Debian 11.

/dev/tap devices created on physical host rather than container when running in kind

What happened:

Trying to run macvtap-cni in a kind cluster results in the /dev/tap devices being created on the physical host rather than the kind container, causing pod creation to fail.

What you expected to happen:

The /dev/tap devices to be created in the kind container and pod creation to succeed

How to reproduce it (as minimally and precisely as possible):

I created a reproducer using kind/tilt here: https://github.com/detiber/reproducer#reproducer

Anything else we need to know?:

macvtap-cni might terminate in case of stress listing / deleting devices

Create macvtap default empty config (so it will list all device types).

Create 100 bonds, and then delete them all (the delete should trigger the bug).
Increase to 500 if it doesn't happen easily; I managed to reproduce it 3 times out of 3 with 500.

for i in {1..100}
do
  ip link add bond${i} type bond mode 802.3ad
done
for i in {1..100}
do
  ip link del bond${i} type bond mode 802.3ad
done

What happened:

The CNI container terminates and restarts due to a fatal error.
(the pod won't look like it restarted, but -oyaml will show the container within actually did)

 - containerID: cri-o://5f2673b5c66f84d8dced72ba519d81be006a956241e37fee4cbdeb4127eb3c30
    image: quay.io/kubevirt/macvtap-cni@sha256:f20d5e56f8b8c1ab7e5a64e536b66f65aa688b2d1dc0b37e3c26c2af2b481266
    imageID: quay.io/kubevirt/macvtap-cni@sha256:f20d5e56f8b8c1ab7e5a64e536b66f65aa688b2d1dc0b37e3c26c2af2b481266
    lastState:
      terminated:
        containerID: cri-o://d4327c8e7b871e981b5eba0c997aabd275c871187034f0496188d28ff4e2b40a
        exitCode: 2
        finishedAt: "2021-06-30T09:33:22Z"
        reason: Error
        startedAt: "2021-06-30T08:13:17Z"

Reason

I0630 09:33:22.050285    6414 manager.go:127] Remove unused plugin "bond1"
I0630 09:33:22.050302    6414 plugin.go:162] bond1: Stopping plugin server
I0630 09:33:22.050307    6414 plugin.go:165] bond1: Tried to stop stopped DPI
fatal error: concurrent map iteration and map write

docker stats showed that the CNI container consumed 100% CPU and then crashed.
(sometimes it just crashes without showing it)

What you expected to happen:
Should be stable.

How to reproduce it (as minimally and precisely as possible):
See above

Anything else we need to know?:

(possible that those are not relevant - just for consideration)
Maybe we need to consider pagination? (process the work in batches)
Maybe we need to limit the number of goroutines so the CPU usage is capped?

Is it just a missing lock? Or does "Tried to stop stopped DPI" mean
something was being handled twice?

Full log of the dead container

support IPAM

The code does not appear to support the IPAM field of the CNI configuration.

Would it make sense to support calling an IPAM plugin?

failed to lookup device "": Link not found

What happened:
Following the readme results in the error: failed to lookup device "": Link not found
What you expected to happen:

How to reproduce it (as minimally and precisely as possible):
Follow the readme.
Anything else we need to know?:

Events:
  Type     Reason                  Age   From               Message
  ----     ------                  ----  ----               -------
  Normal   Scheduled               18s   default-scheduler  Successfully assigned kube-system/pod to node2
  Normal   AddedInterface          15s   multus             Add eth0 [10.42.1.3/24] from cbr0
  Warning  FailedCreatePodSandBox  15s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5f2e05a02295f7216934f1620474032cf4064208b73d4794f49a98697dd2278a": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: '&{ContainerID:5f2e05a02295f7216934f1620474032cf4064208b73d4794f49a98697dd2278a Netns:/var/run/netns/cni-9dd43fba-4879-0542-60fc-940e67cbffb7 IfName:eth0 Args:K8S_POD_NAME=pod;K8S_POD_INFRA_CONTAINER_ID=5f2e05a02295f7216934f1620474032cf4064208b73d4794f49a98697dd2278a;K8S_POD_UID=8c20ba9c-b635-4e80-b128-7fda8b5e1800;IgnoreUnknown=1;K8S_POD_NAMESPACE=kube-system Path: StdinData:[123 34 99 97 112 97 98 105 108 105 116 105 101 115 34 58 123 34 112 111 114 116 77 97 112 112 105 110 103 115 34 58 116 114 117 101 125 44 34 99 104 114 111 111 116 68 105 114 34 58 34 47 104 111 115 116 114 111 111 116 34 44 34 99 108 117 115 116 101 114 78 101 116 119 111 114 107 34 58 34 47 104 111 115 116 47 101 116 99 47 99 110 105 47 110 101 116 46 100 47 49 48 45 102 108 97 110 110 101 108 46 99 111 110 102 108 105 115 116 34 44 34 99 110 105 67 111 110 102 105 103 68 105 114 34 58 34 47 104 111 115 116 47 101 116 99 47 99 110 105 47 110 101 116 46 100 34 44 34 99 110 105 86 101 114 115 105 111 110 34 58 34 48 46 51 46 49 34 44 34 108 111 103 76 101 118 101 108 34 58 34 118 101 114 98 111 115 101 34 44 34 108 111 103 84 111 83 116 100 101 114 114 34 58 116 114 117 101 44 34 109 117 108 116 117 115 65 117 116 111 99 111 110 102 105 103 68 105 114 34 58 34 47 104 111 115 116 47 101 116 99 47 99 110 105 47 110 101 116 46 100 34 44 34 109 117 108 116 117 115 67 111 110 102 105 103 70 105 108 101 34 58 34 97 117 116 111 34 44 34 110 97 109 101 34 58 34 109 117 108 116 117 115 45 99 110 105 45 110 101 116 119 111 114 107 34 44 34 115 111 99 107 101 116 68 105 114 34 58 34 47 104 111 115 116 47 114 117 110 47 109 117 108 116 117 115 47 34 44 34 116 121 112 101 34 58 34 109 117 108 116 117 115 45 115 104 105 109 34 125]} ContainerID:"5f2e05a02295f7216934f1620474032cf4064208b73d4794f49a98697dd2278a" Netns:"/var/run/netns/cni-9dd43fba-4879-0542-60fc-940e67cbffb7" IfName:"eth0" Args:"K8S_POD_NAME=pod;K8S_POD_INFRA_CONTAINER_ID=5f2e05a02295f7216934f1620474032cf4064208b73d4794f49a98697dd2278a;K8S_POD_UID=8c20ba9c-b635-4e80-b128-7fda8b5e1800;IgnoreUnknown=1;K8S_POD_NAMESPACE=kube-system" Path:"" ERRORED: error configuring pod [kube-system/pod] networking: [kube-system/pod/8c20ba9c-b635-4e80-b128-7fda8b5e1800:dataplane]: error adding container to network "dataplane": failed to lookup device "": Link not found
'

creating macvtap-cni pod failed

What happened:
I followed README.md (https://github.com/kubevirt/macvtap-cni/blob/master/README.md) to create a pod with macvtap-cni.

k8s client version

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"archive", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"$Format:%H$", GitTreeState:"archive", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

pod's k8s yaml
apiVersion: v1
kind: Pod
metadata:
  name: samplepod2
  namespace: kube-system
  annotations:
    k8s.v1.cni.cncf.io/networks: dataplane
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sleep", "180000"]
    resources:
      limits:
        macvtap.network.kubevirt.io/dataplane: 1

describe pod

apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: dataplane
  creationTimestamp: "2020-07-31T11:07:21Z"
  name: samplepod2
  namespace: kube-system
  resourceVersion: "108224341"
  selfLink: /api/v1/namespaces/kube-system/pods/samplepod2
  uid: 019d5a0d-d31e-11ea-90ec-525400a630d9
spec:
  containers:
  - command:
    - /bin/sleep
    - "180000"
    image: busybox
    imagePullPolicy: Always
    name: busybox
    resources:
      limits:
        macvtap.network.kubevirt.io/dataplane: "1"
      requests:
        macvtap.network.kubevirt.io/dataplane: "1"
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-kzr5c
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-kzr5c
    secret:
      defaultMode: 420
      secretName: default-token-kzr5c
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-07-31T11:07:22Z"
    message: '0/25 nodes are available: 1 node(s) were unschedulable, 24 Insufficient
      macvtap.network.kubevirt.io/dataplane.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: BestEffort

root@mgt01:~# kubectl describe pod -n kube-system samplepod2
Name:               samplepod2
Namespace:          kube-system
Priority:           0
PriorityClassName:
Node:
Labels:
Annotations:        k8s.v1.cni.cncf.io/networks: dataplane
Status:             Pending
IP:
Containers:
  busybox:
    Image:      busybox
    Port:
    Host Port:
    Command:
      /bin/sleep
      180000
    Limits:
      macvtap.network.kubevirt.io/dataplane:  1
    Requests:
      macvtap.network.kubevirt.io/dataplane:  1
    Environment:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kzr5c (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-kzr5c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kzr5c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  92m (x168 over 3h17m)  default-scheduler  0/25 nodes are available: 1 node(s) were unschedulable, 24 Insufficient macvtap.network.kubevirt.io/dataplane, 3 Insufficient pods.
  Warning  FailedScheduling  2m37s (x215 over 88m)  default-scheduler  0/25 nodes are available: 1 node(s) were unschedulable, 24 Insufficient macvtap.network.kubevirt.io/dataplane.

What you expected to happen:

  1. Is the content of the readme incomplete, with some steps or configuration procedures missing?
  2. Help to create a macvtap-cni pod.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Trying to setup Macvtap

I'm currently in the process of trying to set up KubeVirt with macvtap; however, there is a problem since the interface gets renamed and I don't know where this comes from.

Neither udev rules nor systemd are responsible for it; I have a feeling it comes from cloud-init, since it's a cloud OS and the network is configured via it.

Either way, I don't think "Mvp" should be the suffix of the interface; there are many default udev rules which target "interface*" and thus will match the macvtap interface, just as in my case.

// Interfaces will be named as <Name><suffix>[0-<Capacity>]
suffix = "Mvp"

I think it would be better if it were a prefix; that way the interface names would be unique, e.g. <prefix>[0-<Capacity>]<Name>, resulting in mvp83eth0. This would also be in line with other interface naming schemes, like cali* for Calico.

Pod error log:
Warning FailedCreatePodSandBox 3m16s (x17 over 6m41s) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "eb51d46d32307adca3855b0a7a09b78091bc98caf5cac92752d043217187284f": plugin type="multus" name="multus-cni-network" failed (add): [opnsense/virt-launcher-opnsense-master-56sld/093ba240-603e-40ee-bd7b-49fc32c0c592:kubevirt]: error adding container to network "kubevirt": failed to lookup device "eth0Mvp83": Link not found

Node kernel log:
[Sun Mar 19 22:33:27 2023] eth0: renamed from eth0Mvp83

Essentially the interface got created but then renamed from eth0Mvp83 to eth0 and multus failed to find it.

Any plans for arm64 support?

