
kubeshark / kubeshark

10.6K 71.0 440.0 26.25 MB

The API traffic analyzer for Kubernetes providing real-time K8s protocol-level visibility, capturing and monitoring all traffic and payloads going in, out and across containers, pods, nodes and clusters. Inspired by Wireshark, purposely built for Kubernetes

Home Page: https://kubeshark.co

License: Apache License 2.0

Go 92.28% Makefile 4.09% Shell 2.65% Smarty 0.98%
kubernetes microservices golang rest grpc amqp kafka redis go microservice

kubeshark's Introduction

Kubeshark: Traffic analyzer for Kubernetes.


Want to see Kubeshark in action, right now? Visit this live demo deployment of Kubeshark.

Kubeshark is an API Traffic Analyzer for Kubernetes providing real-time, protocol-level visibility into Kubernetes’ internal network, capturing and monitoring all traffic and payloads going in, out and across containers, pods, nodes and clusters.

Simple UI

Think TCPDump and Wireshark re-invented for Kubernetes

Getting Started

Download the latest release of Kubeshark's binary distribution and run it following one of these examples:

kubeshark tap
kubeshark tap -n sock-shop "(catalo*|front-end*)"

Running any of the ☝️ above commands will open the Web UI in your browser which streams the traffic in your Kubernetes cluster in real-time.

Homebrew

Homebrew 🍺 users can install the Kubeshark CLI with:

brew install kubeshark
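
Upgrading later uses the standard Homebrew commands (nothing Kubeshark-specific):

brew update
brew upgrade kubeshark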

Helm

Add the helm repository and install the chart:

helm repo add kubeshark https://helm.kubeshark.co
helm install kubeshark kubeshark/kubeshark

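If you want the chart in its own namespace, the standard Helm flags apply; a minimal sketch (these are generic Helm options, not Kubeshark-specific requirements):

helm install kubeshark kubeshark/kubeshark -n kubeshark --create-namespace
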
Building From Source

Clone this repository and run the make command to build it. After the build is complete, the executable can be found at ./bin/kubeshark__.

Documentation

To learn more, read the documentation.

Contributing

We ❤️ pull requests! See CONTRIBUTING.md for the contribution guide.

Code of Conduct

This project is for everyone. We ask that our users and contributors take a few minutes to review our Code of Conduct.

kubeshark's People

Contributors

adamkol-up9, adriang-90, alongir, amitfainholts, atilsensalduz, batazor, chrisredwine, corest, dependabot[bot], dvdlevanon, gadotroee, gustavomassa, haiut, igorgov, iluxa, imcezz, kindknow, ksudhir007, leon-up9, lirazyehezkel, mertyildiran, nimrod-up9, ramiberm, royisland, seltonfiuza, suzuki-shunsuke, tgaliotto, tiptophelmet, undera, ziul


kubeshark's Issues

Allow specifying default filters on the web interface

It would be great if a default set of filters could be set when opening the interface.

If there are a lot of pods, there may be URLs that all of them share but that are not useful when debugging.

An example would be something like request.path != "/api/healthcheck".
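
As a sketch of what such a default could look like: filter expressions can be combined, so a saved default might be something like the following (the second path is a hypothetical example):

request.path != "/api/healthcheck" and request.path != "/metrics"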

ERROR: failed sending telemetry

Describe the bug
Mizu API server was not ready in time

To Reproduce
Steps to reproduce the behavior:

  1. Run mizu tap ".*" -n test

Screenshots
[screenshot]

Desktop (please complete the following information):

  • OS: centos

Mizu API server was not ready in time

[root@k8s-master-xxx mizu]# ./mizu tap
Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached.
Tapping pods in namespaces "default"
+kali-roll-df47956b5-5mdtp
+my-cert-manager-cainjector-5955cd77f8-j5g5x
+my-cert-manager-ff65454bf-q547x
+my-cert-manager-webhook-5ff8499f89-jmrrj
+vault-7594bfbc57-vc4fx
Waiting for Mizu Agent to start...
Mizu API server was not ready in time

Removing mizu resources
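
When the CLI gives up like this, a few standard kubectl checks usually show why the pod never became ready (the pod, container and namespace names are the defaults that appear elsewhere in these reports; adjust for your setup):

kubectl get pods -n mizu
kubectl describe pod mizu-api-server -n mizu
kubectl logs -n mizu mizu-api-server -c mizu-api-server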

Watching API server events loop, error: error in k8s watch: the server could not find the requested resource (get events.events.k8s.io)

Describe the bug
./mizu tap
Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached.
Tapping pods in namespaces "default"
Waiting for Mizu Agent to start...
Watching API server events loop, error: error in k8s watch: the server could not find the requested resource (get events.events.k8s.io)
Mizu API server was not ready in time

To Reproduce
Steps to reproduce the behavior:

  1. Run mizu tap

Desktop (please complete the following information):

  • OS: ubuntu18
  • kubernetes: 1.16.3
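
The error refers to the events.events.k8s.io resource; a quick, hedged check of which events API versions a given cluster actually serves:

kubectl api-versions | grep events.k8s.io
kubectl get events.events.k8s.io -A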

On AKS, tapper not running

Describe the bug
A clear and concise description of what the bug is.

I have admin access, and I am not seeing a mizu tapper pod running in any namespace.

mizu mizu-api-server 2/2 Running 0 6m58s

The log of the mizu-api-server looks good. Please advise.

To Reproduce
Steps to reproduce the behavior:

  1. Run mizu <command> ...
  2. Click on '...'
  3. Scroll down to '...'
  4. See error

Expected behavior
A clear and concise description of what you expected to happen.

Logs
Upload logs:

  1. Run the mizu command with --set dump-logs=true (e.g mizu tap --set dump-logs=true)
  2. Try to reproduce the issue
  3. CTRL+C on terminal tab which runs mizu
  4. Upload the logs zip file from ~/.mizu/mizu_logs_**.zip

Debug or dump logs doesn't create a log file.

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • OS: [e.g. macOS]
  • Web Browser: [e.g. Google Chrome]

Additional context
Add any other context about the problem here.
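
A few generic checks that can narrow this down (assuming the default mizu namespace; whether the tapper runs as a DaemonSet may depend on the version):

kubectl get pods -n mizu -o wide
kubectl get daemonsets -n mizu
kubectl get events -n mizu --sort-by=.lastTimestamp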

Networkpolicy - Allow existing namespace

Describe the bug
When network policies are activated, mizu does not work.

Also, we cannot pre-create a network policy inside the mizu namespace, because mizu refuses to start if the mizu namespace already exists.

To Reproduce
Steps to reproduce the behavior:

  1. Install any networkpolicies using calico
  2. Start mizu tap
  3. No traffic

Expected behavior
3. See traffic
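
For illustration, this is roughly the permissive policy one would want to pre-create in the mizu namespace (a minimal sketch; it only helps once mizu tolerates a pre-existing namespace, which, as noted above, it currently does not):

cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-mizu
  namespace: mizu
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
  ingress:
  - {}
  egress:
  - {}
EOF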

Attributing communication to wrong src

Describe the bug

I believe mizu might be labeling traffic with an erroneous source. For example, in my tap of kube-system, an overwhelming amount of traffic says:

src.name == "kube-prometheus-stack-prometheus-node-exporter.monitoring" (for the record, dest is unresolved)

The src IP is the IP of the node itself (not a pod IP), and the destination IP is just some pod which is not the node exporter in the monitoring namespace; it was a kibana pod in a different namespace. I assumed this is a healthcheck from the kubelet to the kibana pod. Why does it keep saying the src is the node exporter for many entries like these? I'm specifically running my tap on kube-system. Even if the source truly is the node exporter, I have 2 questions:

  • Why is it picking those up? I'm not tapping the monitoring namespace, where they (node-exporter) reside.
  • Why are they all being attributed to the node exporter? I'm not seeing any non-node-exporter health checks; aren't they done constantly by the kubelet? (Meaning, isn't it wrongly attributing at least some of these checks to some service in monitoring?) Even if these entries are accurate, I'd expect to see health checks from other sources.

To Reproduce
Steps to reproduce the behavior:

I ran mizu tap for a given namespace, in my case kube-system.
This was after installing kube-prometheus-stack, which provides the node exporter, but I think this is a red herring.

Expected behavior
src would be attributed to the proper source, matching the IP.

Logs
Upload logs:

Let me know if you want the logs, but mizu appears to be working fine from this perspective and probably won't report anything out of the ordinary; I don't think the logs will show what I'm describing.

Screenshots
Here's an example screenshot:

[screenshot]

Desktop (please complete the following information):

  • OS: windows 10 (k8s running on linux)
  • Web Browser: edge

Additional context

Apologies if I'm completely misreading my results, of course.

Documentation broken URL for sensitiveDataFiltering

Describe the bug
I just came across mizu and was reading the documentation over at https://getmizu.io/. In the Security section, the URL for the redacted keywords is broken.

To Reproduce
Steps to reproduce the behavior:

  1. Visit https://getmizu.io/#:~:text=https%3A//github.com/up9inc/mizu/blob/develop/agent/pkg/sensitiveDataFiltering (chrome link to the url)
  2. Click on 'sensitiveDataFiltering`
  3. See error

Expected behavior
I would expect the link to work

Running mizu fails silently with no logs

Describe the bug
A clear and concise description of what the bug is.

To Reproduce
Steps to reproduce the behavior:

  1. Run mizu tap or any variant of it which does NOT include mizu-resources-namespace and --namespaces
  2. See mizu terminate instantly with no logs or error output
  3. Run mizu tap --set mizu-resources-namespace=my-ns --namespaces=my-ns
  4. Mizu executes as expected
  5. Run mizu tap --set mizu-resources-namespace=mizu --namespaces=mizu
  6. See mizu terminate instantly with no logs or error output

I'm admin on this cluster, and have even gone so far as to create the mizu namespace ahead of time after it failed originally.

Expected behavior

mizu tap command works without mizu-resources-namespace and --namespaces command

Logs

WARNING: No zip logs generated, only CLI logs.

Screenshots

n/a

Desktop (please complete the following information):

  • OS: Darwin Kernel Version 20.6.0: Mon Aug 30 06:12:21 PDT 2021; root:xnu-7195.141.6~3/RELEASE_X86_64
  • Web Browser: n/a

Additional context
Add any other context about the problem here.

We do have an OPA policy which enforces a naming scheme on our namespaces unless a certain label is added. As such, I tried adding the mizu namespace manually with the partner: core label.

Additionally, the mac install instructions don't add mizu to your path, so I manually copied it to /usr/local/bin/mizu though the issue was happening even when executing in my home directory with ./mizu instead of just mizu

➜  ~ rm -rf .mizu
➜  ~ mizu tap --set mizu-resources-namespace=mizu --namespaces=mizu --set dump-logs=true
Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached.
Tapping pods in namespaces "mizu"
+bash
Waiting for Mizu Agent to start...
➜  ~ cat .mizu/mizu_cli.log
[2022-01-27T02:46:07.633+0000] DEBUG ▶ Checking for newer version... ▶ [57108 versionCheck.go:47 CheckNewerVersion]
[2022-01-27T02:46:07.633+0000] DEBUG ▶ Init config finished
 Final config: {
        "Tap": {
                "UploadIntervalSec": 10,
                "PodRegexStr": ".*",
                "GuiPort": 8899,
                "ProxyHost": "127.0.0.1",
                "Namespaces": [
                        "mizu"
                ],
                "Analysis": false,
                "AllNamespaces": false,
                "PlainTextFilterRegexes": null,
                "IgnoredUserAgents": null,
                "DisableRedaction": false,
                "HumanMaxEntriesDBSize": "200MB",
                "DryRun": false,
                "Workspace": "",
                "EnforcePolicyFile": "",
                "ContractFile": "",
                "AskUploadConfirmation": true,
                "ApiServerResources": {
                        "CpuLimit": "750m",
                        "MemoryLimit": "1Gi",
                        "CpuRequests": "50m",
                        "MemoryRequests": "50Mi"
                },
                "TapperResources": {
                        "CpuLimit": "750m",
                        "MemoryLimit": "1Gi",
                        "CpuRequests": "50m",
                        "MemoryRequests": "50Mi"
                },
                "ServiceMesh": false
        },
        "Version": {
                "DebugInfo": false
        },
        "View": {
                "GuiPort": 8899,
                "Url": ""
        },
        "Logs": {
                "FileStr": ""
        },
        "Auth": {
                "EnvName": "up9.app",
                "Token": ""
        },
        "Config": {
                "Regenerate": false
        },
        "AgentImage": "gcr.io/up9-docker-hub/mizu/main:0.22.0",
        "ImagePullPolicyStr": "Always",
        "MizuResourcesNamespace": "mizu",
        "Telemetry": true,
        "DumpLogs": true,
        "KubeConfigPathStr": "",
        "ConfigFilePath": "/Users/peter.dolkens/.mizu/config.yaml",
        "HeadlessMode": false,
        "LogLevelStr": "INFO",
        "ServiceMap": false,
        "OAS": false
}
 ▶ [57108 config.go:57 InitConfig]
[2022-01-27T02:46:07.633+0000] INFO  ▶ Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached. ▶ [57108 tap.go:82 func8]
[2022-01-27T02:46:07.633+0000] DEBUG ▶ Using kube config /Users/peter.dolkens/.kube/config ▶ [57108 provider.go:1055 loadKubernetesConfiguration]
[2022-01-27T02:46:08.017+0000] DEBUG ▶ successfully reported telemetry for cmd tap ▶ [57108 telemetry.go:36 ReportRun]
[2022-01-27T02:46:08.175+0000] INFO  ▶ Tapping pods in namespaces "mizu" ▶ [57108 tapRunner.go:116 RunMizuTap]
[2022-01-27T02:46:08.310+0000] INFO  ▶ +bash ▶ [57108 tapRunner.go:179 printTappedPodsPreview]
[2022-01-27T02:46:08.310+0000] DEBUG ▶ Finished version validation, github version 0.22.0, current version 0.22.0, took 676.551796ms ▶ [57108 versionCheck.go:95 CheckNewerVersion]
[2022-01-27T02:46:08.310+0000] INFO  ▶ Waiting for Mizu Agent to start... ▶ [57108 tapRunner.go:126 RunMizuTap]
➜  ~ ls .mizu
total 8
-rw-r--r--  1 peter.dolkens  staff  2466 Jan 27 02:46 mizu_cli.log
➜  ~ mizu tap datasync
Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached.
Tapping pods in namespaces "my-namespace"
+datasync-deploy-7d94dc6446-d9h5k
Waiting for Mizu Agent to start...
➜  ~ mizu tap --set mizu-resources-namespace=my-namespace --namespaces=my-namespace datasync
Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached.
Tapping pods in namespaces "my-namespace"
+datasync-deploy-5c65c9868c-c44pb
Waiting for Mizu Agent to start...
Mizu is available at http://localhost:8899
➜  ~ k get ns mizu -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{"name":"mizu"},"labels":{"partner":"core"},"name":"mizu"}}
    name: mizu
  creationTimestamp: "2022-01-27T02:04:06Z"
  labels:
    kubernetes.io/metadata.name: mizu
    partner: core
  name: mizu
  resourceVersion: "175991464"
  uid: 078c01bb-9ef6-4f63-98aa-caefed0d2401
spec:
  finalizers:
  - kubernetes
status:
  phase: Active

Service Unavailable

Hi,
after installing the mizu binary I run:
mizu tap ".*" -A

I can see the list of pods being tapped in the terminal.
I get:
Mizu is available at http://localhost:8899/mizu

However when I try to open that I get:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {

},
"status": "Failure",
"message": "error trying to reach service: dial tcp 10.2.19.212:8899: i/o timeout",
"reason": "ServiceUnavailable",
"code": 503
}
kubectl is correctly working on my box.
I can also see the temporary mizu-collector pod running in the cluster.

Is it a port forward issue?
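
The 503 comes from the Kubernetes API server's proxy failing to reach the pod, so comparing against a direct port-forward helps isolate the problem (a hedged sketch; substitute the actual namespace and pod name from your cluster):

kubectl get pods -A | grep mizu
kubectl port-forward -n <namespace> pod/<mizu-api-pod> 8899:8899
# then open http://localhost:8899/mizu in the browser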

Error parsing cluster URL

Describe the bug
Mizu is not able to parse the cluster URL.
kubernetes port-forwarding error: error upgrading connection: error creating request: parse "https://rancher-test.net%2Fk8s%2Fclusters%2Fc-g59tp/api/v1/namespaces/mizu/pods/mizu-api-server/portforward": invalid URL escape "%2F"

To Reproduce
Steps to reproduce the behavior:

  1. Run mizu tap my-custom-srv-deployment-7bdbb449f5-z54lr

Expected behavior
Instance mizu UI displaying traffic.

Logs
mizu_logs_2022_02_12__18_55_49.zip

Desktop (please complete the following information):

  • OS: Linux 5.15.19-1-MANJARO SMP PREEMPT Tue Feb 1 16:58:17 UTC 2022 x86_64 GNU/Linux
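
A standard way to inspect the server URL that the active kubeconfig context hands to clients (here it is the Rancher-style address with %2F escapes that the parser rejects):

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'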

Kubernetes fails to pull the image

Describe the bug
When trying to deploy mizu to a Kubernetes cluster, Kubernetes fails to pull the image from ECR with the error
container "mizu-api-server" in pod "mizu-api-server-7b9df6dcb9-j5ljp" is waiting to start: trying and failing to pull image
I tried to pull the image myself
docker pull 709825985650.dkr.ecr.us-east-1.amazonaws.com/up9/mizufree:0.21.29
Error response from daemon: Head "https://709825985650.dkr.ecr.us-east-1.amazonaws.com/v2/up9/mizufree/manifests/0.21.29": no basic auth credentials

To Reproduce
Steps to reproduce the behavior:

  1. Deploy mizu using helm chart from https://github.com/up9inc/mizu/tree/main/deploy/kubernetes/helm-chart

Links to images
mizuAgent:
  image:
    repository: "709825985650.dkr.ecr.us-east-1.amazonaws.com/up9/mizufree"
    tag: "0.21.29"
tapper:
  image:
    repository: "709825985650.dkr.ecr.us-east-1.amazonaws.com/up9/mizufree"
    tag: "0.21.29"
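
For reference, pulling from that registry requires ECR credentials; a hedged example of authenticating Docker against the registry shown in the error (assumes AWS credentials that are entitled to the image):

aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com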

Dont get traffic from my pods

Describe the bug
My cluster's domain is different from cluster.local, and I can't find a way in the mizu docs to set the correct cluster domain. This is the log from one of the tapper daemons:

[2022-04-13T06:41:33.986+0000] INFO  ▶ socket connection to ws://mizu-api-server.mizu.svc.cluster.local/wsTapper failed: dial tcp: i/o timeout, retrying 28 out of 30 in 2 seconds... ▶ [1 main.go:323 dialSocketWithRetry]

If the cluster domain is not properly set, then the generated URL ws://mizu-api-server.mizu.svc.cluster.local/wsTapper is not valid and the connection cannot be made.

Expected behavior
There should be an option in mizu, such as --cluster-name, that sets the correct cluster domain.

Please add --cluster-name or a similar option to mizu. If there is another way to set this name, please add it to the help docs.
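
A hedged way to confirm the cluster's actual DNS suffix is to look at the search domains of any pod you can exec into, for example the API server pod from this report:

kubectl exec -n mizu mizu-api-server -- cat /etc/resolv.conf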

xdg-open does not exist in Linux WSL2 installations. Detect and use wslview instead

Describe the bug
xdg-open does not exist in Linux WSL2 installations. Detect this and use wslview instead.
http://manpages.ubuntu.com/manpages/impish/man1/wslview.1.html

To Reproduce
Steps to reproduce the behavior:

  1. Run mizu tap -A and observe the "xdg-open" error message. A browser is not automatically opened

Expected behavior
Detect and use wslview in Linux WSL cases. It will open the URL using the default browser in windows.

Logs

Waiting for Mizu Agent to start...
Mizu is available at http://localhost:8899
error while opening browser, exec: "xdg-open": executable file not found in $PATH

Desktop (please complete the following information):

  • OS: Windows 11 - Ubuntu 20.04 running on WSL
  • Web Browser: Edge

I worked around this by creating a symbolic link from xdg-open to wslview and everything worked as expected.

sudo ln -s $(which wslview) /usr/local/bin/xdg-open

Linux on WSL will always have "microsoft" and "WSL" in the kernel version.

uname -a
Linux someHost 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Perhaps this Go module would help get the required information.
https://github.com/matishsiao/goInfo
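
A minimal shell sketch of the requested detection logic (the CLI itself is written in Go, so this only illustrates the check, not the project's implementation):

# WSL kernels report "microsoft" in their release string.
if uname -r | grep -qi microsoft && command -v wslview >/dev/null 2>&1; then
  opener=wslview
else
  opener=xdg-open
fi
"$opener" "http://localhost:8899"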

Couldn't connect to API server, for more info check logs

I am not able to launch mizu: whenever I do so, the mizu-api-server goes into a CrashLoopBackOff state with the error below.
It gives: Failed to get token, Get https://trcc.up9.app/anynonymous/token.

[screenshot]

The K8s version in my environment is 1.20.6
Moreover, when I ran mizu from my Mac terminal (k8s version: 1.21.2), the mizu-api-server pod YAML that got generated was different from the one generated in my private environment. On the Mac terminal it runs fine.

In the former case it uses a projected volume to obtain the token, but in the latter it directly references the service account token, with which it then seems to try to reach some external URL.
MAC mizu api pod for token reference:
[screenshot]

Private env api pod for token reference:
[screenshot]

As a workaround to fix the mizu-api-server pod, I referenced the token using the projected volume; this stabilized the API server and I was able to access the Mizu GUI as well.
But because of this I had to terminate the earlier instance of the mizu api server and launch a new one, so in the Mizu GUI I wasn't able to see any tapped pods.

My point is that it should generate a similar API server pod YAML to the one it produces on the Mac terminal; it shouldn't try to reach an external URL to fetch the token.
Also, is there any functionality with which I can keep the mizu api pod running and tap pods later?

I have attached the mizu_cli.log as reference.
Let me know in case any other details are needed.

Below is the config file.
[screenshot]

mizu_cli.log
mizu_events.log

Unable to tap specified pods

Describe the bug
Mizu is not able to tap the specified pods but is still capturing some traffic related to health and monitoring requests.
Also, the mizu client logs the following output:
"the server could not find the requested resource (get events.events.k8s.io)"
whilst
kubectl get events.events.k8s.io
is able to get all events.

To Reproduce
Run mizu tap my-domain; it can't be reproduced outside the proper VPN because the target k8s cluster is private.

Expected behavior
Expected mizu to capture and display traffic for the requested pods after manually sending HTTP requests to the target pods.
Expected mizu to be able to resolve the same resources as kubectl.

Logs
mizu_logs.zip

Screenshots
[screenshot: up9taperror]

Desktop (please complete the following information):

  • OS: Linux 5.15.25-1-MANJARO SMP PREEMPT Wed Feb 23 14:44:03 UTC 2022 x86_64 GNU/Linux
  • Web Browser: Google Chrome

Additional context
The target cluster has the Istio service mesh installed.

Add support to KUBECONFIG env var

kubectl supports a KUBECONFIG env var that allows us to use another kubeconfig file instead of the default $HOME/.kube/config.

It would be good to support it as well
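
Concretely, the request is for both tools to behave the same way here (kubectl already honors the variable; per this issue, mizu currently falls back to the default path):

export KUBECONFIG=$HOME/.kube/other-cluster.yaml
kubectl get pods   # uses other-cluster.yaml
mizu tap           # requested: honor KUBECONFIG instead of defaulting to ~/.kube/config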

Tapping failed on EKS Fargate

Hi all,

I tried to use mizu on EKS Fargate,
but no streaming appeared at all.

So I tried the same mizu on EKS EC2, and it worked fine.

Is there a right way to use mizu on EKS Fargate?

If mizu does not support EKS Fargate, please tell me: are there any plans to support EKS Fargate?

Traffic validation for missing values

Describe the bug
How would I go about creating a traffic validation rule that checks for a missing header?
That would be a good way to validate authorization everywhere, etc.

To Reproduce
Steps to reproduce the behavior:

  1. Run mizu <command> ...
  2. Click on '...'
  3. Scroll down to '...'
  4. See error

Expected behavior
I'd just like to declare a negative rule, to check for a missing header/JSON value.

Logs
Upload logs:

  1. Run the mizu command with --set dump-logs=true (e.g mizu tap --set dump-logs=true)
  2. Try to reproduce the issue
  3. CTRL+C on terminal tab which runs mizu
  4. Upload the logs zip file from ~/.mizu/mizu_logs_**.zip

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • OS: [e.g. macOS]
  • Web Browser: [e.g. Google Chrome]

Additional context
Add any other context about the problem here.

Mizu API server was not ready in time

Describe the bug
Mizu API server was not ready in time.

To Reproduce
Steps to reproduce the behavior:

  1. Run mizu tap kieserver-proxy-7b6c685f44-4hdpv -n rule-ns

Expected behavior
Mizu should work normally.

Screenshots
[screenshot]

Desktop (please complete the following information):

Don't get traffic from my pods

Describe the bug
I tried to use mizu but I can't get traffic from my pods.

To Reproduce
Steps to reproduce the behavior:

  1. I run Mizu with the following command:
    ./mizu tap "ggn.*" -n ggng-pe-prod --set dump-logs=true
    Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached.
    Tapping pods in namespaces "ggng-pe-prod"
    +ggng-api-listener-68d6859767-67psh
    +ggng-api-listener-68d6859767-bm428
    +ggng-api-listener-68d6859767-cl9rc
    +ggng-api-listener-68d6859767-cw9j6
    +ggng-api-listener-68d6859767-hll78
    +ggng-api-listener-68d6859767-njj2f
    +ggng-api-listener-68d6859767-szxtr
    +ggng-api-listener-68d6859767-vsxrq
    +ggng-api-reconcilier-5c4b5c8f4d-hnp5f
    +ggng-decision-engine-87999c4f4-2k4h8
    +ggng-decision-engine-87999c4f4-5rjvs
    +ggng-decision-engine-87999c4f4-6rbpj
    +ggng-decision-engine-87999c4f4-898d6
    +ggng-decision-engine-87999c4f4-8hfm2
    +ggng-decision-engine-87999c4f4-b8wbz
    +ggng-decision-engine-87999c4f4-cbp6g
    +ggng-decision-engine-87999c4f4-ctxlg
    +ggng-decision-engine-87999c4f4-dj9wn
    +ggng-decision-engine-87999c4f4-hq25h
    +ggng-decision-engine-87999c4f4-kglsg
    +ggng-decision-engine-87999c4f4-nl7mn
    +ggng-decision-engine-87999c4f4-pwbtc
    +ggng-decision-engine-87999c4f4-rrnpq
    +ggng-decision-engine-87999c4f4-vxskz
    +ggng-decision-engine-87999c4f4-x5njk
    +ggng-decision-engine-87999c4f4-xslrh
    +ggng-decision-engine-87999c4f4-z4mpd
    +ggng-encoder-gateway0-6687455844-654sp
    +ggng-encoder-gateway1-6fff5d4ffb-z6c2l
    +ggng-encoder-gateway2-8876f6476-9h2sx
    +ggng-encoder-gateway3-84764bbd64-kgpcf
    +ggng-encoder-gateway4-5c855b6bfb-4n5m2
    +ggng-encoder-gateway5-f5c4b659b-vhpzd
    +ggng-encoder-gateway6-566b78848d-fx9sk
    +ggng-encoder-gateway7-6647f5579b-bctjb
    +ggng-encoder-gateway8-577d568fd8-wgc9z
    Waiting for Mizu Agent to start...
    Mizu is available at http://localhost:8899

Screenshots
[screenshot]

mizu
mizu_command
mizu_logs_2022_03_30__23_27_33.zip

Desktop (please complete the following information):

  • OS: [Fedora 35]
  • Web Browser: [Google Chrome]

Additional context

An error is reported during deployment

When accessing the web interface, a panic error is reported. No particular operation was performed; only mouse clicks.

[2022-01-23T14:12:27.728+0000] INFO ▶ Websocket event - Browser socket connected, socket ID: 10 ▶ [1 socket_server_handlers.go:35 WebSocketConnect]
2022/01/23 14:12:27 Reached EOF on server connection.
[2022-01-23T14:12:32.211+0000] INFO ▶ Websocket event - Browser socket disconnected, socket ID: 10 ▶ [1 socket_server_handlers.go:47 WebSocketDisconnect]
2022/01/23 14:12:32 Reached EOF on server connection.
2022/01/23 14:12:32 http: response.Write on hijacked connection from github.com/gin-gonic/gin.(*responseWriter).Write (response_writer.go:78)

2022/01/23 14:12:32 [Recovery] 2022/01/23 - 14:12:32 panic recovered:
http: connection has been hijacked
/go/pkg/mod/github.com/gin-gonic/gin@<version>/render/json.go:56 (0x21165a6)
/go/pkg/mod/github.com/gin-gonic/gin@<version>/context.go:927 (0x211c248)
/go/pkg/mod/github.com/gin-gonic/gin@<version>/context.go:970 (0x213527e)
/go/pkg/mod/github.com/gin-gonic/gin@<version>/context.go:199 (0x213521e)
/app/agent-build/pkg/middlewares/requiresAuth.go:27 (0x21351ac)
/go/pkg/mod/github.com/gin-gonic/gin@<version>/context.go:168 (0x2134b82)
/app/agent-build/pkg/middlewares/cors.go:17 (0x2134b69)
/go/pkg/mod/github.com/gin-gonic/gin@<version>/context.go:168 (0x216fe5b)
/app/agent-build/main.go:283 (0x216fe42)
/go/pkg/mod/github.com/gin-gonic/gin@<version>/context.go:168 (0x212e179)
/go/pkg/mod/github.com/gin-gonic/gin@<version>/recovery.go:99 (0x212e160)
/go/pkg/mod/github.com/gin-gonic/gin@<version>/context.go:168 (0x212d253)
/go/pkg/mod/github.com/gin-gonic/gin@<version>/logger.go:241 (0x212d212)
/go/pkg/mod/github.com/gin-gonic/gin@<version>/context.go:168 (0x2122eaf)
/go/pkg/mod/github.com/gin-gonic/gin@<version>/gin.go:555 (0x2122e95)
/go/pkg/mod/github.com/gin-gonic/gin@<version>/gin.go:511 (0x212294a)
/usr/local/go/src/net/http/server.go:2868 (0xf3b222)
/usr/local/go/src/net/http/server.go:1933 (0xf3664c)
/usr/local/go/src/runtime/asm_amd64.s:1371 (0xc6e700)

Minikube - CrashLoopBackOff

Describe the bug
mizu-tapper "CrashLoopBackOff"

Logs
Upload logs:
logs-minikube

Desktop (please complete the following information):

  • OS: [ macOS - Apple Silicon]
  • Web Browser: [Google Chrome]

Couldn't find the kube config file, or file is empty

Describe the bug
Mizu claims it can't find the kube config file at the stated location but it's there.

To Reproduce
Steps to reproduce the behavior:

  1. Run mizu tap <pod-name>
  2. See error: "Couldn't find the kube config file, or file is empty (/path/to/.kube/config)"

Expected behavior
No error since the config file exists

Logs
There are no logs / zip file created when running the mizu command with --set dump-logs=true

Desktop (please complete the following information):

  • OS: macOS 11.6.4

Additional context

Here is the general structure of my config file:

apiVersion: v1
clusters:
- cluster:
    server: <server>
  name: ab
- cluster:
    server: <server>
  name: ac
- cluster:
    server: <server>
  name: ca
- cluster:
    server: <server>
  name: db
- cluster:
    server: <server>
  name: dt
- cluster:
    server: <server>
  name: et
contexts:
- context:
    cluster: ab
    user: iap
  name: ab
- context:
    cluster: ac
    user: iap
  name: ac
- context:
    cluster: ca
    user: iap
  name: ca
- context:
    cluster: db
    user: iap
  name: db
- context:
    cluster: dt
    user: iap
  name: dt
- context:
    cluster: et
    user: iap
  name: et
current-context: ""
kind: Config
preferences: {}
users:
- name: iap
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
...
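
One detail that stands out in the pasted config is current-context: ""; a hedged guess is that mizu cannot select a cluster when no context is set. Standard kubectl commands to check and set one:

kubectl config current-context    # errors if no context is selected
kubectl config use-context ab     # or any of the contexts defined above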

Possibility to set imagePullSecrets

Describe the bug
My Kubernetes cluster has restrictions: I can only use Docker images from a corporate private registry, protected with credentials.
When the mizu pod is deployed in Kubernetes, the pod is in error:

Error: ErrImagePull
Failed to pull image "<corporate-registry>/up9inc/mizu:30.4": rpc error: code = Unknown desc = Error response from daemon: unauthorized: The client does not have permission for manifest

Expected behavior
Could you provide a parameter in config.yaml to set imagePullSecrets?
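
For illustration, the kind of secret such a config option would reference is a standard docker-registry pull secret (placeholder values; the mizu namespace is assumed):

kubectl create secret docker-registry corp-registry-cred \
  --namespace mizu \
  --docker-server=<corporate-registry> \
  --docker-username=<user> \
  --docker-password=<password>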

Keeps crashing when running against a cluster with gitlab running inside

Describe the bug
This is the first time I'm using mizu, so I'm trying it in a nonprod gitlab toolbox environment.
This cluster only contains a gitlab instance running in the gitlab-system namespace, with the default gitlab components installed.
It was running OK for a minute or two, then crashed with the following error in the log:

[2022-05-17T00:21:27.369+0000] INFO  ▶ setting 172.31.17.243:2379=etcd-cluster.default ▶ [1 resolver.go:178 saveResolvedName]
[2022-05-17T00:21:27.369+0000] INFO  ▶ setting 172.31.36.21=etcd-cluster.default ▶ [1 resolver.go:178 saveResolvedName]
[2022-05-17T00:21:27.369+0000] INFO  ▶ setting 172.31.36.21:2379=etcd-cluster.default ▶ [1 resolver.go:178 saveResolvedName]
[2022-05-17T00:21:27.627+0000] INFO  ▶ setting 172.31.15.87=mizu-api-server.mizu ▶ [1 resolver.go:178 saveResolvedName]
[2022-05-17T00:21:27.627+0000] INFO  ▶ setting 172.31.15.87:8899=mizu-api-server.mizu ▶ [1 resolver.go:178 saveResolvedName]
[2022-05-17T00:21:27.639+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  container_memory_working_set_bytes{namespace="gitlab-system",pod=~"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8",container!="",pod!=""}
)
&time=1652746843.256, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  container_memory_working_set_bytes{namespace=\"gitlab-system\",pod=~\"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8\",container!=\"\",pod!=\"\"}\n)\n&time=1652746843.256": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.639+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  container_memory_working_set_bytes{namespace="gitlab-system",pod=~"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8",container!="",pod!=""}
)
&time=1652746843.256, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  container_memory_working_set_bytes{namespace=\"gitlab-system\",pod=~\"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8\",container!=\"\",pod!=\"\"}\n)\n&time=1652746843.256": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.641+0000] WARNING ▶ Failed processing entry %!d(string=000000000000000000000330): parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  container_memory_working_set_bytes{namespace=\"gitlab-system\",pod=~\"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8\",container!=\"\",pod!=\"\"}\n)\n&time=1652746843.256": net/url: invalid control character in URL ▶ [1 oas_generator.go:181 handleHARWithSource]
[2022-05-17T00:21:27.641+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  irate (
      container_cpu_usage_seconds_total{namespace="gitlab-system",pod=~"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8",container!="",pod!=""}[120s]
  )
)
&time=1652746843.256, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=~\"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n&time=1652746843.256": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.641+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  irate (
      container_cpu_usage_seconds_total{namespace="gitlab-system",pod=~"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8",container!="",pod!=""}[120s]
  )
)
&time=1652746843.256, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=~\"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n&time=1652746843.256": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.641+0000] WARNING ▶ Failed processing entry %!d(string=000000000000000000000331): parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=~\"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n&time=1652746843.256": net/url: invalid control character in URL ▶ [1 oas_generator.go:181 handleHARWithSource]
[2022-05-17T00:21:27.642+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  container_memory_working_set_bytes{namespace="gitlab-system",pod="gitlab-sidekiq-all-in-1-v2-69786f47f5-lnkfm",container!="",pod!=""}
)
&time=1652746843.282, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  container_memory_working_set_bytes{namespace=\"gitlab-system\",pod=\"gitlab-sidekiq-all-in-1-v2-69786f47f5-lnkfm\",container!=\"\",pod!=\"\"}\n)\n&time=1652746843.282": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.642+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  container_memory_working_set_bytes{namespace="gitlab-system",pod="gitlab-sidekiq-all-in-1-v2-69786f47f5-lnkfm",container!="",pod!=""}
)
&time=1652746843.282, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  container_memory_working_set_bytes{namespace=\"gitlab-system\",pod=\"gitlab-sidekiq-all-in-1-v2-69786f47f5-lnkfm\",container!=\"\",pod!=\"\"}\n)\n&time=1652746843.282": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.642+0000] WARNING ▶ Failed processing entry %!d(string=000000000000000000000332): parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  container_memory_working_set_bytes{namespace=\"gitlab-system\",pod=\"gitlab-sidekiq-all-in-1-v2-69786f47f5-lnkfm\",container!=\"\",pod!=\"\"}\n)\n&time=1652746843.282": net/url: invalid control character in URL ▶ [1 oas_generator.go:181 handleHARWithSource]
[2022-05-17T00:21:27.643+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?time=1652746843.282&query=sum by (pod,container) (
  irate (
      container_cpu_usage_seconds_total{namespace="gitlab-system",pod="gitlab-sidekiq-all-in-1-v2-69786f47f5-lnkfm",container!="",pod!=""}[120s]
  )
)
, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?time=1652746843.282&query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=\"gitlab-sidekiq-all-in-1-v2-69786f47f5-lnkfm\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.643+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?time=1652746843.282&query=sum by (pod,container) (
  irate (
      container_cpu_usage_seconds_total{namespace="gitlab-system",pod="gitlab-sidekiq-all-in-1-v2-69786f47f5-lnkfm",container!="",pod!=""}[120s]
  )
)
, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?time=1652746843.282&query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=\"gitlab-sidekiq-all-in-1-v2-69786f47f5-lnkfm\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.643+0000] WARNING ▶ Failed processing entry %!d(string=000000000000000000000333): parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?time=1652746843.282&query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=\"gitlab-sidekiq-all-in-1-v2-69786f47f5-lnkfm\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n": net/url: invalid control character in URL ▶ [1 oas_generator.go:181 handleHARWithSource]
[2022-05-17T00:21:27.644+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  container_memory_working_set_bytes{namespace="gitlab-system",pod=~"gitlab-webservice-default-864574fbd7-bjz6x|gitlab-webservice-default-864574fbd7-x8bn2",container!="",pod!=""}
)
&time=1652746843.292, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  container_memory_working_set_bytes{namespace=\"gitlab-system\",pod=~\"gitlab-webservice-default-864574fbd7-bjz6x|gitlab-webservice-default-864574fbd7-x8bn2\",container!=\"\",pod!=\"\"}\n)\n&time=1652746843.292": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.644+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  container_memory_working_set_bytes{namespace="gitlab-system",pod=~"gitlab-webservice-default-864574fbd7-bjz6x|gitlab-webservice-default-864574fbd7-x8bn2",container!="",pod!=""}
)
&time=1652746843.292, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  container_memory_working_set_bytes{namespace=\"gitlab-system\",pod=~\"gitlab-webservice-default-864574fbd7-bjz6x|gitlab-webservice-default-864574fbd7-x8bn2\",container!=\"\",pod!=\"\"}\n)\n&time=1652746843.292": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.644+0000] WARNING ▶ Failed processing entry %!d(string=000000000000000000000334): parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  container_memory_working_set_bytes{namespace=\"gitlab-system\",pod=~\"gitlab-webservice-default-864574fbd7-bjz6x|gitlab-webservice-default-864574fbd7-x8bn2\",container!=\"\",pod!=\"\"}\n)\n&time=1652746843.292": net/url: invalid control character in URL ▶ [1 oas_generator.go:181 handleHARWithSource]
[2022-05-17T00:21:27.645+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  irate (
      container_cpu_usage_seconds_total{namespace="gitlab-system",pod=~"gitlab-webservice-default-864574fbd7-bjz6x|gitlab-webservice-default-864574fbd7-x8bn2",container!="",pod!=""}[120s]
  )
)
&time=1652746843.292, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=~\"gitlab-webservice-default-864574fbd7-bjz6x|gitlab-webservice-default-864574fbd7-x8bn2\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n&time=1652746843.292": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.645+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  irate (
      container_cpu_usage_seconds_total{namespace="gitlab-system",pod=~"gitlab-webservice-default-864574fbd7-bjz6x|gitlab-webservice-default-864574fbd7-x8bn2",container!="",pod!=""}[120s]
  )
)
&time=1652746843.292, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=~\"gitlab-webservice-default-864574fbd7-bjz6x|gitlab-webservice-default-864574fbd7-x8bn2\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n&time=1652746843.292": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.646+0000] WARNING ▶ Failed processing entry %!d(string=000000000000000000000335): parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=~\"gitlab-webservice-default-864574fbd7-bjz6x|gitlab-webservice-default-864574fbd7-x8bn2\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n&time=1652746843.292": net/url: invalid control character in URL ▶ [1 oas_generator.go:181 handleHARWithSource]
[2022-05-17T00:21:27.646+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?time=1652746843.315&query=sum by (pod,container) (
  irate (
      container_cpu_usage_seconds_total{namespace="gitlab-system",pod="gitlab-gitlab-pages-b5cdc49c6-ghdrg",container!="",pod!=""}[120s]
  )
)
, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?time=1652746843.315&query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=\"gitlab-gitlab-pages-b5cdc49c6-ghdrg\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.646+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?time=1652746843.315&query=sum by (pod,container) (
  irate (
      container_cpu_usage_seconds_total{namespace="gitlab-system",pod="gitlab-gitlab-pages-b5cdc49c6-ghdrg",container!="",pod!=""}[120s]
  )
)
, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?time=1652746843.315&query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=\"gitlab-gitlab-pages-b5cdc49c6-ghdrg\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.647+0000] WARNING ▶ Failed processing entry %!d(string=000000000000000000000336): parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?time=1652746843.315&query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=\"gitlab-gitlab-pages-b5cdc49c6-ghdrg\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n": net/url: invalid control character in URL ▶ [1 oas_generator.go:181 handleHARWithSource]
[2022-05-17T00:21:27.647+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  container_memory_working_set_bytes{namespace="gitlab-system",pod="gitlab-gitlab-pages-b5cdc49c6-ghdrg",container!="",pod!=""}
)
&time=1652746843.315, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  container_memory_working_set_bytes{namespace=\"gitlab-system\",pod=\"gitlab-gitlab-pages-b5cdc49c6-ghdrg\",container!=\"\",pod!=\"\"}\n)\n&time=1652746843.315": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.647+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  container_memory_working_set_bytes{namespace="gitlab-system",pod="gitlab-gitlab-pages-b5cdc49c6-ghdrg",container!="",pod!=""}
)
&time=1652746843.315, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  container_memory_working_set_bytes{namespace=\"gitlab-system\",pod=\"gitlab-gitlab-pages-b5cdc49c6-ghdrg\",container!=\"\",pod!=\"\"}\n)\n&time=1652746843.315": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.647+0000] WARNING ▶ Failed processing entry %!d(string=000000000000000000000337): parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  container_memory_working_set_bytes{namespace=\"gitlab-system\",pod=\"gitlab-gitlab-pages-b5cdc49c6-ghdrg\",container!=\"\",pod!=\"\"}\n)\n&time=1652746843.315": net/url: invalid control character in URL ▶ [1 oas_generator.go:181 handleHARWithSource]
[2022-05-17T00:21:27.649+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  irate (
      container_cpu_usage_seconds_total{namespace="gitlab-system",pod=~"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8",container!="",pod!=""}[120s]
  )
)
&time=1652746843.256, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=~\"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n&time=1652746843.256": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.649+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  irate (
      container_cpu_usage_seconds_total{namespace="gitlab-system",pod=~"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8",container!="",pod!=""}[120s]
  )
)
&time=1652746843.256, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=~\"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n&time=1652746843.256": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.650+0000] WARNING ▶ Failed processing entry %!d(string=000000000000000000000343): parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=~\"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n&time=1652746843.256": net/url: invalid control character in URL ▶ [1 oas_generator.go:181 handleHARWithSource]
[2022-05-17T00:21:27.650+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  irate (
      container_cpu_usage_seconds_total{namespace="gitlab-system",pod=~"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8",container!="",pod!=""}[120s]
  )
)
&time=1652746843.256, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=~\"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n&time=1652746843.256": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
[2022-05-17T00:21:27.650+0000] ERROR ▶ Failed to parse entry URL: http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (
  irate (
      container_cpu_usage_seconds_total{namespace="gitlab-system",pod=~"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8",container!="",pod!=""}[120s]
  )
)
&time=1652746843.256, err: parse "http://prometheus-k8s.monitoring.svc:9090/api/v1/query?query=sum by (pod,container) (\n  irate (\n      container_cpu_usage_seconds_total{namespace=\"gitlab-system\",pod=~\"gitlab-gitlab-shell-7444c568f8-7ghln|gitlab-gitlab-shell-7444c568f8-nrnk8\",container!=\"\",pod!=\"\"}[120s]\n  )\n)\n&time=1652746843.256": net/url: invalid control character in URL ▶ [1 oas_generator.go:191 getGen]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1b42fea]

goroutine 14 [running]:
github.com/up9inc/mizu/agent/pkg/oas.(*defaultOasGenerator).getGen(0xc000101310, {0xc000f59560, 0x19}, {0xc000231180, 0x12a})
	/app/agent-build/pkg/oas/oas_generator.go:197 +0x1ca
github.com/up9inc/mizu/agent/pkg/oas.(*defaultOasGenerator).handleHARWithSource(0x12f, 0xc000363b10)
	/app/agent-build/pkg/oas/oas_generator.go:172 +0x7e
github.com/up9inc/mizu/agent/pkg/oas.(*defaultOasGenerator).handleEntry(0xc0008ad000, 0xc000cf31e0)
	/app/agent-build/pkg/oas/oas_generator.go:164 +0x39d
github.com/up9inc/mizu/agent/pkg/oas.(*defaultOasGenerator).runGenerator(0xc000101310)
	/app/agent-build/pkg/oas/oas_generator.go:138 +0x4be
created by github.com/up9inc/mizu/agent/pkg/oas.(*defaultOasGenerator).Start
	/app/agent-build/pkg/oas/oas_generator.go:75 +0x25b

To Reproduce
Steps to reproduce the behavior:

  1. Run mizu tap -A
  2. Wait for few mins
  3. Click on service map
  4. See error -> service crashed, not responding
  5. Refresh page, showing
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "no endpoints available for service \"mizu-api-server:80\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
  6. Check the log with k logs -f -n mizu mizu-api-server -c mizu-api-server; the error message shown is as above.

Expected behavior
Mizu should just do its job normally, correctly parsing what it needs to; even if it cannot parse something, it shouldn't crash.

Logs
Cannot reproduce the issue... I'll try to upload logs if I see one.

Desktop (please complete the following information):

  • OS: macOS M1
  • Web Browser: Chrome

Getting this error in AWS EKS

I use AWS EKS and set the kubeconfig using the command below:
aws eks update-kubeconfig --name <cluster_value> --kubeconfig <path/value> --region

then set an alias to the kubeconfig.

While running the mizu command I get the error below:

[screenshot]

Can not start in AWS IAM environment

Describe the bug
In my organization, we use AWS IAM to authenticate access to our Kubernetes cluster. Mizu seems to have trouble booting up in such an environment.
It just silently fails when running mizu tap PODNAME, but when running mizu view the following error is shown:

Failed to found mizu service services "mizu-api-server" is forbidden: User "<redacted>" cannot get resource "services" in API group "" in the namespace "mizu"

I suspect mizu is incompatible with the authentication mechanism and perhaps that is also the reason why mizu tap PODNAME does nothing.

To Reproduce
Steps to reproduce the behavior:

  1. Have your .kube/config use aws-iam-authenticator to authenticate to the Kubernetes cluster
  2. Run mizu tap PODNAME
  3. Mizu terminates after the log message Waiting for Mizu Agent to start...

Expected behavior
Mizu should boot successfully

Logs

[2022-02-07T22:42:44.902+0100] DEBUG ▶ Checking for newer version... ▶ [19526 versionCheck.go:47 CheckNewerVersion]
[2022-02-07T22:42:44.902+0100] DEBUG ▶ Init config finished
 Final config: {
	"Tap": {
		"UploadIntervalSec": 10,
		"PodRegexStr": ".*",
		"GuiPort": 8899,
		"ProxyHost": "127.0.0.1",
		"Namespaces": null,
		"Analysis": false,
		"AllNamespaces": false,
		"PlainTextFilterRegexes": null,
		"IgnoredUserAgents": null,
		"DisableRedaction": false,
		"HumanMaxEntriesDBSize": "200MB",
		"DryRun": false,
		"Workspace": "",
		"EnforcePolicyFile": "",
		"ContractFile": "",
		"AskUploadConfirmation": true,
		"ApiServerResources": {
			"CpuLimit": "750m",
			"MemoryLimit": "1Gi",
			"CpuRequests": "50m",
			"MemoryRequests": "50Mi"
		},
		"TapperResources": {
			"CpuLimit": "750m",
			"MemoryLimit": "1Gi",
			"CpuRequests": "50m",
			"MemoryRequests": "50Mi"
		},
		"ServiceMesh": false
	},
	"Version": {
		"DebugInfo": false
	},
	"View": {
		"GuiPort": 8899,
		"Url": ""
	},
	"Logs": {
		"FileStr": ""
	},
	"Auth": {
		"EnvName": "up9.app",
		"Token": ""
	},
	"Config": {
		"Regenerate": false
	},
	"AgentImage": "docker.io/up9inc/mizu:0.25.0",
	"KratosImage": "gcr.io/up9-docker-hub/mizu-kratos/stable:0.0.0",
	"KetoImage": "gcr.io/up9-docker-hub/mizu-keto/stable:0.0.0",
	"ImagePullPolicyStr": "Always",
	"MizuResourcesNamespace": "mizu",
	"Telemetry": true,
	"DumpLogs": true,
	"KubeConfigPathStr": "",
	"ConfigFilePath": "/Users/<redacted>/.mizu/config.yaml",
	"HeadlessMode": false,
	"LogLevelStr": "INFO",
	"ServiceMap": false,
	"OAS": false,
	"Elastic": {
		"User": "",
		"Password": "",
		"Url": ""
	}
}
 ▶ [19526 config.go:57 InitConfig]
[2022-02-07T22:42:44.902+0100] INFO  ▶ Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached. ▶ [19526 tap.go:80 func9]
[2022-02-07T22:42:44.902+0100] DEBUG ▶ Using kube config /Users/<redacted>/.kube/config ▶ [19526 provider.go:1176 loadKubernetesConfiguration]
[2022-02-07T22:42:45.839+0100] DEBUG ▶ Finished version validation, github version 0.25.0, current version 0.25.0, took 937.132394ms ▶ [19526 versionCheck.go:95 CheckNewerVersion]
[2022-02-07T22:42:46.898+0100] INFO  ▶ Tapping pods in namespaces "<redacted>" ▶ [19526 tapRunner.go:116 RunMizuTap]
[2022-02-07T22:42:47.277+0100] INFO  ▶ +<redacted> ▶ [19526 tapRunner.go:186 printTappedPodsPreview]
[2022-02-07T22:42:47.277+0100] INFO  ▶ Waiting for Mizu Agent to start... ▶ [19526 tapRunner.go:126 RunMizuTap]
[2022-02-07T22:42:50.447+0100] DEBUG ▶ Checking for newer version... ▶ [19538 versionCheck.go:47 CheckNewerVersion]
[2022-02-07T22:42:50.447+0100] DEBUG ▶ Init config finished
 Final config: {
	"Tap": {
		"UploadIntervalSec": 10,
		"PodRegexStr": ".*",
		"GuiPort": 8899,
		"ProxyHost": "127.0.0.1",
		"Namespaces": null,
		"Analysis": false,
		"AllNamespaces": false,
		"PlainTextFilterRegexes": null,
		"IgnoredUserAgents": null,
		"DisableRedaction": false,
		"HumanMaxEntriesDBSize": "200MB",
		"DryRun": false,
		"Workspace": "",
		"EnforcePolicyFile": "",
		"ContractFile": "",
		"AskUploadConfirmation": true,
		"ApiServerResources": {
			"CpuLimit": "750m",
			"MemoryLimit": "1Gi",
			"CpuRequests": "50m",
			"MemoryRequests": "50Mi"
		},
		"TapperResources": {
			"CpuLimit": "750m",
			"MemoryLimit": "1Gi",
			"CpuRequests": "50m",
			"MemoryRequests": "50Mi"
		},
		"ServiceMesh": false
	},
	"Version": {
		"DebugInfo": false
	},
	"View": {
		"GuiPort": 8899,
		"Url": ""
	},
	"Logs": {
		"FileStr": ""
	},
	"Auth": {
		"EnvName": "up9.app",
		"Token": ""
	},
	"Config": {
		"Regenerate": false
	},
	"AgentImage": "docker.io/up9inc/mizu:0.25.0",
	"KratosImage": "gcr.io/up9-docker-hub/mizu-kratos/stable:0.0.0",
	"KetoImage": "gcr.io/up9-docker-hub/mizu-keto/stable:0.0.0",
	"ImagePullPolicyStr": "Always",
	"MizuResourcesNamespace": "mizu",
	"Telemetry": true,
	"DumpLogs": false,
	"KubeConfigPathStr": "",
	"ConfigFilePath": "/Users/<redacted>/.mizu/config.yaml",
	"HeadlessMode": false,
	"LogLevelStr": "INFO",
	"ServiceMap": false,
	"OAS": false,
	"Elastic": {
		"User": "",
		"Password": "",
		"Url": ""
	}
}
 ▶ [19538 config.go:57 InitConfig]
[2022-02-07T22:42:50.448+0100] DEBUG ▶ Using kube config /Users/<redacted>/.kube/config ▶ [19538 provider.go:1176 loadKubernetesConfiguration]
[2022-02-07T22:42:50.946+0100] DEBUG ▶ successfully reported telemetry for cmd view ▶ [19538 telemetry.go:36 ReportRun]
[2022-02-07T22:42:51.299+0100] DEBUG ▶ Finished version validation, github version 0.25.0, current version 0.25.0, took 852.210223ms ▶ [19538 versionCheck.go:95 CheckNewerVersion]
[2022-02-07T22:42:52.147+0100] ERROR ▶ Failed to found mizu service services "mizu-api-server" is forbidden: User "<redacted>" cannot get resource "services" in API group "" in the namespace "mizu" ▶ [19538 viewRunner.go:32 runMizuView]


Desktop (please complete the following information):

  • OS: macOS

Additional context
Unfortunately, I am not too familiar with the authentication setup on the AWS side. But it is clear that our users do not have the access rights needed by Mizu.
Is it possible for Mizu to work around this limitation?
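
A quick way to confirm what the IAM-mapped user is actually allowed to do is to ask the API server directly. A minimal check, using the mizu namespace and the resources mentioned in the error and in the permissions list further down this page:

kubectl auth can-i get services --namespace mizu
kubectl auth can-i create daemonsets --namespace mizu
kubectl auth can-i --list --namespace mizu

If these come back "no", the problem is on the RBAC/IAM mapping side rather than in Mizu itself.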

Spam email advertisement sent to my work mail

Describe the bug
The bug is that you bug people with unwanted emails.
Sending me advertisements to my PRIVATE, UNPUBLISHED WORK email address, letting me know that there is this new shiny thing I might want to use, is definitely NOT the way to get my attention!

How dare you? Who do you think you are, that you can send people emails without any consent?
How many people do you think this will annoy and piss off, instead of getting you the positive views you want?

I would have been interested if you hadn't done it like this! This totally ruins your reputation.

First of all, I want you to tell me where you got my work email address, because the third party who gave it to you is holding my personal information illegally. I never gave anyone, anywhere, permission to use my (once again) private, unpublished work email address.

Please, tell me your source of that information.

And next time, don't spam people! There is enough spam and advertisements on the internet already.
Tech companies should be preventing and killing spam, not participating in this unethical practice!

And no, an unsubscribe link doesn't justify your horrible actions. Not now, not ever.

To Reproduce
Do nothing, don't publish your email address anywhere.

Expected behavior
No spam getting to my inbox.

Error while proxying request: context canceled

When running mizu tap ".*", I get the following error:
E1021 21:34:46.539954 3777907 proxy_server.go:147] Error while proxying request: context canceled

Logs from the mizu-tapper-daemon-set pod:
panic: Error connecting to socket server at ws://mizu-api-server.mizu.svc.cluster.local/wsTapper dial tcp: lookup mizu-api-server.mizu.svc.cluster.local: Try again
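
The tapper panic above looks like an in-cluster DNS resolution failure rather than something Mizu-specific. A rough way to check whether the service name resolves at all from inside the cluster (assuming a busybox image can be pulled):

kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup mizu-api-server.mizu.svc.cluster.local

If the lookup fails here too, the issue is with cluster DNS (CoreDNS/kube-dns) rather than with the tapper.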

Error updating tappers: 415: Unsupported Media Type

(Screenshot: 2021-10-15, 16:31:22)

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:26:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

mizu_logs_2021_10_19__13_53_54.zip

config: image-pull-policy: IfNotPresent

I modified the configuration file, but it didn't take effect

Mizu API Server status: Failed - Failed to pull image "gcr.io/up9-docker-hub/mizu/develop:0.21.8": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: unexpected EOF

I can't access gcr.io, so I downloaded the image locally.
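
For reference, the CLI reads its settings from ~/.mizu/config.yaml (the path shown in the debug output earlier on this page), and the resolved value appears in the "Final config" dump as ImagePullPolicyStr. A minimal sketch of the file, assuming the key name is exactly as quoted in the title above (note this overwrites the whole file, so merge by hand if you have other settings):

cat > ~/.mizu/config.yaml <<'EOF'
image-pull-policy: IfNotPresent
EOF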

Please make install timeout configurable or add the option to skip waiting entirely

Describe the bug

On a new local dev cluster, combined with my slow home internet, Mizu takes more than a minute to start.

namespace/mizu created
configmap/mizu-config created
serviceaccount/mizu-service-account created
clusterrole.rbac.authorization.k8s.io/mizu-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/mizu-cluster-role-binding created
role.rbac.authorization.k8s.io/mizu-role-daemon created
rolebinding.rbac.authorization.k8s.io/mizu-role-binding-daemon created
deployment.apps/mizu-api-server created
service/mizu-api-server created
Waiting for Mizu server to start...
mizu API server was not ready in time

Removing mizu resources

It looks like I'm hitting the timeout that's hardcoded here:

https://github.com/up9inc/mizu/blob/f5bacbd1eac2d2c0bd17857fac2e47eedb9d4e27/cli/cmd/installRunner.go#L104

If I retry the command, installation succeeds almost immediately because the relevant images are already pulled and cached by the kubelet.

To Reproduce
Steps to reproduce the behavior:

  1. Be in the United States and have Comcast internet as your only option 😢
  2. Install 0.21.58
  3. Create a brand new local cluster (I am using k3d)
  4. Run mizu install

This will fail due to a timeout. Wait 30 seconds and retry. Now it will succeed because the images finished pulling.

Expected behavior

I would really like the ability to configure this timeout myself, including the option of an indefinite timeout.

OR give me the option to skip waiting.

OR, if you're determined to check the status of Mizu's pods, use conditions to detect failure instead of a timeout.

Combining a default timeout with automatic rollback is really frustrating when trying to script installation, which is what I am doing.
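
Until the timeout is configurable, one possible workaround for scripted installs is a blunt retry loop, which works here because the second attempt finds the images already cached on the node. A sketch, assuming mizu install exits with a non-zero status when it times out and rolls back:

until mizu install; do
  echo "mizu install timed out, retrying once images are cached..."
  sleep 30
done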

Logs
Upload logs:

I followed the instructions here and it doesn't look like mizu install generates logs.

Desktop (please complete the following information):

  • OS: macOS - using K3d locally

Error "Couldn't connect to API server, check logs" though mizu pods are running in mizu namespace

Describe the bug
I installed Mizu 0.12.2 on a MacBook and ran mizu tap "^eric-mesh-gateways*". I got the error Couldn't connect to API server, check logs, even though I could see the Mizu pods, service, and daemonset running successfully in the mizu namespace:

eechens@EMB-Q6BUMD6N Bin % mizu tap "^eric-mesh-gateways*"
Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached.
Tapping pods in namespaces "mxe-senthil"
+eric-mesh-gateways-65d7667954-6x6wp
+eric-mesh-gateways-65d7667954-fkn5d
Couldn't connect to API server, check logs

Removing mizu resources
eechens@EMB-Q6BUMD6N Bin %
eechens@EMB-Q6BUMD6N ~ % kubectl get all -n mizu
NAME                               READY   STATUS    RESTARTS   AGE
pod/mizu-api-server                1/1     Running   0          6s
pod/mizu-tapper-daemon-set-bwk57   1/1     Running   0          5s
pod/mizu-tapper-daemon-set-zc4ws   1/1     Running   0          5s

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/mizu-api-server   ClusterIP   10.110.86.225   <none>        80/TCP    6s

NAME                                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/mizu-tapper-daemon-set   2         2         2       2            2           <none>          6s
eechens@EMB-Q6BUMD6N ~ %

To Reproduce
Steps to reproduce the behavior:

  1. Run mizu tap "^eric-mesh-gateways*"

Expected behavior
Expected mizu GUI to open in browser

Logs
mizu_logs_2021_08_31__10_48_41.zip

Screenshots
None

Desktop (please complete the following information):

  • OS: macOS Catalina
  • Browser: Chrome

Additional context

  • The cluster connectivity is over VPN.
  • kubectl access to the cluster works fine
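
Since plain kubectl works over the VPN, one way to narrow this down is to bypass the proxy the CLI sets up and port-forward to the service directly, then see whether it answers. A rough check, using the service name and port from the kubectl get all output above:

kubectl -n mizu port-forward service/mizu-api-server 8899:80
# in another terminal:
curl -v http://127.0.0.1:8899/

If the port-forward works but the CLI still cannot connect, the problem is likely in the proxy setup rather than in the API server pod.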

Spamming people to check out your Github?

I'm not sure where you got my work address from (it's not published anywhere), but spamming people to check out your service/GitHub is not the way to generate interest in your product. I would suggest a couple of Reddit posts or Medium articles, and being active in Stack Overflow's DevOps sections, to get more interest in this project.

I wouldn't have bothered coming here, but your unsubscribe link doesn't work, so I assume it just spams people over and over with no regard for the laws against unsolicited email that exist in most countries.

error: error in k8s watch

Hi,
I've started Mizu on my machine, but I get this error:

Waiting for Mizu Agent to start...
Watching API server events loop, error: error in k8s watch: the server could not find the requested resource (get events.events.k8s.io)
Mizu is available at http://localhost:8899
Watching tapper events loop, error: error in k8s watch: the server could not find the requested resource (get events.events.k8s.io)

Do you know how I can fix this?
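
This error usually means the cluster is not serving the events.k8s.io API group that the watch is asking for. A quick way to see which event APIs your cluster actually exposes (a diagnostic, not a fix):

kubectl api-versions | grep events.k8s.io
kubectl api-resources | grep -i events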

Ability to limit cluster level permissions necessary to run mizu

The minimum permissions needed to run Mizu in the cluster's default namespace are listed below:

- apiGroups:
  - ""
  - apps
  resources:
  - pods
  - services
  verbs:
  - list
  - get
  - create
  - delete
- apiGroups:
  - ""
  - apps
  resources:
  - daemonsets
  verbs:
  - list
  - get
  - create
  - patch
  - delete

However, when running in a shared cluster environment, having these permissions on the default namespace is not possible for a lot of reasons. It would be great if the cluster-level permissions could be restricted to a specific namespace.

Happy to discuss more and contribute.
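
As a starting point, the same rules could in principle be expressed as a namespace-scoped Role plus RoleBinding instead of a ClusterRole, so that the permissions only apply inside the namespace Mizu deploys into. A rough sketch (the role name is made up, and this assumes Mizu can be configured to operate only in that namespace):

kubectl apply -n mizu -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mizu-namespaced-role   # hypothetical name, not an official Mizu resource
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["list", "get", "create", "delete"]
- apiGroups: ["apps"]
  resources: ["daemonsets"]
  verbs: ["list", "get", "create", "patch", "delete"]
EOF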
