mondoohq / cnquery
open source, cloud-native, graph-based asset inventory
Home Page: https://cnquery.io
License: Other
Is your feature request related to a problem? Please describe.
It looks like os.updates
can't detect the Fedora 36 and Pop!_OS 22.04 LTS package managers:
mondoo> os.updates
[failed] os.updates
error: could not detect suiteable update manager for platform
The packages
resource on the other hand seems to work:
mondoo> packages.list
packages.list: [
0: package id = deb://accountsservice/0.6.55-0ubuntu14.1pop0~1648743814~22.04~a006f82/amd64
1: package id = deb://acl/2.3.1-1/amd64
...
Describe the solution you'd like
The OS update resource should work as it does for other distros. General Pop!_OS support was implemented in mondoohq/installer#126.
Describe the bug
I found that when using dot notation on a resource, like keypair.name vs keypair {name}, if the value is null the dot notation fails with: cannot cast resource to resource type: <nil>
To Reproduce
Steps to reproduce the behavior:
cnquery shell aws
aws.ec2.instances { keypair.name }
cnquery> aws.ec2.instances.where( publicIp != '' ) { keypair.name }
Query encountered errors:
1 error occurred:
* cannot cast resource to resource type: <nil>
aws.ec2.instances.where: [
0: {
keypair.name: "scottford"
}
...
5: {
keypair.name: cannot cast resource to resource type: <nil>
Expected behavior
I would expect keypair.name to return:
cnquery> aws.ec2.instances.where( publicIp != '' ) { keypair.name }
aws.ec2.instances.where: [
0: {
keypair.name: null
}
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
Additional context
Add any other context about the problem here.
Describe the bug
When scanning a Windows Nano container image, you get an error that no Linux OS was found.
To Reproduce
Steps to reproduce the behavior:
cnspec scan docker image openjdk:18.0-nanoserver --incognito
Expected behavior
Windows container image should be scanned
Desktop (please complete the following information):
Additional context
Nope
macos.security
fails with the following error:
cnquery> macos.security {*}
[failed] macos.security
error: 1 error occurred:
* the implementation is deprecated
cnquery> help macos.security
macos.security: macOS keychains and Security framework
authorizationDB dict: Deprecated: Authorization policy database
cnquery> macos.security { authorizationDB }
[failed] macos.security
error: 1 error occurred:
* the implementation is deprecated
The SSH host is set up with an authorized key. SSH authentication is seamless, but Mondoo cannot scan the host:
cnspec scan ssh 123.123.123.123 --incognito
→ loaded configuration from /Users/tsmith/.config/mondoo/mondoo.yml using source default
→ discover related assets for 1 asset(s)
! could not determine credentials for asset name=
! could not find keys in ssh agent
→ resolved assets resolved-assets=0
x could not resolve asset error="no authentication method defined" asset=123.123.123.123
FTL failed to run scan error="failed to resolve multiple assets"
I tested this with cnquery v7.
However, if I type the full resource, I do get autocompletion for its properties.
Seems to be related to #215
Describe the bug
If you install cnquery via the install script on a system that already has mondoo, the script fails when it detects that mondoo is already there.
To Reproduce
Steps to reproduce the behavior:
bash -c "$(curl -sSL https://install.mondoo.com/sh/cnquery)"
Expected behavior
It should run just fine
Screenshots
* Mondoo cnquery is already installed. Updating Mondoo...
* Upgrade Mondoo cnquery via 'brew upgrade'
Running `brew update --auto-update`...
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core and homebrew/cask).
You have 1 outdated formula installed.
You can upgrade it with brew upgrade
or list it with brew outdated.
Error: mondoohq/mondoo/cnquery not installed
The Mondoo cnquery install script encountered a problem. For assistance, please join our community Slack or find us on GitHub.
Desktop (please complete the following information):
I get a panic in my mondoo k8s shell if I use k8s.pod.podSpec. I would expect an error like: resource not usable without discovery flag.
→ loaded configuration from /home/user/.config/mondoo/mondoo.yml using source default
→ discover related assets for 1 asset(s)
→ use cluster name from kube config cluster-name=arn:aws:eks:us-east-2:921877552404:cluster/patrick-container-escape-demo-udb3-cluster
→ resolved assets resolved-assets=1
.-.
: :
,-.,-.,-. .--. ,-.,-. .-' : .--. .--. ™
: ,. ,. :' .; :: ,. :' .; :' .; :' .; :
:_;:_;:_;`.__.':_;:_;`.__.'`.__.'`.__.' interactive shell
mondoo> k8s.pod.podSpec
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x5ba73e2]
goroutine 257 [running]:
go.mondoo.com/cnquery/resources/packs/k8s.initNamespacedResource[...](0xc0004e4210, 0x7827120, 0x82a3f90?)
/home/user/go/pkg/mod/go.mondoo.com/[email protected]/resources/packs/k8s/common.go:232 +0x462
go.mondoo.com/cnquery/resources/packs/k8s.(*mqlK8sPod).init(0xc000f5b5d0?, 0x40f4df?)
/home/user/go/pkg/mod/go.mondoo.com/[email protected]/resources/packs/k8s/pod.go:59 +0x30
go.mondoo.com/cnquery/resources/packs/k8s.newK8sPod(0xc001ec73b0, 0x0?)
/home/user/go/pkg/mod/go.mondoo.com/[email protected]/resources/packs/k8s/k8s.lr.go:2344 +0xcf
go.mondoo.com/cnquery/resources.(*Runtime).CreateResourceWithID(0xc001ec73b0, {0xc00087f137, 0x7}, {0x0, 0x0}, {0xcbacc70, 0x0, 0x0})
/home/user/go/pkg/mod/go.mondoo.com/[email protected]/resources/runtime.go:155 +0x1c5
go.mondoo.com/cnquery/resources.(*Runtime).CreateResource(...)
/home/user/go/pkg/mod/go.mondoo.com/[email protected]/resources/runtime.go:186
go.mondoo.com/cnquery/llx.(*blockExecutor).createResource(0xc0003204d0, {0xc00087f137, 0x7}, 0x7?, 0x1?)
/home/user/go/pkg/mod/go.mondoo.com/[email protected]/llx/llx.go:652 +0xbf
go.mondoo.com/cnquery/llx.(*blockExecutor).runGlobalFunction(0xc0003204d0, 0xc000c4e5a0, 0xc9ede60, 0x7f66a00b29f0?)
/home/user/go/pkg/mod/go.mondoo.com/[email protected]/llx/llx.go:697 +0x2e5
go.mondoo.com/cnquery/llx.(*blockExecutor).runFunction(0xc00186c9d8?, 0xc000f5ba38?, 0x8a00000000000008?)
/home/user/go/pkg/mod/go.mondoo.com/[email protected]/llx/llx.go:723 +0x130
go.mondoo.com/cnquery/llx.(*blockExecutor).runChunk(0xc0003204d0, 0xc000f5ba58?, 0x52e6d4c?)
/home/user/go/pkg/mod/go.mondoo.com/[email protected]/llx/llx.go:754 +0x2a8
go.mondoo.com/cnquery/llx.(*blockExecutor).runRef(0xc00185d290?, 0x6ee3760?)
/home/user/go/pkg/mod/go.mondoo.com/[email protected]/llx/llx.go:779 +0xef
go.mondoo.com/cnquery/llx.(*blockExecutor).runChain(0xc0003204d0, 0x7e9535a?)
/home/user/go/pkg/mod/go.mondoo.com/[email protected]/llx/llx.go:811 +0xd4
go.mondoo.com/cnquery/llx.(*blockExecutor).run(0xc0003204d0)
/home/user/go/pkg/mod/go.mondoo.com/[email protected]/llx/llx.go:337 +0x2d7
go.mondoo.com/cnquery/llx.(*MQLExecutorV2).Run(0xc00187c4e0?)
/home/user/go/pkg/mod/go.mondoo.com/[email protected]/llx/llx.go:269 +0x59
go.mondoo.com/mondoo/policy/executor/internal.(*executionManager).executeCodeBundle(0xc000c4e6e0, 0xc001ec5400, 0x408bd1?, {0x0, 0x0})
/home/user/go/src/go.mondoo.io/mondoo/policy/executor/internal/execution_manager.go:168 +0x365
go.mondoo.com/mondoo/policy/executor/internal.(*executionManager).Start.func1()
/home/user/go/src/go.mondoo.io/mondoo/policy/executor/internal/execution_manager.go:87 +0x1cb
created by go.mondoo.com/mondoo/policy/executor/internal.(*executionManager).Start
/home/user/go/src/go.mondoo.io/mondoo/policy/executor/internal/execution_manager.go:56 +0x6a
We are temporarily deactivating the following tests, which need to be reactivated and fixed:
TestOperations_Equality
- requires all return values to be given and accessible to the test suite
The kernel.parameters resource does not work for running containers. To reproduce, start a new container:
docker run -it rockylinux
Then connect to the container with the shell; the response takes far too long or never returns:
cnquery shell docker 89e22f207cc6
→ loaded configuration from /Users/chris/.config/mondoo/mondoo.yml using source default
→ discover related assets for 1 asset(s)
→ resolved assets resolved-assets=1
.--. ,-.,-. .---..-..-. .--. .--. .-..-.™
' ..': ,. :' .; :: :; :' '_.': ..': :; :
`.__.':_;:_;`._. ;`.__.'`.__.':_; `._. ;
mondoo™ : : .-. :
:_: `._.' interactive shell
cnquery> kernel.parameters
Describe the bug
[19/10/22 02:08:01] ❯ AWS_REGION=us-east-1 AWS_PROFILE="vvdefault" cnquery shell aws ec2 ssm [email protected]
→ loaded configuration from /Users/vj/.config/mondoo/mondoo.yml using source default
→ discover related assets for 1 asset(s)
→ resolved assets resolved-assets=0
x could not connect to asset error="operation error EC2: DescribeInstances, https response error StatusCode: 400, RequestID: 1164b9f9-09b7-43ee-b352-ec23afd26700, api error MissingParameter: The request must contain the parameter InstanceId" asset=
FTL could not resolve assets
vj@vj-macpro ~
[19/10/22 02:08:11] ❯ AWS_REGION=us-east-1 AWS_PROFILE="vvdefault" cnquery shell aws ec2 ssm ec2-user@i-0fd8fc27329747cca
→ loaded configuration from /Users/vj/.config/mondoo/mondoo.yml using source default
→ discover related assets for 1 asset(s)
→ resolved assets resolved-assets=0
x could not connect to asset error="operation error EC2: DescribeInstances, https response error StatusCode: 400, RequestID: 4c11312c-cf84-493f-a4d8-91af192018ac, api error MissingParameter: The request must contain the parameter InstanceId" asset=
FTL could not resolve assets
trying with instance connect works:
vj@vj-macpro ~
[19/10/22 02:08:19] ❯ AWS_REGION=us-east-1 AWS_PROFILE="vvdefault" cnquery shell aws ec2 instance-connect ec2-user@i-0fd8fc27329747cca
→ loaded configuration from /Users/vj/.config/mondoo/mondoo.yml using source default
→ discover related assets for 1 asset(s)
→ resolved assets resolved-assets=1
___ _ __ __ _ _ _ ___ _ __ _ _
/ __| '_ \ / _` | | | |/ _ \ '__| | | |
| (__| | | | (_| | |_| | __/ | | |_| |
\___|_| |_|\__, |\__,_|\___|_| \__, |
mondoo™ |_| |___/ interactive shell
cnquery>
Describe the bug
Running the MQL query:
aws.applicationAutoscaling {*}
cnquery> aws.applicationAutoscaling { * }
Query encountered errors:
failed to create resource 'aws.applicationAutoscaling': namespace required: "aws.applicationAutoscaling" failed: no value provided for static field "namespace"
aws.applicationAutoscaling: no data available
To Reproduce
Steps to reproduce the behavior:
cnquery shell aws
aws.applicationAutoscaling { * }
Desktop (please complete the following information):
cnquery 7.0.0 (0e7dc2e6, 2022-10-18T19:15:48Z)
Additional context
Add any other context about the problem here.
This all started when I was trying to write a test that verifies a recent bug that was fixed in 6.17.1, the test in particular:
https://github.com/mondoohq/cnquery/pull/120/files#diff-582761cfd929b951df886476c391e4d0534ccc99cd509df02d07e35dddfd4e7e
It turns out that this bug only reproduces if we go through the full flow of what the shell does -> inventory manager that performs discovery -> motor creation -> query execution. If we skipped a step (say, created the motor directly), the issue wouldn't show up. This led me to look at this file, which is where the whole flow causing the issue lives:
https://github.com/mondoohq/cnquery/blob/main/apps/cnquery/cmd/shell_run.go#L36
Looking at this, I notice that the asset resolving/motor connection is tied to the shell; I think we should separate this from the shell. Ideally there should be a function that takes a configuration and returns an asset that can be connected to. Then the shell can be simplified to just (pseudocode):
1. Parsing shell params into inventory config
2. Call GetAssetToConnectTo (for a lack of a better name right now) with the config
3. Open a new shell with the asset's motor.
This will also make testing easier.
When running an admission controller scan the following warning appears:
namespace "" not found in cluster
I suppose that warning should only show up when there is a namespace filter specified. When there is no namespace filter, there should be no warning either.
Unless we're going to use it we should turn off GH Projects.
We activated the asset discovery as default for k8s. This works great for:
cnquery explore k8s # nothing is provided, therefore the value of discover is `auto`
cnspec explore k8s # nothing is provided, therefore the value of discover is `auto`
We identified a few shortcomings with the shell and run subcommands:
cnquery shell k8s
→ discovered 10 asset(s)
name: kube-system/coredns (k8s-object)
platform-id: //platformid.api.mondoo.app/runtime/k8s/uid/a184e66f-32ba-4034-ab1e-2ae9fa1724a9/namespace/kube-system/deployments/name/coredns
name: kube-system/coredns-64897985d-b7t87 (k8s-object)
platform-id: //platformid.api.mondoo.app/runtime/k8s/uid/a184e66f-32ba-4034-ab1e-2ae9fa1724a9/namespace/kube-system/pods/name/coredns-64897985d-b7t87
...
Since the shell is by design interactive, it should prompt the user to pick the asset from a list. This would make it much easier to interact with many assets.
cnquery shell k8s
Instead of asking the user to specify the target asset, we should just run the query on all assets:
spacerocket.local:..ce/cnquery ±> cnquery run k8s --query "k8s.pod.name" --all-namespaces --platform-id //platformid.api.mondoo.app/runtime/k8s/uid/a184e66f-32ba-4034-ab1e-2ae9fa1724a9/namespace/kube-system/pods/name/kube-controller-manager-minikube
→ discover related assets for 1 asset(s)
→ use cluster name from kube config cluster-name=minikube
→ resolved assets resolved-assets=10
k8s.pod.name: "kube-controller-manager-minikube"
k8s.pod.name: "storage-provisioner"
Things we need to do to achieve that:
- auto for all discovery resolvers
Describe the bug
The first two characters of some (but not all) resource descriptions are cut off.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
All descriptions should be complete.
Screenshots
yum: Yum package manager resource
repo yum.repo:
repos []yum.repo: st of all configured yum repositories
vars map[string]string: riables defined built-in in Yum configuration files (/etc/yum.conf and all .repo files in the /etc/yum.repos.d/)
Desktop (please complete the following information):
Additional context
The comments are correct in the source code
Describe the bug
When a user gives a manifest to cnquery on the CLI we create a manifest name from that, but it doesn't match that file provided in any way that would help the user later identify what was scanned.
To Reproduce
Steps to reproduce the behavior:
cnquery shell k8s ~/dev/example.yml
dev
: K8S Manifest dev (kubernetes)
Expected behavior
It should probably be example.yml
Screenshots
N/A
Desktop (please complete the following information):
N/A
Additional context
N/A
Follow-up to
#261
for the test that was deactivated.
It currently leads to non-assertion output and is not deterministic:
https://github.com/mondoohq/cnquery/actions/runs/3238668540/jobs/5307137769
We'd want a mix of the assertion and data output here. This is also true for other controls which are successful, but also produce data output. In essence:
System Software Overview:
System Version: macOS 12.6 (21G115)
Kernel Version: Darwin 21.6.0
macos.userPreferences
[failed] macos.userPreferences
error: plist: error parsing XML property list: parsing time "2147483647-07-28T00:00:00Z" as "2006-01-02T15:04:05Z07:00": cannot parse "483647-07-28T00:00:00Z" as "-"
All these functions are almost identical:
We should rewrite them so that there is one definition of the function that is reused for all those specific types. This was already done for the init functions of the k8s MQL resources. It should be possible for this case as well. See here for a generic function reused by all workload types
I am trying to query all of my GCP compute instances, but I am getting different results from the gcloud CLI and cnquery:
gcloud compute instances list --filter="status:running"
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
instance-edge us-central1-a e2-medium 10.128.15.234 x.x.x.x RUNNING
windows-gitlab us-central1-a e2-medium 10.128.15.229 x.x.x.x RUNNING
terraform-instance us-central1-c f1-micro 10.128.0.2 x.x.x.x RUNNING
gcloud.compute.instances.where( status == "RUNNING" ) { name }
gcloud.compute.instances.where: [
0: {
name: "instance-edge"
}
1: {
name: "windows-gitlab"
}
]
I have exited and re-entered the shell multiple times. I have also validated the results via the Google Cloud Console.
Describe the bug
→ run policies for asset asset=i-032302d0105f9c9c8
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x4a90f20]
goroutine 127 [running]:
go.mondoo.com/cnquery/resources/packs/os/smbios.(*LinuxSmbiosManager).Info.func1({0x6eb3ab2, 0x12}, {0x0, 0x0}, {0x150?, 0x69104c0?})
/go/pkg/mod/go.mondoo.com/[email protected]/resources/packs/os/smbios/linux.go:29 +0x60
To Reproduce
This was a filesystem scan (mondoo scan aws ec2 ebs instanceid) on an EC2 instance in AWS (ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20220609)
Mistyping a character can make the whole of cnquery panic and crash. Reproduce by:
cnquery shell local
and then typing users(uid:09; it will immediately panic, as we seem to parse input without delay here.
cnquery> users(uid: 0panic: Failed to parse integer: strconv.ParseInt: parsing "09": invalid syntax
goroutine 1 [running]:
go.mondoo.com/cnquery/mqlc/parser.(*parser).parseValue(0x14000c3f3c0)
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/mqlc/parser/parser.go:319 +0x484
go.mondoo.com/cnquery/mqlc/parser.(*parser).parseOperand(0x14000c3f3c0)
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/mqlc/parser/parser.go:474 +0x28
go.mondoo.com/cnquery/mqlc/parser.(*parser).parseExpression(0x14000c3f3c0)
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/mqlc/parser/parser.go:823 +0x68
go.mondoo.com/cnquery/mqlc/parser.(*parser).parseArg(0x14000c3f3c0)
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/mqlc/parser/parser.go:366 +0x21c
go.mondoo.com/cnquery/mqlc/parser.(*parser).parseOperand(0x14000c3f3c0)
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/mqlc/parser/parser.go:553 +0xda8
go.mondoo.com/cnquery/mqlc/parser.(*parser).parseExpression(0x14000c3f3c0)
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/mqlc/parser/parser.go:823 +0x68
go.mondoo.com/cnquery/mqlc/parser.Parse({0x14000377cd0, 0xd})
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/mqlc/parser/parser.go:873 +0x16c
go.mondoo.com/cnquery/mqlc.compile({0x14000377cd0?, 0x2?}, 0x2?, 0x14000c3f402?)
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/mqlc/mqlc.go:1521 +0x34
go.mondoo.com/cnquery/mqlc.Compile({0x14000377cd0, 0xd}, 0xd?, {0x10f352d18?, 0x7200000065?, 0x2800000073?}, 0x6900000075?)
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/mqlc/mqlc.go:1558 +0x64
go.mondoo.com/cnquery/cli/shell.(*Completer).CompletePrompt(0x14000350c60, {{0x14000377cc0, 0xd}, 0xd, 0x56})
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/cli/shell/completer.go:40 +0x100
github.com/c-bata/go-prompt.(*CompletionManager).Update(...)
/Users/preslavgerchev/go/pkg/mod/github.com/c-bata/[email protected]/completion.go:68
github.com/c-bata/go-prompt.(*Prompt).Run(0x140012e8000)
/Users/preslavgerchev/go/pkg/mod/github.com/c-bata/[email protected]/prompt.go:99 +0x590
go.mondoo.com/cnquery/cli/shell.(*Shell).RunInteractive(0x140012c8000, {0x0, 0x0})
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/cli/shell/shell.go:190 +0x780
go.mondoo.com/cnquery/apps/cnquery/cmd.StartShell(0x140009c2cc0)
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/apps/cnquery/cmd/shell_run.go:95 +0x5e4
go.mondoo.com/cnquery/apps/cnquery/cmd.glob..func8(0x0?, {0x10f7e8950?, 0x0?, 0x0?}, 0x0?, 0x0?)
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/apps/cnquery/cmd/shell.go:230 +0x64
go.mondoo.com/cnquery/apps/cnquery/cmd/builder.localProviderCmd.func1(0x14001121400?, {0x10f7e8950?, 0x0?, 0x0?})
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/apps/cnquery/cmd/builder/builder.go:186 +0x30
github.com/spf13/cobra.(*Command).execute(0x14001121400, {0x10f7e8950, 0x0, 0x0})
/Users/preslavgerchev/go/pkg/mod/github.com/spf13/[email protected]/command.go:876 +0x4b8
github.com/spf13/cobra.(*Command).ExecuteC(0x10f5f5880)
/Users/preslavgerchev/go/pkg/mod/github.com/spf13/[email protected]/command.go:990 +0x354
github.com/spf13/cobra.(*Command).Execute(...)
/Users/preslavgerchev/go/pkg/mod/github.com/spf13/[email protected]/command.go:918
go.mondoo.com/cnquery/apps/cnquery/cmd.Execute()
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/apps/cnquery/cmd/root.go:39 +0x28
main.main()
/Users/preslavgerchev/go/src/go.mondoo.com/cnquery/apps/cnquery/cnquery.go:6 +0x1c
I think we can make this a bit friendlier so it doesn't crash (or delay input parsing somehow?)
To clarify the crash: the leading 0 means cnquery expects octal input, and 9 is not a valid octal digit, hence the crash.
When executing the shell with no additional arguments, the error is cryptic to a new user. We should catch this scenario and print a message like "No asset specified; did you mean 'cnquery shell local'?"
Here's how it looks today:
$ cnquery shell
→ loaded configuration from xxxxxxxxxxxx using source $MONDOO_CONFIG_PATH
→ discover related assets for 0 asset(s)
→ resolved assets resolved-assets=0
FTL could not find an asset that we can connect to
Is your feature request related to a problem? Please describe.
cnquery
does not make it clear that the transport supports GCP organizations and projects.
gcp Scan a Google Cloud Platform (GCP) account
It should be expected that users of cnquery
may have more than one organization, and/or multiple projects within a given organization. Additionally, Google Cloud Hierarchy has a concept of folders that "provide an additional grouping mechanism and isolation boundaries between projects."
cnquery
for GCP could be improved by adding additional functionality for scanning organizations, projects, and potentially GCP folders as well.
The gcloud
CLI has the functionality to create multiple configurations and to switch between those configurations to target actions on a given Organization or Project. See Managing gcloud CLI properties in Google Cloud's documentation.
The gcloud CLI can also be configured using environment variables that follow the pattern CLOUDSDK_SECTION_NAME_PROPERTY_NAME. The environment variables take precedence over property values set using gcloud config set. See Setting properties using environment variables in the Google Cloud docs.
Describe the solution you'd like
cnquery scan gcp
should add the following functionality:
cnquery scan gcp org <org_id> - this should also support CLOUDSDK_ environment variables and the current active config.
Organization scans should support --all projects within an organization, as well as a specific list of projects using the --inventory-file.
cnquery scan gcp project <project_id> - this should also support CLOUDSDK_ environment variables and the current active config. Additionally, we should support --inventory-file to provide a specific list of project IDs.
Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
Additional context
Add any other context or screenshots about the feature request here.
Describe the bug
I'm trying to scan a Linux host over SSH, and that requires using sudo to elevate my privileges. I assumed I should just add the --sudo flag to my scan ssh command, but it's not working and not producing any errors either.
To Reproduce
Steps to reproduce the behavior:
cnspec scan ssh SOME_HOST_IP --incognito --sudo
Expected behavior
Either an error that passwordless sudo is not enabled or a prompt for the sudo password.
Desktop (please complete the following information):
I have an RKE cluster and controls that validate the kubelet config are failing with the following error:
failed to create resource 'k8s.kubelet': error when getting file content: open /mnt/host/var/lib/kubelet/config.yaml: no such file or directory; failed to create resource 'k8s.kubelet': error when getting file content: open /mnt/host/var/lib/kubelet/config.yaml: no such file or directory
Describe the bug
cnquery> aws.rds.dbClusters {*}
Query encountered errors:
1 error occurred:
* failed to validate resource 'aws.rds.snapshot': Initialized "aws.rds.snapshot" resource without a "tags". This field is required.
aws.rds.dbClusters: [
0: {
region: "us-west-2"
tags: {
git_file: "terraform/aws/neptune.tf"
git_org: "lunalectric-mgmtlabs"
}
snapshots: failed to validate resource 'aws.rds.snapshot': Initialized "aws.rds.snapshot" resource without a "tags". This field is required.
members: [
aws.rds.dbinstance arn="arn:aws:rds:us-west-2:177043759486:db:tf-20221021184537453200000003"
]
id: "neptunedb1"
arn: "arn:aws:rds:us-west-2:177043759486:cluster:neptunedb1"
}
]
Describe the bug
When the cnquery install script fails, it sends you to Slack or the Mondoo client GitHub repo.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
It should mention cnquery and send you to the cnquery repo instead.
Screenshots
~/dev bash -c "$(curl -sSL https://install.mondoo.com/sh/cnquery)"
Mondoo Installer
.-.
: :
,-.,-.,-. .--. ,-.,-. .-' : .--. .--. ™
: ,. ,. :' .; :: ,. :' .; :' .; :' .; :
:_;:_;:_;`.__.':_;:_;`.__.'`.__.'`.__.
Welcome to the Mondoo installer. We will auto-detect
your operating system to determine the best installation method.
If you experience any issues, please reach us at:
* Mondoo Community Slack https://mondoo.link/slack
The source code of this installer is available on GitHub:
* GitHub: https://github.com/mondoohq/client
* Mondoo cnquery is already installed. Updating Mondoo...
* Upgrade Mondoo cnquery via 'brew upgrade'
Running `brew update --auto-update`...
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core and homebrew/cask).
You have 1 outdated formula installed.
You can upgrade it with brew upgrade
or list it with brew outdated.
Error: mondoohq/mondoo/cnquery not installed
The Mondoo cnquery install script encountered a problem. For assistance, please join our community Slack or find us on GitHub.
* Mondoo Community Slack https://mondoo.link/slack
* GitHub: https://github.com/mondoohq/client
Describe the bug
The id for namespaces looks strange:
mondoo> k8s.namespaces{ id }
k8s.namespaces: [
0: {
id: "namespace::default"
}
1: {
id: "namespace::kube-node-lease"
}
2: {
id: "namespace::kube-public"
}
3: {
id: "namespace::kube-system"
}
]
Looks like something is missing between the two colons.
To Reproduce
Steps to reproduce the behavior:
k8s.namespaces{ id }
Expected behavior
I would expect only one colon to separate type and name.
Additional context
Looks like we have to distinguish between namespaced and non-namespaced objects:
cnquery/resources/packs/k8s/common.go
Line 165 in a5bce55
When the user tries to scan a host over SSH that requires a password, but has not supplied one or specified --ask-pass, we give a pretty cryptic message:
cnspec scan ssh [email protected] --incognito
→ loaded configuration from /Users/tsmith/.config/mondoo/mondoo.yml using source default
→ discover related assets for 1 asset(s)
→ resolved assets resolved-assets=0
x could not resolve asset error="ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password], no supported methods remain" asset=123.123.123.123
FTL failed to run scan error="failed to resolve multiple assets"
We should help the user out here by suggesting how they can properly authenticate to this asset.
Describe the bug
cnquery> macos.security
macos.security: macos.security id = macos.security
cnquery> macos.security {*}
Query encountered errors:
1 error occurred:
* the implementation is deprecated
macos.security: {
authorizationDB: the implementation is deprecated
}
Desktop (please complete the following information):
platform: {
arch: "arm64"
title: "macOS, bare metal"
release: "12.6"
}
Additional context
Add any other context about the problem here.
On my Ubiquiti Ubios system when scanning over ssh I get the following panic:
./mondoo scan ssh [email protected] --ask-pass --verbose
Enter password:
DBG check if we got the scan config from pipe isNamedPipe=false isTerminal=true size=0
DBG parse url url=ssh://[email protected]
DBG load ssh identity key=/Users/tsmith/.ssh/id_rsa
DBG load ssh identity key=/Users/tsmith/.ssh/id_ed25519
→ Mondoo 6.14.0-19 (Space: "//captain.api.mondoo.app/spaces/practical-visvesvaraya-957532", Service Account: "2BuCJL2GJxKhS64EdL5T8MN5bMF", Managed Client: "2BuCJOxCitOI8G85nQheCAI5AhK")
→ loaded configuration from /Users/tsmith/.config/mondoo/mondoo.yml using source default
DBG execute policies
DBG local> run command uname -s
DBG local> run command uname -m
DBG local> run command /usr/bin/sw_vers
DBG platform> detected os family=["darwin","bsd","unix","os"] platform=macos
DBG local> run command hostname
DBG local> run command hostname
DBG initialize client authentication issuer=mondoo/ams kid=//agents.api.mondoo.app/spaces/practical-visvesvaraya-957532/serviceaccounts/2BuCJL2GJxKhS64EdL5T8MN5bMF
DBG local> run command hostname
DBG local> run command hostname
DBG successful health check
→ discover related assets for 1 asset(s)
DBG run resolver resolver="Standard Resolver" resolver-id=ssh
DBG establish motor connection
DBG fetch secret from vault secret-id=2EP0kH6c4kZbGqDZ1nJ5WGYtumw vault="In-Memory Vault"
DBG fetch secret from vault secret-id=2EP0kNBTJv1NaHazj7l8nG6Il2i vault="In-Memory Vault"
DBG fetch secret from vault secret-id=2EP0kHiXYkM0U16DsHjm5DwvjjJ vault="In-Memory Vault"
DBG fetch secret from vault secret-id=2EP0kLC5cuYa0WOPoT8qTisVlj3 vault="In-Memory Vault"
DBG connection> load ssh transport
DBG load ssh known_hosts file file=/Users/tsmith/.ssh/known_hosts
DBG enabled ssh password authentication
DBG enabled ssh private key authentication
DBG enabled ssh private key authentication
DBG enabled ssh agent authentication
DBG ssh agent socket found socket=/private/tmp/com.apple.launchd.xa8Bo3TVA8/Listeners
! could not find keys in ssh agent
DBG discovered ssh auth methods methods=2
DBG skip hostkey check the hostkey since the algo is not supported yet
DBG ssh session established host=172.16.1.1 port=22 provider=ssh server=SSH-2.0-dropbear_2018.76
DBG run command command="echo 'hi'" provider=ssh
DBG run command command="uname -s" provider=ssh
DBG run command command="uname -m" provider=ssh
→ use scp instead of sftp
DBG initialized ssh filesystem file-transfer=scpfs
DBG platform> cannot parse lsb config on this linux system error="failed to read scp message header: err=failed to write scp replyOK reply: err=EOF"
DBG run command command="uname -m" provider=ssh
DBG run command command="uname -s" provider=ssh
DBG run command command="test -e /bin/busybox" provider=ssh
DBG run command command="stat -L /bin/busybox --printf '%s\n%f\n%u\n%g\n%X\n%Y\n%C'" provider=ssh
DBG could not parse file stat information path=/bin/busybox
DBG platform> we do not know the linux system, but we do our best in guessing
DBG platform> detected os family=["linux","unix","os"] platform=ubios
DBG run command command=hostname provider=ssh
DBG unable to read /sys/class/dmi/id/product_version error="failed to read scp message header: err=failed to write scp replyOK reply: err=EOF"
DBG unable to read /sys/class/dmi/id/product_name error="failed to read scp message header: err=failed to write scp replyOK reply: err=EOF"
DBG unable to read /sys/class/dmi/id/sys_vendor error="failed to read scp message header: err=failed to write scp replyOK reply: err=EOF"
DBG unable to read /sys/class/dmi/id/bios_vendor error="failed to read scp message header: err=scp: /sys/class/dmi/id/bios_vendor: No such file or directory\n"
DBG detected platform ids id-detector=["transport-platform-id","hostname","cloud-detect"] platform-ids=["//platformid.api.mondoo.app/runtime/ssh/hostkey/SHA256-t+bvFvdw9LEH3T4UFWZ3XJs8/RBn7prLqjUzGl7PLf4","//platformid.api.mondoo.app/hostname/pdx-udm",""]
DBG run command command=hostname provider=ssh
→ resolved assets resolved-assets=1
DBG synchronize asset found=1
→ establish connection to asset pdx-udm (unknown)
DBG establish connection to asset connection=ssh://172.16.1.1 insecure=false
DBG establish motor connection
DBG fetch secret from vault secret-id=2EP0kH6c4kZbGqDZ1nJ5WGYtumw vault="In-Memory Vault"
DBG fetch secret from vault secret-id=2EP0kNBTJv1NaHazj7l8nG6Il2i vault="In-Memory Vault"
DBG fetch secret from vault secret-id=2EP0kHiXYkM0U16DsHjm5DwvjjJ vault="In-Memory Vault"
DBG fetch secret from vault secret-id=2EP0kLC5cuYa0WOPoT8qTisVlj3 vault="In-Memory Vault"
DBG connection> load ssh transport
DBG load ssh known_hosts file file=/Users/tsmith/.ssh/known_hosts
DBG enabled ssh password authentication
DBG enabled ssh private key authentication
DBG enabled ssh private key authentication
DBG enabled ssh agent authentication
DBG ssh agent socket found socket=/private/tmp/com.apple.launchd.xa8Bo3TVA8/Listeners
! could not find keys in ssh agent
DBG discovered ssh auth methods methods=2
DBG skip hostkey check the hostkey since the algo is not supported yet
DBG ssh session established host=172.16.1.1 port=22 provider=ssh server=SSH-2.0-dropbear_2018.76
DBG run command command="echo 'hi'" provider=ssh
DBG established connection
→ run policies for asset asset=pdx-udm
DBG request policies for asset asset=//assets.api.mondoo.app/spaces/practical-visvesvaraya-957532/assets/2EOzIIqaMM7r20kfrACErb2hnja
DBG marketplace> fetch policy bundle from upstream policy=//assets.api.mondoo.app/spaces/practical-visvesvaraya-957532/assets/2EOzIIqaMM7r20kfrACErb2hnja req-id=global
DBG store policy mrn=//policy.api.mondoo.app/policies/kubernetes-apps owner=//policy.api.mondoo.app req-id=global uid=
DBG invalidate policy cache policy=//policy.api.mondoo.app/policies/kubernetes-apps
DBG store policy mrn=//policy.api.mondoo.app/policies/mondoo-linux-security-baseline owner=//policy.api.mondoo.app req-id=global uid=
DBG invalidate policy cache policy=//policy.api.mondoo.app/policies/mondoo-linux-security-baseline
DBG store policy mrn=//policy.api.mondoo.app/policies/mondoo-macos-security-baseline owner=//policy.api.mondoo.app req-id=global uid=
DBG invalidate policy cache policy=//policy.api.mondoo.app/policies/mondoo-macos-security-baseline
DBG store policy mrn=//policy.api.mondoo.app/policies/mondoo-windows-security-baseline owner=//policy.api.mondoo.app req-id=global uid=
DBG invalidate policy cache policy=//policy.api.mondoo.app/policies/mondoo-windows-security-baseline
DBG store policy mrn=//policy.api.mondoo.app/spaces/practical-visvesvaraya-957532/policies/cis-apple-macos-12-0-benchmark-l1 owner=//captain.api.mondoo.app/spaces/practical-visvesvaraya-957532 req-id=global uid=
DBG invalidate policy cache policy=//policy.api.mondoo.app/spaces/practical-visvesvaraya-957532/policies/cis-apple-macos-12-0-benchmark-l1
DBG store policy mrn=//policy.api.mondoo.app/policies/mondoo-dns-baseline owner=//policy.api.mondoo.app req-id=global uid=
DBG invalidate policy cache policy=//policy.api.mondoo.app/policies/mondoo-dns-baseline
DBG store policy mrn=//policy.api.mondoo.app/policies/cis-amazon-eks-level-1 owner=//policy.api.mondoo.app req-id=global uid=
DBG invalidate policy cache policy=//policy.api.mondoo.app/policies/cis-amazon-eks-level-1
DBG store policy mrn=//policy.api.mondoo.app/policies/platform-eol owner=//policy.api.mondoo.app req-id=global uid=
DBG invalidate policy cache policy=//policy.api.mondoo.app/policies/platform-eol
DBG store policy mrn=//policy.api.mondoo.app/policies/asset-overview owner=//policy.api.mondoo.app req-id=global uid=
DBG invalidate policy cache policy=//policy.api.mondoo.app/policies/asset-overview
DBG store policy mrn=//policy.api.mondoo.app/policies/mondoo-aws-baseline owner=//policy.api.mondoo.app req-id=global uid=
DBG invalidate policy cache policy=//policy.api.mondoo.app/policies/mondoo-aws-baseline
DBG store policy mrn=//policy.api.mondoo.app/policies/platform-vulnerability owner=//policy.api.mondoo.app req-id=global uid=
DBG invalidate policy cache policy=//policy.api.mondoo.app/policies/platform-vulnerability
DBG store policy mrn=//policy.api.mondoo.app/policies/cis-kubernetes-level-1 owner=//policy.api.mondoo.app req-id=global uid=
DBG invalidate policy cache policy=//policy.api.mondoo.app/policies/cis-kubernetes-level-1
DBG store policy mrn=//captain.api.mondoo.app/spaces/practical-visvesvaraya-957532 owner=//policy.api.mondoo.app req-id=global uid=
DBG invalidate policy cache policy=//captain.api.mondoo.app/spaces/practical-visvesvaraya-957532
DBG store policy mrn=//assets.api.mondoo.app/spaces/practical-visvesvaraya-957532/assets/2EOzIIqaMM7r20kfrACErb2hnja owner=//captain.api.mondoo.app/spaces/practical-visvesvaraya-957532 req-id=global uid=
DBG invalidate policy cache policy=//assets.api.mondoo.app/spaces/practical-visvesvaraya-957532/assets/2EOzIIqaMM7r20kfrACErb2hnja
DBG marketplace> fetched policy bundle from upstream policy=//assets.api.mondoo.app/spaces/practical-visvesvaraya-957532/assets/2EOzIIqaMM7r20kfrACErb2hnja req-id=global
DBG client> got policy bundle
DBG client> got policy filters
DBG starting query execution qrid=ZB4B5C2LHO0=
DBG run command command="uname -s" provider=ssh
DBG run command command="uname -m" provider=ssh
→ use scp instead of sftp
DBG initialized ssh filesystem file-transfer=scpfs
DBG platform> cannot parse lsb config on this linux system error="failed to read scp message header: err=scp: /etc/lsb-release: No such file or directory\n"
DBG run command command="uname -m" provider=ssh
DBG run command command="uname -s" provider=ssh
DBG run command command="test -e /bin/busybox" provider=ssh
DBG run command command="stat -L /bin/busybox --printf '%s\n%f\n%u\n%g\n%X\n%Y\n%C'" provider=ssh
DBG could not parse file stat information path=/bin/busybox
DBG platform> we do not know the linux system, but we do our best in guessing
DBG platform> detected os family=["linux","unix","os"] platform=ubios
DBG PT786ExRhZswJLT/7pNdxqUTpBQO8am14XfU5rnWvNXGWCO5qQ4iK9lw8R9Ko4n6lUX4M5nUBtAmAAdWr4qlmg== finished
DBG JYVmcYJg895hFnTpkKUJUTZtfqqeM9yTB4193+q5/oX9WnT0lO7i3BFYDfg9v/uXVI8QTEVgoEJc9xfpp0IrYg== finished
DBG ttv3L9qepQVp0h6DHTu/9VLqjzLkuFgY+3FN/RawopxTjxAVd0zFBECK+U+Te26dmSx0tC5SPIrGYSnnN0R2Gg== finished
DBG K19jsTwWlK1ebD2dwcFAZCRnuwmm4EIqaz2X1ANtBclN3jffWYQf9zwC8sYjaoC+/ikeC0h/M6HRE6NCU7o/uQ== finished
DBG finished query execution qrid=ZB4B5C2LHO0=
DBG TAToVilql1FbQyKohlRbAq5K3BTj1UFi+R4IY1YvGOFApxPO27ogQ6ezgRmFZzzCCIYXSIHboZMK3PPE2FIlHw== finished
DBG starting query execution qrid=9zkS3teYP4Q=
DBG yUHOZ/pJzgQ3FLcnKAPphE4TgWqFptqPWA8GYl4e5Dqg0/YzQWcDml2cbrTEj3nj1rm0azm9povOYMRjTgSvZg== finished
DBG ph+xJ13L9rWnyxqrSjTNUt1C+BJdogBsCwpAYsX2bvfG6HaP1je65obm7jsx3FB/Vn9CTYbi491repoW8B1KNQ== finished
DBG EpnHIF31KeNgY/3Z4KyBuKHQ0kk/i+MyYbTX+ZWiQIAvK6lv4P2Nlf9CKAIrn2KOfCWICteI96BN1e8GA6sNZA== finished
DBG finished query execution qrid=9zkS3teYP4Q=
DBG 0XkZYWgnmPXgpTtYNpzJd3FZ0un8AdahxtbsfudPgM2BqRxjpGfgFv5aHNDxqDB7Es1Ok8xmjsZ7mJYqlw/7zw== finished
DBG starting query execution qrid=FXLvS1MK0FM=
DBG finished query execution qrid=FXLvS1MK0FM=
DBG starting query execution qrid=oTdPQFzPjZM=
DBG finished query execution qrid=oTdPQFzPjZM=
DBG starting query execution qrid=91oXv/HZs98=
DBG 4NypLBgCzGKp8+tbdT/gUJ+DJQnOLUjY5t/l+z7jlUS8qrVz5SKLbSR3MyTZC+D1Kz0qDqRaDmo3O/j6tp9AMQ== finished
DBG JToOi1f+kCtu3UmLVxqCPa1qOtMPOrLeAchLXIxZ7cSOy2JGns9sZNZ3fgN5pXYil2KvjODdHuoM0/I+Aang+g== finished
DBG finished query execution qrid=91oXv/HZs98=
DBG starting query execution qrid=xqECC5zypwY=
DBG finished query execution qrid=xqECC5zypwY=
DBG starting query execution qrid=pWyd4yKxb38=
DBG finished query execution qrid=pWyd4yKxb38=
DBG starting query execution qrid=ZJjc7U9LYqQ=
DBG finished query execution qrid=ZJjc7U9LYqQ=
DBG starting query execution qrid=LYI2Dm4MwkQ=
DBG finished query execution qrid=LYI2Dm4MwkQ=
DBG starting query execution qrid=APkQr5eqclQ=
DBG finished query execution qrid=APkQr5eqclQ=
DBG starting query execution qrid=2ZmNd5/fTAg=
DBG finished query execution qrid=2ZmNd5/fTAg=
DBG starting query execution qrid=NVBCj4YrLuA=
DBG finished query execution qrid=NVBCj4YrLuA=
DBG starting query execution qrid=NjgN0od6gI4=
DBG mCaw5JvXj3A1W+Lk0284EFNBrEp05+Jb2ZSB16BFHGCxuQ0zmwv8RZtF6xUR85tCU76nNboLsymjjkXmBpnXgQ== finished
DBG snnJ3BIhsHaDtAmtuZqvij3C98T7s5Og0EVrzwf0EMNeNPbEEKAVBJ6BHyKeZ+w8dicf88jo6Moo96Uymb/zLg== finished
DBG GDrHxkI7Z9TIsM9EhgzVWfI6mFlkF66kE3ohBEzvdjEuBZMERezzyVW3kDy+WBnra0xifTmDYsOZwMXW8GHtoQ== finished
DBG nt/RGL1ZAuxcExyktmOvkzCyFId991vMfjjk+5xvF3PCMKPSYS6N07y7GV45IEm4XqfAKpY5c/ubPBcDywJ8Xg== finished
DBG 9zjLAQG+sPNAc20wn9CUP8NNth0mOdTSfFS58QSI6pOe4/EjCBDF3SosLEU8BcVud3nEREGZ/HXCuAjLC4U56w== finished
DBG 1l4QRTSN+gDhOdmznR3JZh9GH1a2bx+rjrNHGmhR6d5COB+mGRCIZWhVPG38r7HmHFvrKyM9k5TwJRtDBcWL1g== finished
DBG StH9D0PPjpkoG7QzoGoqCzVlXjsq3gad1UPUZTdHk08Vv4nlxRgixyUSufg6hCdTURfgMB0ucY0x8Vy7bfct/w== finished
DBG ICFoTrO74Kx+rEInrC3/Wwkio9xffoPvHEvah/gJM3DD6GhOLW1OGzDXTKSqO8fLwP2z4Vw3IJVLjRBN2V+nDA== finished
DBG ePxlMbxYHKN1kBQDK8Xokw+BsyY6rBox0oh5qeZ4X9qU4z8wq1pEXTA9ExNoMTKvTA0IKlKMYLDi8z9Oc64sKQ== finished
DBG FVGpJ7glutTEWLQgt6u09EByfph6v+Zy5gGYRiDpjcDFuXQT8g03SZcs0Ogwlh/65ATZq7NVBKnZsEpK++vNuA== finished
DBG 5liZQnwN5ZpXFheZ3xlwiJ7m1s+TaeHIUIbSlfhV6MnEFY5j18RE8GU5pm4/NGbFlWxlM2uG7XQ1KvLrgLu9Bw== finished
DBG EaCUBrr/niA4boc/Vv2vw2IIzvy6CaNxRdDN479isDj0z37YXwgcMbhG+freKAH5Q50oHyUFWPtG93jf96WT1A== finished
DBG nQ1dGapXrASBrPyNTPr01+xXAWsX7wmsN1bAaQduZ17rRiwj29u0oD/fCN0HM4SqBbLyyRuL0RRMXE1wCr0hHA== finished
DBG 13VXYfnMnc74H8XVgiMbH6ZSHxTGQxkhJfUkIiYOBCfUDxHAIJWopMcsea7hXkBTFpbM9lCDnbDBev1z+uagBw== finished
DBG run command command="ps axo pid,pcpu,pmem,vsz,rss,tty,stat,stime,time,uid,command" provider=ssh
DBG finished query execution qrid=NjgN0od6gI4=
panic: runtime error: index out of range [1] with length 0
goroutine 74 [running]:
go.mondoo.com/cnquery/resources/packs/core/processes.ParseLinuxPsResult({0x10cf0cbc0?, 0x14002116f30?})
/go/pkg/mod/go.mondoo.com/[email protected]/resources/packs/core/processes/unixps.go:57 +0x528
go.mondoo.com/cnquery/resources/packs/core/processes.(*UnixProcessManager).List(0x14000b68ab0)
/go/pkg/mod/go.mondoo.com/[email protected]/resources/packs/core/processes/unixps.go:153 +0x108
go.mondoo.com/cnquery/resources/packs/core.(*mqlProcesses).GetList(0x14000309c98)
/go/pkg/mod/go.mondoo.com/[email protected]/resources/packs/core/processes.go:175 +0x58
go.mondoo.com/cnquery/resources/packs/core.(*mqlProcesses).ComputeList(0x14000309c98)
/go/pkg/mod/go.mondoo.com/[email protected]/resources/packs/core/core.lr.go:8780 +0x4c
go.mondoo.com/cnquery/resources/packs/core.(*mqlProcesses).Compute(0x140011c7a80?, {0x10a6670a9, 0x4})
/go/pkg/mod/go.mondoo.com/[email protected]/resources/packs/core/core.lr.go:8768 +0x94
go.mondoo.com/cnquery/resources.(*Runtime).WatchAndUpdate(0x14001bd36e0, {0x10cfbb0c0, 0x14000309c98}, {0x10a6670a9, 0x4}, {0x140013dcb40, 0x26}, 0x14002116de0)
/go/pkg/mod/go.mondoo.com/[email protected]/resources/runtime.go:284 +0x2dc
go.mondoo.com/cnquery/llx.runResourceFunctionV1(0x14001c11540, 0x14002116cf0, 0x14001b21ea0, 0x2)
/go/pkg/mod/go.mondoo.com/[email protected]/llx/builtin_v1.go:691 +0x1d4
go.mondoo.com/cnquery/llx.(*MQLExecutorV1).runBoundFunctionV1(0x14001c11540, 0x14002116cf0, 0x14001b21ea0, 0x4f59ea0?)
/go/pkg/mod/go.mondoo.com/[email protected]/llx/builtin_v1.go:774 +0xf8
go.mondoo.com/cnquery/llx.(*MQLExecutorV1).runFunction(0x14001c11540, 0x14001b21ea0, 0x125e0a68?)
/go/pkg/mod/go.mondoo.com/[email protected]/llx/llx_v1.go:469 +0x10c
go.mondoo.com/cnquery/llx.(*MQLExecutorV1).runChunk(0x14001c11540, 0x14001c5fa34?, 0x1c5fa38?)
/go/pkg/mod/go.mondoo.com/[email protected]/llx/llx_v1.go:486 +0x260
go.mondoo.com/cnquery/llx.(*MQLExecutorV1).runRef(0x14002116030?, 0xbf69700?)
/go/pkg/mod/go.mondoo.com/[email protected]/llx/llx_v1.go:511 +0xd8
go.mondoo.com/cnquery/llx.(*MQLExecutorV1).runChain(0x14001c11540, 0xa6779da?)
/go/pkg/mod/go.mondoo.com/[email protected]/llx/llx_v1.go:543 +0xb4
go.mondoo.com/cnquery/llx.(*MQLExecutorV1).Run(0x14001c11540)
/go/pkg/mod/go.mondoo.com/[email protected]/llx/llx_v1.go:189 +0x1c0
go.mondoo.com/cnquery/llx.RunV1(0x14001fea000?, 0x10a73f8c6?, 0x18?, 0x1400161d6e0?)
/go/pkg/mod/go.mondoo.com/[email protected]/llx/run_v1.go:15 +0x38
go.mondoo.io/mondoo/policy/executor/internal.(*executionManager).executeCodeBundle(0x14001b82410, 0x14001b28140, 0x14000c57ec8?, {0x0, 0x0})
/builds/mondoolabs/mondoo/policy/executor/internal/execution_manager.go:177 +0x360
go.mondoo.io/mondoo/policy/executor/internal.(*executionManager).Start.func1()
/builds/mondoolabs/mondoo/policy/executor/internal/execution_manager.go:87 +0x15c
created by go.mondoo.io/mondoo/policy/executor/internal.(*executionManager).Start
/builds/mondoolabs/mondoo/policy/executor/internal/execution_manager.go:56 +0x74
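The panic above comes from `ParseLinuxPsResult` indexing into a field slice built from a `ps` output line that has fewer columns than expected. A minimal defensive sketch of that kind of parser (hypothetical names and field layout — this is not cnquery's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// psProcess holds a couple of fields from one `ps axo ...` output line.
type psProcess struct {
	Pid     string
	Command string
}

// parsePsLines skips the header row and any line with fewer columns than
// expected, instead of indexing blindly (the cause of the panic above).
func parsePsLines(out string, minFields int) []psProcess {
	var procs []psProcess
	for i, line := range strings.Split(out, "\n") {
		fields := strings.Fields(line)
		if i == 0 || len(fields) < minFields {
			continue // header, empty, or malformed line: skip, don't panic
		}
		procs = append(procs, psProcess{Pid: fields[0], Command: fields[minFields-1]})
	}
	return procs
}

func main() {
	out := "PID COMMAND\n1 init\n\n42 sshd"
	procs := parsePsLines(out, 2)
	fmt.Println(len(procs)) // → 2; the empty line is skipped instead of crashing
}
```

The key point is the length check before any positional access; busybox `ps` on devices like the one in the log often emits fewer columns than a GNU `ps`.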
Describe the bug
There is markdown in the help output for cnquery completion bash -h
To Reproduce
Steps to reproduce the behavior: run cnquery completion bash -h and inspect the help output.
Expected behavior
Plain help text suitable for the shell, without any markdown markup.
Screenshots
Generate the autocompletion script for the bash shell.
This script depends on the 'bash-completion' package.
If it is not installed already, you can install it via your OS's package manager.
To load completions in your current shell session:
source <(cnquery completion bash)
To load completions for every new session, execute once:
#### Linux:
cnquery completion bash > /etc/bash_completion.d/cnquery
#### macOS:
cnquery completion bash > $(brew --prefix)/etc/bash_completion.d/cnquery
You will need to start a new shell for this setup to take effect.
Usage:
cnquery completion bash
Flags:
-h, --help help for bash
--no-descriptions disable completion descriptions
Global Flags:
--config string Set config file path (default $HOME/.config/mondoo/mondoo.yml)
--log-level string Set log level: error, warn, info, debug, trace (default "info")
-v, --verbose Enable verbose output
We should encourage good issues and pull requests by adding templates to this repo.
What is not working as you expected?
When you run cnquery scan with no target, you get the following:
cnquery scan
→ no configuration file provided
! No credentials provided. Switching to --incogito mode.
→ discover related assets for 0 asset(s)
→ resolved assets resolved-assets=0
FTL failed to run scan error="could not find an asset that we can connect to"
Where on the platform does it happen?
cnquery
binary on a local system
How do we replicate the issue?
Run cnquery scan with no target.
Expected behavior (i.e. solution)
When a user runs cnquery scan with no target, I would expect a useful error message stating that a target was expected but none was specified, followed by the help output for cnquery scan.
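The expected check could be as simple as validating the argument count up front, before asset resolution runs. A sketch under that assumption (plain arg handling, not cnquery's actual CLI wiring):

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// requireTarget turns a missing scan target into an actionable error,
// rather than the generic "could not find an asset that we can connect to".
func requireTarget(args []string) error {
	if len(args) == 0 {
		return errors.New("a scan target is required, e.g. `cnquery scan local`; run `cnquery scan --help` for the list of providers")
	}
	return nil
}

func main() {
	// With no arguments, print the actionable error instead of resolving assets.
	if err := requireTarget(os.Args[1:]); err != nil {
		fmt.Println("FTL", err)
	}
}
```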
Other Comments
https://lede.readthedocs.io/en/latest/
This is what new Ubiquiti APs run:
uname -s
Linux
uname -m
aarch64
uname -r
4.4.198
os-release:
NAME="LEDE"
VERSION="17.01.6, Reboot"
ID="lede"
ID_LIKE="lede openwrt"
PRETTY_NAME="LEDE Reboot 17.01.6"
VERSION_ID="17.01.6"
HOME_URL="http://lede-project.org/"
BUG_URL="http://bugs.lede-project.org/"
SUPPORT_URL="http://forum.lede-project.org/"
BUILD_ID="r3979-2252731af4"
LEDE_BOARD="mtk/mt7622"
LEDE_ARCH="aarch64_cortex-a53_neon-vfpv4"
LEDE_TAINTS="no-all mklibs busybox"
LEDE_DEVICE_MANUFACTURER="LEDE"
LEDE_DEVICE_MANUFACTURER_URL="http://lede-project.org/"
LEDE_DEVICE_PRODUCT="Generic"
LEDE_DEVICE_REVISION="v0"
LEDE_RELEASE="LEDE Reboot 17.01.6 r3979-2252731af4"
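Platform detection for devices like this one could key off the os-release ID and ID_LIKE fields shown above. A hedged sketch of such a parser (function names are illustrative, not cnquery's actual detector):

```go
package main

import (
	"fmt"
	"strings"
)

// parseOsRelease turns os-release KEY="value" lines into a map.
func parseOsRelease(content string) map[string]string {
	kv := map[string]string{}
	for _, line := range strings.Split(content, "\n") {
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue // skip blank or malformed lines
		}
		kv[strings.TrimSpace(k)] = strings.Trim(strings.TrimSpace(v), `"`)
	}
	return kv
}

// isOpenWrtLike reports whether ID or ID_LIKE mentions lede or openwrt,
// which would match the LEDE build above (ID_LIKE="lede openwrt").
func isOpenWrtLike(kv map[string]string) bool {
	ids := kv["ID"] + " " + kv["ID_LIKE"]
	return strings.Contains(ids, "lede") || strings.Contains(ids, "openwrt")
}

func main() {
	content := "NAME=\"LEDE\"\nID=\"lede\"\nID_LIKE=\"lede openwrt\"\nVERSION_ID=\"17.01.6\"\n"
	kv := parseOsRelease(content)
	fmt.Println(kv["NAME"], isOpenWrtLike(kv)) // → LEDE true
}
```

Matching on ID_LIKE rather than NAME means OpenWrt derivatives and rebrands (like these Ubiquiti builds) are caught without enumerating each one.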
register Registers Client with Mondoo Platform
unregister Unregister Client from Mondoo Platform
The capitalized "Client" is a leftover from Mondoo.
I am trying to query all of my GCP compute instances, but I am getting different results from the gcloud CLI and cnquery:
gcloud compute instances list --filter="status:running"
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
instance-edge us-central1-a e2-medium 10.128.15.234 x.x.x.x RUNNING
windows-gitlab us-central1-a e2-medium 10.128.15.229 x.x.x.x RUNNING
terraform-instance us-central1-c f1-micro 10.128.0.2 x.x.x.x RUNNING
gcloud.compute.instances.where( status == "RUNNING" ) { name }
gcloud.compute.instances.where: [
0: {
name: "instance-edge"
}
1: {
name: "windows-gitlab"
}
]
Describe the bug
Trying out the different output formats with cnquery scan, it appears that output to csv and yaml is not working:
cnquery scan aws -f core/mondoo-aws-incident-response.mql.yaml --output csv > /tmp/aws-incident.csv
→ no configuration file provided
! Scanning with local bundles will switch into --incognito mode by default. Your results will not be sent upstream.
→ discover related assets for 1 asset(s)
→ resolved assets resolved-assets=1
→ connecting to asset AWS Account lunalectric-management (177043759486) (api)
FTL failed to print error="unknown reporter type, don't recognize this Format"
cnquery scan aws -f core/mondoo-aws-incident-response.mql.yaml --output yaml > /tmp/aws-incident.yaml
→ no configuration file provided
! Scanning with local bundles will switch into --incognito mode by default. Your results will not be sent upstream.
→ discover related assets for 1 asset(s)
→ resolved assets resolved-assets=1
→ connecting to asset AWS Account lunalectric-management (177043759486) (api)
FTL failed to print error="unknown reporter type, don't recognize this Format"
To Reproduce
Steps to reproduce the behavior:
Run cnquery scan <target> -f <pack> --output csv (or --output yaml).
Expected behavior
The report is written in the requested csv or yaml format instead of failing with a reporter error.
Desktop (please complete the following information):
cnquery version
cnquery 7.1.0 (13a4a7b6, 2022-10-24T20:50:08Z)
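The "unknown reporter type" failure could at least enumerate the formats it does support. A sketch of that kind of lookup (the format list here is a guess for illustration, not cnquery's actual set):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// supportedFormats is an illustrative set; the real list lives in cnquery's reporter.
var supportedFormats = map[string]bool{
	"compact": true, "full": true, "json": true, "csv": true, "yaml": true,
}

// lookupFormat normalizes the requested format and, on a miss, returns an
// error naming every valid option instead of "unknown reporter type".
func lookupFormat(name string) (string, error) {
	name = strings.ToLower(strings.TrimSpace(name))
	if supportedFormats[name] {
		return name, nil
	}
	known := make([]string, 0, len(supportedFormats))
	for f := range supportedFormats {
		known = append(known, f)
	}
	sort.Strings(known) // deterministic ordering for the error message
	return "", fmt.Errorf("unknown output format %q, expected one of: %s", name, strings.Join(known, ", "))
}

func main() {
	if _, err := lookupFormat("cvs"); err != nil {
		fmt.Println("FTL failed to print error:", err)
	}
}
```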
Describe the bug
If you use the install script and already have cnquery installed it says it's updating Mondoo for you.
To Reproduce
Steps to reproduce the behavior: run the install script on a system that already has cnquery installed.
Expected behavior
All references should say cnquery instead of Mondoo.
Screenshots
* Mondoo cnquery is already installed. Updating Mondoo...
* Upgrade Mondoo cnquery via 'brew upgrade'
Running `brew update --auto-update`...
Describe the bug
Autocomplete suggestions in the k8s shell are missing descriptions.
To Reproduce
Steps to reproduce the behavior:
cnquery shell k8s
k8s.
then press Tab to get the autocomplete suggestions.
Expected behavior
admissionreview should have a description
Desktop (please complete the following information):
N/A
Additional context
N/A
Describe the bug
When scanning a manifest file, the asset is still assigned the generic kubernetes platform, which is not specific and potentially causes confusion when we list all the other platforms within. This should probably be something like k8s-manifest.
To Reproduce
Steps to reproduce the behavior:
cnquery shell k8s ~/dev/example.yml
K8S Manifest dev (kubernetes)
Expected behavior
A manifest specific asset platform.
Desktop (please complete the following information):
N/A
Additional context
N/A
Describe the bug
If I run commands such as cnquery run or cnquery shell with the --config flag set to an invalid path, I don't get any sort of warning. An invalid config error is raised only when running mondoo status.
To Reproduce
Steps to reproduce the behavior:
cnquery --config /bogus shell local
! could not load configuration file /bogus
Expected behavior
All commands that take a config should throw a warning or error if the config is not valid.
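A cheap guard would be to stat an explicitly passed --config path up front and fail fast. A sketch of that check (assumed behavior, not cnquery's actual config loading):

```go
package main

import (
	"fmt"
	"os"
)

// checkConfigPath errors early when an explicitly provided config file
// does not exist, instead of silently falling back to defaults.
func checkConfigPath(path string) error {
	if path == "" {
		return nil // no --config flag given; defaults apply
	}
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("could not load configuration file %s: %w", path, err)
	}
	return nil
}

func main() {
	// Mirrors `cnquery --config /bogus shell local`.
	if err := checkConfigPath("/bogus"); err != nil {
		fmt.Println("!", err)
	}
}
```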
Screenshots
N/A
Running cnquery on local returns a process list:
DEBUG=1 cnquery run local --sudo -c 'processes.list.length'
...
processes.list.length: 404
Adding --record to the command results in an empty list:
DEBUG=1 cnquery run local --record --sudo -c 'processes.list.length'
...
processes.list.length: 0
Describe the bug
The ascii art banner is hard to read, especially on a dark terminal. We need to switch to a better font.
To Reproduce
Steps to reproduce the behavior:
cnquery -h
Expected behavior
An easy-to-read banner name.