diagnostic-collection's Introduction

DataStax Diagnostic Collector for Apache Cassandra™ and DataStax Enterprise (DSE)™

A script for collecting a diagnostic snapshot from each node in a Cassandra-based cluster.

The code for the collector script is in the ds-collector/ directory. It must first be built into a collector tarball.

This collector tarball is then extracted onto a bastion or jumpbox that has access to the nodes in the cluster. Once extracted, the configuration file (collector.conf) can be edited to match any cluster deployment customisations (e.g. non-default port numbers, non-default log location, etc). The ds-collector script can then be executed; first in test mode and then in collection mode.

Pre-configuring the Collector Configuration

When building the collector, it can be instructed to pre-configure the collector.conf by setting the following variables:

# If the target cluster being collected is DataStax Enterprise, please set is_dse=true, otherwise it will assume Apache Cassandra.
export is_dse=true
# If the target cluster is running on docker containers, please set is_docker=true, this will result in the script issuing commands via docker and not ssh.
export is_docker=true
# If the target cluster is running on k8s, please set is_k8s=true, this will result in the script issuing commands via kubectl and not ssh.
export is_k8s=true

If no variables are set, then the collector will be pre-configured to assume Apache Cassandra running on hosts which can be accessed via SSH.

Building the Collector

Build the collector using the following make command syntax. You will need make and Docker.

# The ISSUE variable is typically a JIRA ID, but can be any unique label
export ISSUE=<JIRA_ID>
make

This will generate a .tar.gz tarball with the issueId set in the packaged configuration file. The archive will be named in the format ds-collector.$ISSUE.tar.gz.

Building the Collector with automatic S3 upload ability

If the collector is built with the following variables defined, all collected diagnostic snapshots will be encrypted and uploaded to a specific AWS S3 bucket. Encryption will use a one-off encryption key that is generated locally at build time.

export ISSUE=<JIRA_ID>
# AWS Key and secret for S3 bucket, where the diagnostic snapshots will be uploaded to
export COLLECTOR_S3_BUCKET=yourBucket
export COLLECTOR_S3_AWS_KEY=yourKey
export COLLECTOR_S3_AWS_SECRET=yourSecret
make

To use this feature you will need the aws-cli and openssl installed on your local machine as well.

This will then generate a .tar.gz tarball as described above, additionally with the AWS credentials set in the packaged configuration file, and the bucket name set within the ds-collector script.

In addition to the .tar.gz tarball, an encryption key is now generated. The encryption key must be placed in the same directory as the extracted collector tarball for it to execute. If the tarball is being sent to someone else, it is recommended to send the encryption key via a different (and preferably secured) medium.

Storing Encryption keys within the AWS Secrets Manager

The collector build process also supports storing and retrieving keys from AWS Secrets Manager. To use this feature, two additional environment variables must be provided before the script is run.

export ISSUE=<JIRA_ID>
# AWS Key and secret for S3 bucket, where the diagnostic snapshots will be uploaded to
export COLLECTOR_S3_BUCKET=yourBucket
export COLLECTOR_S3_AWS_KEY=yourKey
export COLLECTOR_S3_AWS_SECRET=yourSecret
# AWS Key and secret for Secrets Manager, where the one-off build-specific encryption key will be stored
export COLLECTOR_SECRETSMANAGER_KEY=anotherKey
export COLLECTOR_SECRETSMANAGER_SECRET=anotherSecret
make

When the collector is built, it will also upload the generated encryption key to the Secrets Manager, as defined by the COLLECTOR_SECRETSMANAGER_* variables.

Please be careful with the encryption keys. They should only be stored in a secure vault (such as the AWS Secrets Manager), and temporarily on the jumpbox or bastion where and while the collector script is being executed. The encryption key ensures the diagnostic snapshots are secured when transferred over the network and stored in the AWS S3 bucket.

Executing the Collector Script against a Cluster

Instructions for execution of the Collector script are found in ds-collector/README.md. These instructions are also bundled into the built collector tarball.

diagnostic-collection's People

Contributors

adejanovski, alexott, andrewhogg, ben-dse, brendancicchi, jmoses-ds, joelsdc, michaelsembwever, mieslep, msmygit, ossarga, romainanselin, rzvoncek, rzvoncek-ds


diagnostic-collection's Issues

Timeout value should be configurable

For larger JMX metrics collections, the timeout of 120 seconds is too short, resulting in files truncated mid-JSON. It would be useful to make the timeout configurable so that we can give it more time on busier / larger nodes.
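A minimal sketch of what a configurable timeout could look like, assuming a COLLECT_TIMEOUT variable (a hypothetical name, not an existing collector option) that defaults to the current 120 seconds:

```shell
# Hypothetical: COLLECT_TIMEOUT is not an existing collector option.
# Fall back to the current hard-coded 120s when it is unset.
COLLECT_TIMEOUT="${COLLECT_TIMEOUT:-120}"

run_with_timeout() {
  # Wrap a command with the configured timeout (uses coreutils `timeout`).
  timeout "${COLLECT_TIMEOUT}" "$@"
}

run_with_timeout echo "jmx collection stand-in"
```

Busy nodes could then export COLLECT_TIMEOUT=600 (or set it in collector.conf) without touching the script.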

Rediscover `clusterName` when performing manual `-a` upload

When using the -a option to separately upload files to s3, we lose the clusterName variable. This variable is used in the s3 destination folder name we upload to. This leaves us open diagnostic snapshots from multiple clusters being uploaded into one s3 folder (which we do not want).

JMX hostname / JMX port is not discovered on a per-node basis

The JMX hostname defaults to 127.0.0.1 while the port is set globally. In scenarios where the JMX port changes per node, or the hostname changes (docker containers for example), this results in the JMX metrics either not being collected, or collected from the wrong location.
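One way per-node discovery could work is reading the port from each node's own cassandra-env.sh, which normally defines a JMX_PORT variable; the parsing below is an illustration against a sample file, not the collector's code:

```shell
# Sample cassandra-env.sh fragment with a non-default port:
envfile=$(mktemp)
echo 'JMX_PORT="7299"' > "$envfile"

# Read the node-local port instead of relying on one global value:
jmx_port=$(grep -e '^JMX_PORT=' "$envfile" | sed -e 's|^JMX_PORT=||' | tr -d '"')
echo "$jmx_port"   # -> 7299
rm -f "$envfile"
```

Run per node (over ssh/docker/kubectl, as the collector already does for other commands), this would pick up per-node overrides.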

Parsing connection port fails with whitespace after port number

When parsing the connection port, the config name/option is included in the result because the sed regex does not match:

# grep -e '^native_transport_port: ' "$CONF_DIR/cassandra.yaml" | sed -e 's|^[^:]*:[ ]*\([^ ]*\)$|\1|' | tr -d "'"
native_transport_port: 9042

Solution: allow trailing whitespace after the port number with [ ]* (note that \w matches word characters, not whitespace):

# grep -e '^native_transport_port: ' "$CONF_DIR/cassandra.yaml" | sed -e 's|^[^:]*:[ ]*\([^ ]*\)[ ]*$|\1|' | tr -d "'"
9042

I noticed the parsing is always this strict, e.g. when parsing IP addresses.
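An alternative (my suggestion, not the project's fix) is to let awk split the fields, which tolerates trailing whitespace without a hand-written anchor:

```shell
# awk splits on ":" plus any following spaces; tr then strips quotes
# and stray blanks from the captured value.
port=$(printf 'native_transport_port: 9042 \n' \
  | awk -F': *' '/^native_transport_port:/ {print $2}' | tr -d "' ")
echo "$port"   # -> 9042
```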

df output can wrap breaking the disk space checks

The output of df can wrap over multiple lines if device names are long. This breaks the disk space checks in the collector script.

For example

$ df -h /tmp/datastax
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_root-lv_tmp
                      6.0G  761M  4.9G  14% /tmp

The use of df's --portability option could be helpful, but it is not available on macOS.
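Note that the short POSIX form df -P is available on macOS even though the GNU long spelling --portability is not, so a check along these lines (a sketch, not the collector's current code) keeps each filesystem on one line portably:

```shell
# -P (POSIX) forces one line per filesystem; -k fixes a 1K block size.
disk_avail_kb() {
  # Column 4 of the -P output is "Available" in 1K blocks.
  df -P -k "$1" | awk 'NR==2 {print $4}'
}

disk_avail_kb /tmp
```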

interactive tool that generates a validated .conf file

Certain common problems in the configuration could be avoided if it were clearer what the problem was. To this end, this feature proposes an interactive tool that is able to generate a valid configuration which:

  1. Gets correct ssh username/password, command-line options
  2. Determines if sudo is needed
  3. Confirms critical commands are available (e.g. cqlsh, nodetool, java) and creates appropriate PATH settings
  4. Gets correct cqlsh username/password/host, command-line options
  5. Confirms S3 upload configuration including encryption and whether or not the files should be uploaded

Search Indexes on columns outside of solr_query cause an exception

When we encounter a schema file which contains entries similar to this:
CREATE CUSTOM INDEX index_name_1 ON ks1.tbl1 (solr_query) USING 'com.datastax.bdp.search.solr.Cql3SolrSecondaryIndex';
CREATE CUSTOM INDEX index_name_2 ON ks1.tbl1 (field1) USING 'com.datastax.bdp.search.solr.Cql3SolrSecondaryIndex';

The solr_query entry is parsed and the core collected, but the 2nd entry results in attempts to collect cores for CREATE, CUSTOM, INDEX, etc., and finally a core for 'com.datastax.bdp.search.solr.Cql3SolrSecondaryIndex' - at which point the single quotes around the name trip up the CQL sufficiently that it raises a Python exception in cqlsh.

The issue comes from line 822: for core in $(grep -e 'CREATE CUSTOM INDEX.*Cql3SolrSecondaryIndex' "$DATA_DIR/driver/schema" 2>/dev/null|sed -e 's|^.* ON \([^ ]*\) (solr_query).*$|\1|'|tr -d '"'); do

This assumes that the solr_query column index exists and none other - pre 5.x DSE Search indexes are displayed with individual fields. To handle both, we need to extract both and then uniq to get the core to grab the metadata of.
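A sketch of the extract-both-then-uniq approach, run against a sample schema file (the regex here is illustrative, not the patched line from the collector):

```shell
schema=$(mktemp)
cat > "$schema" <<'EOF'
CREATE CUSTOM INDEX index_name_1 ON ks1.tbl1 (solr_query) USING 'com.datastax.bdp.search.solr.Cql3SolrSecondaryIndex';
CREATE CUSTOM INDEX index_name_2 ON ks1.tbl1 (field1) USING 'com.datastax.bdp.search.solr.Cql3SolrSecondaryIndex';
EOF

# Capture the table name for every search index, whatever column it
# targets, then de-duplicate so each core's metadata is grabbed once.
cores=$(grep -e 'CREATE CUSTOM INDEX.*Cql3SolrSecondaryIndex' "$schema" \
  | sed -e 's|^.* ON \([^ ]*\) (.*$|\1|' | tr -d '"' | sort -u)
echo "$cores"   # -> ks1.tbl1
rm -f "$schema"
```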

cfhistograms/tablehistograms not working

The file ends up with the following error:

nodetool: tablehistograms requires keyspace and table name arguments
See 'nodetool help' or 'nodetool help <command>'.

We do get the information from scraping jmx metrics, so we should at least stop trying to call the cfhistograms.

jcmd error when pulling java metrics when not running as the Cassandra user

ds-collector v2.0.2:

I've noticed the following error:

	executing `jcmd 8890 VM.system_properties > java_system_properties.txt`… com.sun.tools.attach.AttachNotSupportedException: Unable to open socket file: target process not responding or HotSpot VM not loaded
	at sun.tools.attach.LinuxVirtualMachine.<init>(LinuxVirtualMachine.java:106)
	at sun.tools.attach.LinuxAttachProvider.attachVirtualMachine(LinuxAttachProvider.java:63)
	at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:208)
	at sun.tools.jcmd.JCmd.executeCommandForPid(JCmd.java:147)
	at sun.tools.jcmd.JCmd.main(JCmd.java:131)
failed
	executing `jcmd 8890 VM.command_line > java_command_line.txt`… com.sun.tools.attach.AttachNotSupportedException: Unable to open socket file: target process not responding or HotSpot VM not loaded
	at sun.tools.attach.LinuxVirtualMachine.<init>(LinuxVirtualMachine.java:106)
	at sun.tools.attach.LinuxAttachProvider.attachVirtualMachine(LinuxAttachProvider.java:63)
	at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:208)
	at sun.tools.jcmd.JCmd.executeCommandForPid(JCmd.java:147)
	at sun.tools.jcmd.JCmd.main(JCmd.java:131)
failed

The issue here is running jcmd as a different user than the one owning the process. In this case, root is running the collector and cassandra is running the service; therefore it should be cassandra who runs jcmd instead of root.

As a workaround I have added sudo -u cassandra to the jcmd entries here:

https://github.com/datastax/diagnostic-collection/blob/master/ds-collector/rust-commands/collect-info.rs#L919-L940

This workaround is ugly at best. 😂

As the collector already handles finding the Cassandra PID to run jcmd, one better approach would be to run something like ps -o user= -p${cassandra_pid} once we have the ${cassandra_pid} to get the specific user running Cassandra, and then doing a proper sudo -u ${cassandra_pid_owner} jcmd ... so the command doesn't fail.

I'm not sure what the best approach is code-wise for this one; I think the changes belong more on the Rust side of the collector, and I get lost there.
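The ps-based idea described above could look like this in shell (sketched with the current shell's own PID so it runs anywhere; cassandra_pid would come from the collector's existing PID discovery):

```shell
cassandra_pid=$$                                  # stand-in for the real Cassandra PID
pid_owner=$(ps -o user= -p "$cassandra_pid" | tr -d ' ')

# The collector would then run, e.g.:
#   sudo -u "$pid_owner" jcmd "$cassandra_pid" VM.system_properties
echo "process $cassandra_pid is owned by $pid_owner"
```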

Retry curl uploads

Uploads to S3 can also fail in some cases due to latency. This was remediated by a simple bash loop which just restarted the curl command if it failed.
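The remediation described above, sketched as a reusable helper (the retry function and its arguments are illustrative; the real fix would wrap the existing curl upload command):

```shell
retry() {
  # retry N cmd args... : re-run cmd up to N times, pausing between attempts.
  attempts="$1"; shift
  i=1
  until "$@"; do
    [ "$i" -ge "$attempts" ] && return 1
    i=$((i + 1))
    sleep 1
  done
}

# Real usage would look like: retry 3 curl --fail -sS -T snapshot.tar.gz.enc "$url"
retry 3 true && echo "upload ok"
```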

when encrypt_uploads, verify md5 of .key file and attempt to fix before failing

A number of .enc artifacts have been uploaded but have been unable to be decrypted. It appears a common failure is added DOS line endings (\r). To protect against this, an enhancement is requested to add an md5 checksum of the .key file.

If encrypted uploads to S3 are enabled, verify that the .key file is as expected. If it is not, attempt to get it to match by stripping \r from the file. If it still does not match, fail with a clear explanation.
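A sketch of the requested check, assuming the expected md5 is shipped alongside the tarball (the file names and the simulated key are illustrative):

```shell
keyfile=$(mktemp)
printf 'SECRETKEY\r\n' > "$keyfile"                          # key damaged with DOS line endings
expected=$(printf 'SECRETKEY\n' | md5sum | awk '{print $1}') # checksum of the pristine key

actual=$(md5sum "$keyfile" | awk '{print $1}')
if [ "$actual" != "$expected" ]; then
  # Strip \r and re-check before giving up.
  tr -d '\r' < "$keyfile" > "$keyfile.fixed" && mv "$keyfile.fixed" "$keyfile"
  actual=$(md5sum "$keyfile" | awk '{print $1}')
fi

if [ "$actual" = "$expected" ]; then
  echo "key verified"
else
  echo "key does not match its checksum, refusing to encrypt" >&2
fi
rm -f "$keyfile"
```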

Package installation detection failure

The detection of packaged installation for both OSS C* and DSE fails.

# COSS package install
       if [ -z "$ROOT_DIR" ] && [ -d "/etc/cassandra" ] && [ -d "/usr/share/cassandra" ]; then
           IS_PACKAGE="true"
... ...
# DSE install
   elif [ "$TYPE" == "dse" ]; then
       IS_DSE="true"
       # DSE package install
       debug "DSE install: Checking install type..."
       if [ -z "$ROOT_DIR" ] && [ -d "/etc/dse" ] && [ -f "/etc/default/dse" ] && [ -d "/usr/share/dse/" ]; then
           IS_PACKAGE="true"

In the first if condition ([ -z "$ROOT_DIR" ]), "-z" should be "-d".

File with hosts isn't specified, or doesn't exist - despite providing the file in cmd

Hi,
I'm running such a command:
./collect_diag.sh -t "dse" -f /home/ec2-user/devops/repos/diagnostic-collection/dse_nodes -o /mnt/shared/dse_diagnostics/20210614_1440

And for some strange reason getting:
File with hosts isn't specified, or doesn't exist, using 'nodetool status'

I added some debug options and this is what I see (added set -eux)

./collect_diag.sh -t "dse" -f /home/ec2-user/devops/repos/diagnostic-collection/dse_nodes -o /mnt/shared/dse_diagnostics/20210614_1440
+ '[' 4 -lt 4 ']'
+ NT_OPTS=
+ CQLSH_OPTS=
+ DT_OPTS=
++ pwd
+ OLDWD=/home/ec2-user/devops/repos/diagnostic-collection
++ mktemp -d
+ OUT_DIR=/tmp/tmp.aNy02q0NF7
+ TIMEOUT=600
+ HOST_FILE=
+ SSH_OPTS=
+ NT_OPTS=
+ COLLECT_OPTS=
+ REMOVE_OPTS=
+ INSIGHT_COLLECT_OPTS=
+ VERBOSE=
+ TYPE=
+ ENCRYPTION_KEY=
+ TICKET=
+ S3_BUCKET=
+ DSE_DDAC_ROOT=
+ getopts :hzivrk:c:n:d:f:o:p:s:t:u:I:m:e:S:K:T:B:P: opt
+ case $opt in
+ TYPE=dse
+ getopts :hzivrk:c:n:d:f:o:p:s:t:u:I:m:e:S:K:T:B:P: opt
+ case $opt in
+ HOST_FILE= /home/ec2-user/devops/repos/diagnostic-collection/dse_nodes
+ getopts :hzivrk:c:n:d:f:o:p:s:t:u:I:m:e:S:K:T:B:P: opt
+ case $opt in
+ OUT_DIR= /mnt/shared/dse_diagnostics/20210614_1440
+ getopts :hzivrk:c:n:d:f:o:p:s:t:u:I:m:e:S:K:T:B:P: opt
+ shift 4
+ echo 'Using output directory:  /mnt/shared/dse_diagnostics/20210614_1440'
Using output directory:  /mnt/shared/dse_diagnostics/20210614_1440
+ check_type
+ '[' dse '!=' ddac ']'
+ '[' dse '!=' coss ']'
+ '[' dse '!=' dse ']'
+ '[' dse = ddac ']'
+ TMP_HOST_FILE=
+ echo 'HOST_FILE:  /home/ec2-user/devops/repos/diagnostic-collection/dse_nodes'
HOST_FILE:  /home/ec2-user/devops/repos/diagnostic-collection/dse_nodes
+ '[' -z  /home/ec2-user/devops/repos/diagnostic-collection/dse_nodes ']'
+ '[' '!' -f  /home/ec2-user/devops/repos/diagnostic-collection/dse_nodes ']'
+ echo 'File with hosts isn'\''t specified, or doesn'\''t exist, using '\''nodetool status'\'''
File with hosts isn't specified, or doesn't exist, using 'nodetool status'
+ TMP_HOST_FILE= /mnt/shared/dse_diagnostics/20210614_1440/diag-hosts.2116
+ nodetool status
./collect_diag.sh: line 198: nodetool: command not found
+ grep -e '^UN'
+ sed -e 's|^UN [ ]*\([^ ]*\) .*$|\1|'
./collect_diag.sh: line 198:  /mnt/shared/dse_diagnostics/20210614_1440/diag-hosts.2116: No such file or directory

while:

cat /home/ec2-user/devops/repos/diagnostic-collection/dse_nodes
cassandra-1.
cassandra-2
cassandra-3
cassandra-4

Checked the script, conditional looks pretty legit, file permissions also:

ls -alh /home/ec2-user/devops/repos/diagnostic-collection/dse_nodes
-rw-rw-r-- 1 ec2-user ec2-user 124 Jun 14 12:50 /home/ec2-user/devops/repos/diagnostic-collection/dse_nodes

I'm running this as the ec2-user. Tried with bash, zsh - no luck.

Missing credentialized "dsetool" and "dse client-tool" commands

Seems like the tool does not pass the cqlsh credentials needed to activate dsetool in secured clusters. Similarly, the "dse" command has some subcommands that require security credentials. I encountered this after setting "is_dse=true" along with the cqlsh and jmx credentials parameters being set in collector.conf and activating a collection against a cluster using authentication. The "dsetool" calls are all failing with authentication errors complaining about the need for credentials.

Cache results from docker ps command

When there are a large number of nodes, translate_ipaddresses_to_docker_container_ids runs two nested for loops:
for host in ${cassandraNodes} ; do
for container_id in $(docker ps -q) ; do

This results in the docker ps command being run number_of_nodes * number_of_containers_on_host times, which is slow / inefficient.
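A sketch of the caching fix: run docker ps once before the loops and reuse the result (docker is stubbed here so the sketch runs without a daemon):

```shell
docker() { printf 'c1\nc2\nc3\n'; }        # stub standing in for `docker ps -q`

cassandraNodes="10.0.0.1 10.0.0.2"
container_ids=$(docker ps -q)              # single invocation, result cached
matches=0
for host in ${cassandraNodes} ; do
  for container_id in ${container_ids} ; do
    matches=$((matches + 1))               # IP-to-container matching would go here
  done
done
echo "$matches"   # 2 hosts x 3 containers iterated, but docker ps ran only once
```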

list_cassandra_nodes does not use the path variable when executing nodetool to discover the node list.

Within the configuration, addPath / prependPath is used to ensure nodetool is on the path for the calling command. However, if the nodes are specified via a single contact node rather than a list of nodes, the script attempts to auto-discover the cluster IPs. This is done in the list_cassandra_nodes() function. The function itself does not use the path configuration, and its comment alludes to this. As a result, the likelihood is that if you needed to specify a path in the configuration to access nodetool, it would also be needed for this part of the process.

# shellcheck disable=SC2086
list_cassandra_nodes() {
  # change this if there's an alias, or a full path needs to be specified
  nodetoolCmd="nodetool"

It should use the same pathing as its default approach, so that when the path is configured (because nodetool is not on the path) this initial step doesn't fail with nodetool not found.

Add an /upload switch

Just brainstorming but it'd be pretty amazing to have the diagnostic be automatically posted to some location to cut out a step for users.

nodetool options are being ignored

$ ./collect_diag.sh -n "-u cassandra -pw cassandra" -t dse
Using output directory: /tmp/tmp.gZsxsyrvhg
File with hosts isn't specified, or doesn't exist, using 'nodetool status'
error: Authentication failed! Credentials required
-- StackTrace --
java.lang.SecurityException: Authentication failed! Credentials required
	at com.sun.jmx.remote.security.JMXPluggableAuthenticator.authenticationFailure(JMXPluggableAuthenticator.java:211)

Collect the output of env

Our internal tooling expects the output of env to be present in the collected artefacts. This toolkit seems to be missing it.

trap console to file, masking all sensitive data

When debugging it is common to ask for the set -x change and for the console output to be uploaded.

The console output isn't always re-attainable, and can contain sensitive information.

Is it possible to trap it to a file, masking out sensitive info, so it's ready and simple to share?

collect_node_diag trips over comments in cassandra.yaml

DSE is shipped with a cassandra.yaml that has this line:

broadcast_address: # Leave unset or clear...

We had left the comment to remind ourselves that empty is a valid option. But as it turns out, collect_node_diag.sh trips over it:

Collecting data from node broadcast_address: # Leave unset or clear...
Can't execute cqlsh command, exit code: 1. If you're have cluster with authentication,
please pass the option -c with user name/password and other options, like:
-c '-u username -p password'
If you have SSL enabled for client connections, pass --ssl in -c

No data is collected. This is similar to, but different from, issue #15. The system is a fully up-to-date RHEL7 with sed 4.2.2-7 in case the suspicion goes there.

The workaround is trivial, removing these comments solves it fully.

os/env.txt can leak credentials

os/env.txt contains the output of env command to store environmental variables.

However, ds-collector sets cqlsh/nodetool credentials as environment variables, so these values can appear in the file as well.

% cat os/env.txt | grep -i -e pass -e pw
jmxPassword=xxxxxxx
PWD=/home/zzz
cqlshPassword=xxx
nodetoolCredentials=-u ops -pw xxxxxxx
cqlshOpts= --username=nosql_ops --password=xxx

Consider skipping output of these variables.
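A sketch of the suggested skip/mask step (the name patterns are my guess at what counts as sensitive; over-matching, e.g. catching PWD, errs in the safe direction):

```shell
mask_env() {
  # Replace the value of any variable whose name hints at a credential.
  env | sed -E 's/^([^=]*([Pp]ass|PASS|[Pp]w|PW|[Ss]ecret|SECRET)[^=]*)=.*/\1=********/'
}

export cqlshPassword='hunter2'   # simulated leaked credential
masked=$(mask_env | grep '^cqlshPassword=')
echo "$masked"   # -> cqlshPassword=********
```

Writing os/env.txt via mask_env instead of plain env would keep the variable names (useful for support) while hiding the values.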

Have a global flag for root-level gathering

It would be nice to have a settings flag that defaults to true for gathering information requiring root permissions. Some organizations don't allow production root access for scripts and it would be nice to still gather what we can in those cases.

OS information collected is not rich enough

The OS information collected is based on uname to get the kernel name, release etc, but does not indicate which OS this is (RHEL, Fedora etc). We can add in a call to cat /etc/*-release > "$DATA_DIR/os.txt" to obtain this.

collect-info binary missing

The README.md mentions collect-info should be (as binary) present in the download.
For me (downloading the 2.0.2 zip release) it is not. Is something going wrong here?

I ran into some other issues as well for which I'll try to create PRs later. Thanks.

Finding *-Statistics.db in multiple 'data_file_directories' locations fails

ds-collect latest (v2.0.2) / Ubuntu 16

when running the collector I’ve noticed in the logs the following error:

	executing `find /storage/cassandra/data,/storage2/cassandra/data,/storage3/cassandra/data -maxdepth 3 -name *-Statistics.db -exec cp --parents {} /tmp/datastax/20db1.lax1.gogii.net_artifacts_2022_04_21_1232_1650569548/sstable-statistics/ ; > `… find: ‘/storage/cassandra/data,/storage2/cassandra/data,/storage3/cassandra/data’: No such file or directory
failed

It seems like find is trying to search a single path named /storage/cassandra/data,/storage2/cassandra/data,/storage3/cassandra/data instead of 3 different paths.

I tried manually searching for Statistic files and we have plenty:

root@20db1:~# find /storage/cassandra/data -maxdepth 3 -name *-Statistics.db | wc -l
2400
root@20db1:~# find /storage2/cassandra/data -maxdepth 3 -name *-Statistics.db | wc -l
2266
root@20db1:~# find /storage3/cassandra/data -maxdepth 3 -name *-Statistics.db | wc -l
2401
root@20db1:~#

If I run the find the same way the collector does, it fails:

root@20db1:~# find /storage/cassandra/data,/storage2/cassandra/data,/storage3/cassandra/data -maxdepth 3 -name *-Statistics.db
find: ‘/storage/cassandra/data,/storage2/cassandra/data,/storage3/cassandra/data’: No such file or directory
root@20db1:~#

I believe the issue is that the find command receives the multiple search paths concatenated with commas; they must be passed as separate arguments:

root@20db1:~# find /storage/cassandra/data /storage2/cassandra/data /storage3/cassandra/data -maxdepth 3 -name *-Statistics.db | wc -l
7063
root@20db1:~#

I've narrowed down the issue to:

cassandra_data_dir=$(sed -n '/^data_file_directories:/,/^[^- ]/{//!p;};/^data_file_directories:/d' "$configHome/cassandra.yaml" | grep -e "^[ ]*-" | sed -e "s/^.*- *//" | tr $'\n' ',' | sed -e "s/.$/\n/")

The fix for that specific line to have a result of 3 different paths instead of one for $cassandra_data_dir would be to replace:
tr $'\n' ','
with
tr $'\n' ' '

    cassandra_data_dir=$(sed -n '/^data_file_directories:/,/^[^- ]/{//!p;};/^data_file_directories:/d' "$configHome/cassandra.yaml" | grep -e "^[ ]*-" | sed -e "s/^.*- *//" | tr $'\n' ' ' | sed -e "s/.$/\n/")

The problem with that fix is that it's likely to break things further down, as some later parsing expects the ',' to be there to continue working correctly... so changes are required in multiple places, or new code must be added to address this.
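A POSIX-safe way to hand find separate path arguments while leaving the comma-separated variable itself alone (demo directories stand in for the real data paths):

```shell
# Demo layout standing in for data_file_directories:
base=$(mktemp -d)
mkdir -p "$base/d1" "$base/d2"
touch "$base/d1/x-Statistics.db" "$base/d2/y-Statistics.db"
cassandra_data_dir="$base/d1,$base/d2"

# Split the comma list into positional parameters just for the find call:
old_ifs=$IFS; IFS=','
set -- $cassandra_data_dir
IFS=$old_ifs

found=$(find "$@" -maxdepth 3 -name '*-Statistics.db' | wc -l)
echo "$found"   # -> 2
rm -rf "$base"
```

This keeps the ',' in $cassandra_data_dir for the parsing elsewhere, and only the find invocation changes.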

Thanks!
Joel.

k8s line endings have CR

In determining the cluster name, it was discovered that a rogue \r was getting appended. Root cause:

Defaulted container "cassandra" out of: cassandra, server-system-logger, server-config-init (init)
cassandra@cluster1-dc1-default-sts-0:/$ nodetool -h 127.0.0.1 -p 7199   describecluster > /tmp/d.out
cassandra@cluster1-dc1-default-sts-0:/$ exit
phil.miesle@cc-dc-mck:~/collector/collector$ kubectl -n cass-operator cp cluster1-dc1-default-sts-0:tmp/d.out d.out
Defaulted container "cassandra" out of: cassandra, server-system-logger, server-config-init (init)
phil.miesle@cc-dc-mck:~/collector/collector$ file d.out
d.out: ASCII text
phil.miesle@cc-dc-mck:~/collector/collector$ kubectl -n cass-operator exec -ti cluster1-dc1-default-sts-0 -- /bin/bash -c 'nodetool -h 127.0.0.1 -p 7199   describecluster' > d2.out
Defaulted container "cassandra" out of: cassandra, server-system-logger, server-config-init (init)
phil.miesle@cc-dc-mck:~/collector/collector$ file d2.out
d2.out: ASCII text, with CRLF line terminators
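The rogue \r comes from kubectl exec -ti: the -t flag allocates a TTY, and the TTY layer translates LF into CRLF on output. Dropping -t for non-interactive commands avoids it; alternatively the captured output can be normalised (sketched with a stand-in for the kubectl pipeline):

```shell
# Stand-in for: kubectl exec -ti <pod> -- nodetool describecluster
raw_output() { printf 'Test Cluster\r\n'; }

cluster_name=$(raw_output | tr -d '\r')   # strip the rogue carriage returns
echo "$cluster_name"   # -> Test Cluster
```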

Cassandra passwords with spaces

How do you escape a C* password that contains spaces? For example, here is a command that assumes the default username and password for C*:

./collect_node_diag.sh -t coss -f output.tar.gz -c "-u cassandra -p cassandra"

But what if the password contains spaces? For example if the password was pass word?

Can I use a backslash to escape the space? Are escape characters supported?
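Whether escapes survive depends on how collect_node_diag.sh expands the -c string internally; if it (or the caller) re-splits the string with eval, embedded double quotes keep the password together as one argument. A sketch of that splitting behaviour (illustrative only, not the script's documented handling):

```shell
opts='-u cassandra -p "pass word"'

# eval honours the embedded quotes when rebuilding the argument list:
eval "set -- $opts"
echo "argc=$# last=$4"   # -> argc=4 last=pass word
```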

JMX metrics malformed by exception logged to the file

Running the collector, the following stack trace was embedded within the json output, causing it to be malformed for parsing:

getting attribute IdealConsistencyLevel of org.apache.cassandra.db:type=StorageProxy threw an exception: javax.management.RuntimeMBeanException: java.lang.NullPointerException
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1445)
at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
at javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:639)
at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357)
at sun.rmi.transport.Transport$1.run(Transport.java:200)
at sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:573)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:834)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:687)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:303)
at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:279)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:164)
at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
at javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown Source)
at javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:903)
at org.gridkit.jvmtool.cmd.MxDumpCmd$MxDump.writeAttribute(MxDumpCmd.java:145)
at org.gridkit.jvmtool.cmd.MxDumpCmd$MxDump.listBeans(MxDumpCmd.java:124)
at org.gridkit.jvmtool.cmd.MxDumpCmd$MxDump.run(MxDumpCmd.java:89)
at org.apache.cassandra.tools.nodetool.Sjk$Wrapper.run(Sjk.java:183)
at org.apache.cassandra.tools.nodetool.Sjk.execute(Sjk.java:70)
at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:314)
at org.apache.cassandra.tools.nodetool.Sjk.run(Sjk.java:57)
at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:63)
Caused by: java.lang.NullPointerException
at org.apache.cassandra.service.StorageProxy.getIdealConsistencyLevel(StorageProxy.java:2516)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:72)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:276)
at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1445)
at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
at javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:639)
at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357)
at sun.rmi.transport.Transport$1.run(Transport.java:200)
at sun.rmi.transport.Transport$1.run(Transport.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:573)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:834)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:687)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

add a dseConfHome variable

Collector currently hard-codes to look for conf/dse files in /etc/default/dse, but this can vary by environment.
