
couchbase-shell's Introduction

Couchbase Shell - Shell Yeah!


Couchbase Shell (cbsh) is a modern, productive and fun shell for Couchbase Server and Cloud.

Note that while the project is maintained by Couchbase, it is not covered under the EE support contract. We are providing community support through this bug tracker.

The documentation is available here.

Quickstart

First, download the archive for your operating system.

You do not need any extra dependencies to run cbsh, it comes "batteries included".

macOS Users: You will need to grant the binary permissions through Security & Privacy settings the first time you run it.

After extracting the archive, run the cbsh binary in your terminal.
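
For example, on macOS or Linux this might look something like the following (the archive name is only an illustration; use the file you actually downloaded):

❯ unzip cbsh-<version>-mac-x86_64.zip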

❯ ./cbsh --version
The Couchbase Shell 0.75.1

Basic Usage

Once the binary is available, you can connect to a cluster on the fly and run a simple command to list the (user-visible) buckets.

❯ ./cbsh --connstr 127.0.0.1 -u username -p
Password:
👤 username 🏠 default in 🗄 <not set>
> buckets
───┬─────────┬───────────────┬───────────┬──────────┬──────────────────────┬───────────┬───────────────┬────────┬───────
 # │ cluster │     name      │   type    │ replicas │ min_durability_level │ ram_quota │ flush_enabled │ status │ cloud
───┼─────────┼───────────────┼───────────┼──────────┼──────────────────────┼───────────┼───────────────┼────────┼───────
 0 │ default │ beer-sample   │ couchbase │        1 │ none                 │  209.7 MB │ false         │        │ false
 1 │ default │ default       │ couchbase │        1 │ none                 │  104.9 MB │ true          │        │ false
 2 │ default │ targetBucket  │ couchbase │        0 │ none                 │  104.9 MB │ true          │        │ false
 3 │ default │ travel-sample │ couchbase │        1 │ none                 │  209.7 MB │ false         │        │ false
───┴─────────┴───────────────┴───────────┴──────────┴──────────────────────┴───────────┴───────────────┴────────┴───────

While passing in command-line arguments is fine if you want to connect quickly, using the dotfile ~/.cbsh/config for configuration is much more convenient. Here is a simple config which connects to a cluster running on localhost:

version = 1

[[cluster]]
identifier = "my-local-cb-node"
hostnames = ["127.0.0.1"]
default-bucket = "travel-sample"
username = "Administrator"
password = "password"

After the config is in place, you can run ./cbsh without any arguments and it will automatically connect to that cluster at startup.

The downloaded archive also contains an example directory with sample configuration files. Please see the docs for full guidance, including how to work with multiple clusters at the same time.
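
For example, a config managing two clusters at once might look something like this (a sketch only; the second identifier, hostname and credentials are placeholders):

version = 1

[[cluster]]
identifier = "my-local-cb-node"
hostnames = ["127.0.0.1"]
default-bucket = "travel-sample"
username = "Administrator"
password = "password"

[[cluster]]
identifier = "my-remote-cb-node"
hostnames = ["cb.example.com"]
default-bucket = "default"
username = "Administrator"
password = "password"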

cbsh commands

On top of the nushell built-in commands, the following Couchbase commands are available (a pipeline example follows the list):

  • analytics <statement> - Perform an analytics query
  • analytics dataverses - List all dataverses
  • analytics datasets - List all datasets
  • analytics indexes - List all analytics indexes
  • analytics links - List all analytics links
  • analytics buckets - List all analytics buckets
  • analytics pending-mutations - List pending mutations
  • buckets - Fetches buckets through the HTTP API
  • buckets config - Shows the bucket config (low level)
  • buckets create - Creates a bucket
  • buckets drop - Drops buckets through the HTTP API
  • buckets flush - Flushes buckets through the HTTP API
  • buckets get - Fetches a bucket through the HTTP API
  • buckets load-sample - Load a sample bucket
  • buckets update - Updates a bucket
  • cb-env - lists the currently active bucket, collection, etc.
  • cb-env bucket - Sets the active bucket based on its name
  • cb-env capella-organization - Sets the active Capella organization based on its identifier
  • cb-env cloud - Sets the active cloud based on its identifier
  • cb-env cluster - Sets the active cluster based on its identifier
  • cb-env collection - Sets the active collection based on its name
  • cb-env managed - Lists all clusters currently managed by couchbase shell
  • cb-env project - Sets the active cloud project based on its name
  • cb-env scope - Sets the active scope based on its name
  • cb-env timeouts - Sets the default timeouts
  • clouds - Lists all clouds on the active Capella organization
  • clusters - Lists all clusters on the active Capella organization
  • clusters create - Creates a new cluster against the active Capella organization
  • clusters drop - Deletes a cluster from the active Capella organization
  • clusters get - Gets a cluster from the active Capella organization
  • clusters health - Performs health checks on the target cluster(s)
  • clusters register - Registers a cluster for use with the shell
  • clusters unregister - Unregisters a cluster from use with the shell
  • collections - Fetches collections through the HTTP API
  • collections create - Creates collections through the HTTP API
  • collections drop - Removes a collection
  • doc get - Perform a KV get operation
  • doc insert - Perform a KV insert operation
  • doc remove - Removes a KV document
  • doc replace - Perform a KV replace operation
  • doc upsert - Perform a KV upsert operation
  • fake - Generate fake/mock data
  • help - Display help information about commands
  • nodes - List all nodes in the active cluster
  • ping - Ping available services in the cluster
  • projects - List all projects (cloud)
  • projects create - Create a new project (cloud)
  • projects drop - Remove a project (cloud)
  • query <statement> - Perform a N1QL query
  • query indexes - list query indexes
  • query advise - Ask the query advisor
  • use - Change the active bucket or cluster on the fly
  • scopes - Fetches scopes through the HTTP API
  • scopes create - Creates scopes through the HTTP API
  • scopes drop - Removes a scope
  • search - Runs a query against a search index
  • transactions list-atrs - List all active transaction records (requires an index - create index id3 on travel-sample(meta().id, meta().xattrs.attempts))
  • tutorial - Runs you through a tutorial of both nushell and cbshell
  • users - List all users
  • users roles - List roles available on the cluster
  • users get - Show a specific user
  • users upsert - Create a new user or replace one
  • version - Shows the version of the shell
  • whoami - Shows roles and domain for the connected user
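
Because cbsh is built on nushell, the output of these commands can be piped straight into nushell built-ins. A minimal sketch (assuming the travel-sample bucket is loaded and the bundled nushell supports where/select as shown):

> buckets | where name == "travel-sample"
> buckets | select name type ram_quota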

Building From Source

If you want to build from source, make sure you have a recent Rust version and cargo installed (ideally through rustup).

After that, you can build and/or run through cargo build / cargo run. By default it will build in debug mode, so if you want to build a binary and test the performance, make sure to include --release.
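
For example, something like:

❯ cargo build --release
❯ ./target/release/cbsh --version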

Installing as a binary through cargo

If you just want to use it and don't want to bother compiling all the time, you can use cargo install --path . to install it into your cargo bin path (run from the checked out source directory).

❯ cargo install --path .
  Installing couchbase-shell v0.75.1 (/Users/michaelnitschinger/couchbase/code/rust/couchbase-shell)
    Updating crates.io index
  Downloaded plist v1.2.1
  Downloaded onig v6.3.0
  Downloaded string_cache v0.8.2
  Downloaded num-bigint v0.4.2
  ...

Grab a quick coffee or tea since this will take some time to compile.

License

Couchbase Shell is licensed under the Apache 2.0 License.

Couchbase Shell is made possible through open source components as listed with their licenses in NOTICES.

Usage of Couchbase Shell is subject to the Couchbase Inc. License Agreement

couchbase-shell's People

Contributors

av25242, brantburnett, chvck, daschl, dependabot[bot], ingenthr, rekhads, steveyen, westwooo


couchbase-shell's Issues

Rename "kv" commands to "data" (or possible "document")

My initial reaction was:
  • it feels a bit like "inside baseball" that Steve keeps harping on 🙂
  • it creates a bit of confusion/mismatch with the "Data" service
  • it also pigeon-holes the operations into "key-value" rather than also being "document"

Doc command --bucket command fails with memcached buckets

I'm using a couchbase bucket but also have a memcached bucket. I want to run a command like doc insert --bucket memd where memd is my memcached bucket. If I do this I see something like:

> doc insert test {"test":"test"} --bucket memd
👤Administrator at 🏠local in 🗄 default
> bu ERROR couchbase::io::lcb::callbacks > <NOHOST:NOPORT> (CTX=0x0,) Could not get confi👤Administrator at 🏠local in 🗄 default
> buckets
 ERROR couchbase::io::lcb::callbacks > <NOHOST:NOPORT> (CTX=0x0,) Could not get configuration: LCB_ERR_DOCUMENT_NOT_FOUND (301)
^C ERROR couchbase::io::lcb::callbacks > <NOHOST:NOPORT> (CTX=0x0,) Could not get configuration: LCB_ERR_DOCUMENT_NOT_FOUND (301)
^C^C^C ERROR couchbase::io::lcb::callbacks > <NOHOST:NOPORT> (CTX=0x0,) Could not get configuration: LCB_ERR_DOCUMENT_NOT_FOUND (301)
 ERROR couchbase::io::lcb::callbacks > <NOHOST:NOPORT> (CTX=0x0,) Could not get configuration: LCB_ERR_DOCUMENT_NOT_FOUND (301)
 ERROR couchbase::io::lcb::callbacks > <NOHOST:NOPORT> (CTX=0x0,) Could not get configuration: LCB_ERR_DOCUMENT_NOT_FOUND (301)
 ERROR couchbase::io::lcb::callbacks > <NOHOST:NOPORT> (CTX=0x0,) Could not get configuration: LCB_ERR_DOCUMENT_NOT_FOUND (301)
 ERROR couchbase::io::lcb::callbacks > Failed to bootstrap client=0x7f82004046f0. Error=LCB_ERR_TIMEOUT (201) (Last=LCB_ERR_DOCUMENT_NOT_FOUND (301)), Message=Failed to bootstrap in time

This is the same across the doc commands.

artifacts could have a better name

Right now, when I look at artifacts produced by actions, they have names like "MacOS.zip". Maybe "cbshell-$ver-MacOS.zip" would be better?

emailable rc files

Can't do this currently because the credentials are also in there. Need a merge of some files that have creds and config.

May want to consider having it versioned somehow.

cbsh 0.4.0 panics with ~/.cbsh/config using same value for bucket name and username

cbsh (and couchbase) seem pretty cool. I'm new to both (but have followed nushell sporadically).

This issue might be a dupe. I've seen a few previous ones related to buckets, but they seem like variants of mine so I thought I'd capture this. If it is a dupe, mark it as such.

When I state a "starting bucket" at the command line, it's not respected:

$ cbsh --bucket mcarifio
👤 mcarifio at 🏠 default
> use
───┬─────────┬───────────
 # │ cluster │  bucket   
───┼─────────┼───────────
 0 │ default │ <not set> 
───┴─────────┴───────────
👤 mcarifio at 🏠 default
> use bucket mcarifio
───┬──────────
 # │  bucket  
───┼──────────
 0 │ mcarifio 
───┴──────────
👤 mcarifio at 🏠 default in 🗄  mcarifio
> use
───┬─────────┬──────────
 # │ cluster │  bucket  
───┼─────────┼──────────
 0 │ default │ mcarifio 
───┴─────────┴──────────

If I state the default-bucket in ~/.cbsh/config:

version = 1
[clusters.default]
hostnames = ["127.0.0.1"]
# panics cbsh if set?
default-bucket = "mcarifio"
username = "mcarifio"

and run:

cbsh
👤 mcarifio at 🏠 default in 🗄  mcarifio
>  ERROR couchbase::io::lcb::callbacks > The instance has been associated with the bucket already, sorry
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: 203', /home/runner/.cargo/git/checkouts/couchbase-rs-9bae2babc4d89f61/3a0f8d9/couchbase/src/io/lcb/mod.rs:111:54
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

👤 mcarifio at 🏠 default in 🗄  mcarifio
> use
───┬─────────┬──────────
 # │ cluster │  bucket  
───┼─────────┼──────────
 0 │ default │ mcarifio 
───┴─────────┴──────────

> nodes
thread 'main' panicked at 'Could not send request: "SendError(..)"', /home/runner/.cargo/git/checkouts/couchbase-rs-9bae2babc4d89f61/3a0f8d9/couchbase/src/io/lcb/mod.rs:54:14
thread 'main' panicked at 'Failure while shutting down!: "SendError(..)"', /home/runner/.cargo/git/checkouts/couchbase-rs-9bae2babc4d89f61/3a0f8d9/couchbase/src/io/lcb/mod.rs:74:14
stack backtrace:

For now, the workaround is just to use bucket <bucket> in the shell itself. I'm fine with that.

$ cbsh  # no bucket
👤 mcarifio at 🏠 default
> nodes
───┬─────────┬────────────────┬─────────┬──────────────────────────┬──────────────────────┬──────────────────────────┬──────────────┬─────────────
 # │ cluster │    hostname    │ status  │         services         │       version        │            os            │ memory_total │ memory_free 
───┼─────────┼────────────────┼─────────┼──────────────────────────┼──────────────────────┼──────────────────────────┼──────────────┼─────────────
 0 │ default │ 127.0.0.1:8091 │ healthy │ search,indexing,kv,query │ 6.6.0-7909-community │ x86_64-unknown-linux-gnu │      67.4 GB │     38.9 GB 
───┴─────────┴────────────────┴─────────┴──────────────────────────┴──────────────────────┴──────────────────────────┴──────────────┴─────────────
👤 mcarifio at 🏠 default
> use bucket mcarifio
───┬──────────
 # │  bucket  
───┼──────────
 0 │ mcarifio 
───┴──────────
👤 mcarifio at 🏠 default in 🗄  mcarifio
> nodes
───┬─────────┬────────────────┬─────────┬──────────────────────────┬──────────────────────┬──────────────────────────┬──────────────┬─────────────
 # │ cluster │    hostname    │ status  │         services         │       version        │            os            │ memory_total │ memory_free 
───┼─────────┼────────────────┼─────────┼──────────────────────────┼──────────────────────┼──────────────────────────┼──────────────┼─────────────
 0 │ default │ 127.0.0.1:8091 │ healthy │ search,indexing,kv,query │ 6.6.0-7909-community │ x86_64-unknown-linux-gnu │      67.4 GB │     38.9 GB 
───┴─────────┴────────────────┴─────────┴──────────────────────────┴──────────────────────┴───────

I don't know enough couch, rust, nushell or cbsh to hazard any hypothesis about why.

Ty for cbsh. I'll continue to explore it.

-p should not allow password

Since command-line arguments are visible to other users on a UNIX-like system (and probably Windows too), -p should not take the password as an argument. It should either be "-p -", meaning read the password from stdin before starting, or just -p, prompting for the password.

Loop support with fake and tera

Based on tera, I tried this…

👤ingenthr at 🏠local in 🗄 default
> fake --template order-wip.tera --num-rows 5 | get content | get order_items
error: Error: expected `,` or `]` at line 11 column 26

👤ingenthr at 🏠local in 🗄 default
> open order-wip.tera
{
    "id": "{{ uuid() }}",
    "content": {
        "order_by": "{{ userName() }}",
        "address": "{{ numberWithFormat(format='^####') ~ ' ' ~ 
                       streetName() ~ '\n' ~
                       cityName() ~ ', ' ~
                       stateAbbr() ~ ' ' ~
                       postCode()  }}",
        "order_items": [
                         {% for i in range(end=5) %}
                         { "qty": "{{ numberWithFormat(format='^#')}}",
                           "item": "{{ words(num=3) }}" }
                         {% endfor %}
                       ],
        "type": "order"
    }
}

But it does not work. There does not seem to be looping support?
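
That said, the error reads like a JSON parse error on the rendered output rather than a Tera error, so the loop may actually be running and the real problem may be the missing commas between the generated array elements. A possible fix, sketched on the assumption that Tera's loop.last is available in this context:

                         {% for i in range(end=5) %}
                         { "qty": "{{ numberWithFormat(format='^#') }}",
                           "item": "{{ words(num=3) }}" }{% if not loop.last %},{% endif %}
                         {% endfor %}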

Cloud shell

  • Users shouldn't need to edit connection strings in the config; they should be able to set high-level options such as whether or not to use TLS and the like.
  • Identify the right timeouts for good UX, default to those
  • Handle CA certs better via configuration

Expose kv stats

Add a kv stats subcommand which calls into libcouchbase and runs the stats command (might be able to take additional data to filter down?), but this needs couchbase-rs support first

give a sane view of local couchbase log files?

If I'm running couchbase server locally and using cbsh... would love to have a cbsh command that...

  • opens the local couchbase server log files

  • and merges them by timestamp

  • optionally like tail -f to follow what's going on

so that I can grep through the trail of evidence using cbsh.

  • and, fantasy -- cbsh can optionally apply to the ongoing log stream some library of well-tended diagnostic patterns or recipes to tell me why the #@#$*&^ my app is getting that inscrutable error and why that API ain't working right for me.

  • and, fantasy x2 -- tell me some hints of what I probably did wrong as an app developer with the API.

Improve doc error messages

I'm doing a doc insert on a document and I can see that it fails but I have no idea why. The actual reason is that it already exists but there's no indication of that.

> doc insert testy {"test":"test"} --expiry 10
───┬───────────┬─────────┬────────
 # │ processed │ success │ failed
───┼───────────┼─────────┼────────
 0 │         1 │       0 │      1
───┴───────────┴─────────┴────────
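
In the meantime, one quick way to confirm this particular cause is to check whether the key already exists before inserting (a sketch using the key from the example above):

> doc get testy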

panic in config

I was experimenting with different configurations. This one…

version = 1

[clusters.local]
hostnames = ["127.0.0.1"]
default-bucket = "default"

[clusters.mycloud]
hostnames = ["0cb55b31-d7ac-4ba3-8299-4d2cbc75f2df.dp.cloud.couchbase.com"]
default-bucket = "travel-sample"
cert-path = "/Users/ingenthr/.cbsh/ingenthr-test-ingenthr-cluster-20200918-root-certificate.pem"

[clusters.hometest6_6]
hostnames = ["centos7lx-1.home.ingenthron.org"]
default-bucket = "travel-sample"

#[clusters.home-test-CC]
#hostnames = ["centos7lx-1.home.ingenthron.org"]
#default-bucket = "travel-sample"

#[clusters.home-test-6_5]
#hostnames = ["centos7lx-1.home.ingenthron.org"]
#default-bucket = "travel-sample"

… leads to a panic.

ingenthr-mbp:Downloads ingenthr$ RUST_BACKTRACE=1 ./cbsh
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', src/config.rs:127:44
stack backtrace:
   0: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
   1: core::fmt::write
   2: std::io::Write::write_fmt
   3: std::panicking::default_hook::{{closure}}
   4: std::panicking::default_hook
   5: std::panicking::rust_panic_with_hook
   6: rust_begin_unwind
   7: core::panicking::panic_fmt
   8: core::panicking::panic
   9: core::option::Option<T>::unwrap
  10: cbsh::config::ClusterConfig::username
  11: cbsh::main::{{closure}}
  12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  13: tokio::runtime::enter::Enter::block_on::{{closure}}
  14: tokio::coop::with_budget::{{closure}}
  15: std::thread::local::LocalKey<T>::try_with
  16: std::thread::local::LocalKey<T>::with
  17: tokio::runtime::enter::Enter::block_on
  18: tokio::runtime::thread_pool::ThreadPool::block_on
  19: tokio::runtime::Runtime::block_on::{{closure}}
  20: tokio::runtime::context::enter
  21: tokio::runtime::handle::Handle::enter
  22: tokio::runtime::Runtime::block_on
  23: cbsh::main
  24: std::rt::lang_start::{{closure}}
  25: std::rt::lang_start_internal
  26: std::rt::lang_start
  27: main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
ingenthr-mbp:Downloads ingenthr$ RUST_BACKTRACE=1 ./cbsh --version
The Couchbase Shell 1.0.0-dev

The version is whatever I could download yesterday. Appears to be cbsh-4ca59aa5-mac-x86_64.zip in the macOS.zip :)

Doc get --flatten only flattens top level

When using doc get <id> --flatten, only the top level of the content is flattened. If one of the columns contains a nested row, that data is not flattened into the top level.

buckets listing does not work with couchbases:// scheme

It looks like "buckets" is using HTTP, and rather than using the base URI from the config, it's trying to figure it out on it's own. Probably better to use the base URI.

This means it won't work with TLS at the moment.

❯ clusters 
───┬────────┬────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────
 # │ active │ identifier │ connstr                                                                                          │ username 
───┼────────┼────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────
 0 │ No     │ mycloud    │ couchbases://cb.11dd1906-8075-4764-b2ed-                                                         │ matt 
   │        │            │ 61b567af5ab6.dp.cloud.couchbase.com?certpath=/Users/ingenthr/src/couchbase-shell/mattwashere.pem │  
 1 │ Yes    │ cluster1   │ couchbase://centos7lx-1.home.ingenthron.org                                                      │ Administrator 
 2 │ No     │ cluster2   │ couchbase://localhost                                                                            │ Administrator 
───┴────────┴────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────
couchbase-shell on 📙 master is 📦 v0.0.2 via 🦀 v1.41.0 
❯ clusters --activate mycloud
───┬────────┬────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────
 # │ active │ identifier │ connstr                                                                                          │ username 
───┼────────┼────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────
 0 │ Yes    │ mycloud    │ couchbases://cb.11dd1906-8075-4764-b2ed-                                                         │ matt 
   │        │            │ 61b567af5ab6.dp.cloud.couchbase.com?certpath=/Users/ingenthr/src/couchbase-shell/mattwashere.pem │  
 1 │ No     │ cluster1   │ couchbase://centos7lx-1.home.ingenthron.org                                                      │ Administrator 
 2 │ No     │ cluster2   │ couchbase://localhost                                                                            │ Administrator 
───┴────────┴────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────
couchbase-shell on 📙 master is 📦 v0.0.2 via 🦀 v1.41.0 
❯ buckets
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: reqwest::Error { kind: Request, url: "http://couchbases//cb.11dd1906-8075-4764-b2ed-61b567af5ab6.dp.cloud.couchbase.com?certpath=/Users/ingenthr/src/couchbase-shell/mattwashere.pem:8091/pools/default/buckets", source: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Other, error: "failed to lookup address information: nodename nor servname provided, or not known" })) }', src/libcore/result.rs:1188:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
ingenthr-mbp:couchbase-shell ingenthr$ echo $LCB_LOGLEVEL
5
ingenthr-mbp:couchbase-shell ingenthr$ export LCB_LOGLEVEL=5 && target/debug/cbsh
couchbase-shell on 📙 master is 📦 v0.0.2 via 🦀 v1.41.0 
❯ clusters
───┬────────┬────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────
 # │ active │ identifier │ connstr                                                                                          │ username 
───┼────────┼────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────
 0 │ Yes    │ cluster1   │ couchbase://centos7lx-1.home.ingenthron.org                                                      │ Administrator 
 1 │ No     │ cluster2   │ couchbase://localhost                                                                            │ Administrator 
 2 │ No     │ mycloud    │ couchbases://cb.11dd1906-8075-4764-b2ed-                                                         │ matt 
   │        │            │ 61b567af5ab6.dp.cloud.couchbase.com?certpath=/Users/ingenthr/src/couchbase-shell/mattwashere.pem │  
───┴────────┴────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────
couchbase-shell on 📙 master is 📦 v0.0.2 via 🦀 v1.41.0 
❯ buckets
───┬───────────────┬─────────
 # │ name          │ type 
───┼───────────────┼─────────
 0 │ beer-sample   │ membase 
 1 │ travel-sample │ membase 
───┴───────────────┴─────────
couchbase-shell on 📙 master is 📦 v0.0.2 via 🦀 v1.41.0 
❯ kv-get foo
error: Error: Could not auto-select a bucket - please use --bucket instead
couchbase-shell on 📙 master is 📦 v0.0.2 via 🦀 v1.41.0 
❯ kv-get --bucket beer-sample foo 
couchbase-shell on 📙 master is 📦 v0.0.2 via 🦀 v1.41.0 
❯ 

Add support for scopes/collections

  • analytics.rs (possibly doable via the statement itself, I think)
  • doc_get.rs
  • doc_insert.rs
  • doc_remove.rs
  • doc_replace.rs
  • doc_upsert.rs
  • query.rs
  • collections management

Create site for project

Should have

  • Home page, describes what it is
  • Documentation, link to HTML generated from asciidoc
  • Link to recipes on wiki

Add either a setting or command to get query metrics

At the moment, when one issues a query, there are no metrics for how long the query took to execute or how many rows it returned. Since this may get in the way of the pipeline, perhaps it'd be fine to stash the metrics or other out-of-rows metadata somewhere for fetching with another command.

query "SELECT foo FROM bar LIMIT 1"
  (…pipeline renders table)
query lastmetrics
  (…pipeline renders table)

There might be better solutions too, like a setting for including metrics in the pipeline (which I think would generate another output), or a flag plus a setting.

Commands supporting the --clusters flag can regex-match extra clusters when the flag is not used

If I have a config which looks like:

[clusters.local]
hostnames = ["localhost"]
default-bucket = "default"
# The following can be part of the config or credentials
# cert-path = "/Users/charlesdixon/dev/couchbase-shell/.cbsh/certs/ca.pem"
username = "Administrator"
password = "password"
data-timeout = "2500ms"
connect-timeout = "7500ms"
query-timeout = "75s"

[clusters.notlocalbutisreally]
hostnames = ["10.112.210.101"]
default-bucket = "default"
# The following can be part of the config or credentials
# cert-path = "/Users/charlesdixon/dev/couchbase-shell/.cbsh/certs/ca.pem"
username = "Administrator"
password = "password"

Take for example the buckets command:

👤 Administrator at 🏠 local in 🗄  default
> use
───┬─────────┬─────────
 # │ cluster │ bucket
───┼─────────┼─────────
 0 │ local   │ default
───┴─────────┴─────────
👤 Administrator at 🏠 local in 🗄  default
> buckets
───┬───────────────┬───────────────┬───────────┬──────────┬───────────────┬─────────────
 # │    cluster    │     name      │   type    │ replicas │ quota_per_nod │ quota_total
   │               │               │           │          │       e       │
───┼───────────────┼───────────────┼───────────┼──────────┼───────────────┼─────────────
 0 │ notlocalbutis │ default       │ couchbase │        1 │      104.9 MB │    104.9 MB
   │ really        │               │           │          │               │
 1 │ notlocalbutis │ secBucket     │ ephemeral │        1 │      432.0 MB │    432.0 MB
   │ really        │               │           │          │               │
 2 │ local         │ barry         │ couchbase │        1 │      222.3 MB │    222.3 MB
 3 │ local         │ beer-sample   │ couchbase │        1 │      104.9 MB │    104.9 MB
 4 │ local         │ default       │ couchbase │        1 │      104.9 MB │    104.9 MB
 5 │ local         │ travel-sample │ couchbase │        1 │      104.9 MB │    104.9 MB
───┴───────────────┴───────────────┴───────────┴──────────┴───────────────┴─────────────
👤 Administrator at 🏠 local in 🗄  default
> buckets --clusters local
───┬───────────────┬───────────────┬───────────┬──────────┬───────────────┬─────────────
 # │    cluster    │     name      │   type    │ replicas │ quota_per_nod │ quota_total
   │               │               │           │          │       e       │
───┼───────────────┼───────────────┼───────────┼──────────┼───────────────┼─────────────
 0 │ notlocalbutis │ default       │ couchbase │        1 │      104.9 MB │    104.9 MB
   │ really        │               │           │          │               │
 1 │ notlocalbutis │ secBucket     │ ephemeral │        1 │      432.0 MB │    432.0 MB
   │ really        │               │           │          │               │
 2 │ local         │ barry         │ couchbase │        1 │      222.3 MB │    222.3 MB
 3 │ local         │ beer-sample   │ couchbase │        1 │      104.9 MB │    104.9 MB
 4 │ local         │ default       │ couchbase │        1 │      104.9 MB │    104.9 MB
 5 │ local         │ travel-sample │ couchbase │        1 │      104.9 MB │    104.9 MB
───┴───────────────┴───────────────┴───────────┴──────────┴───────────────┴─────────────

Even when the --clusters flag isn't in use, it's doing a regex match against "local", so it also matches "notlocalbutisreally".

synchronize log output

A log error came up from lcb at the same time cbsh was writing to stdout, which leads to confusing interleaved text.


Since lcb has a way to get logging output, that should probably be surfaced through rust/cbsh and displayed appropriately inline to be more user friendly.

improve support for import from files

Currently, there is a recipe for importing json files in a particular format. That works well, but unfortunately the format isn't always what one would like it to be. Maybe this can be mixed in with something like jq?

One thing I tried, for example, was unzipping the travel-sample database and trying to reimport it. The format didn't quite match. I tried a few things listing all of the .json files and piping them through built-ins to reformat, but it seemed to be trying to do all of this in memory, so I killed it.

Scenario:
Given a set of files in a directory that are in JSON format, as a user I want to be able to import them. If they are not in the right format, I want to be able to massage them easily into the right format. For example, the file is likely what I really want to go into the 'content' field that kv-upsert expects, but I may want to extract a field from that to be the ID. Or, I may want the ID to be a concatenation of two or more fields. Or, I may want the ID to be a sequence, possibly concatenated with a field in the file.

This may be doable with nushell built-ins, but it is complicated enough that it wasn't obvious in a few minutes.

One possible implementation: allow for a file that defines reformatting rules that are passed in to kv-upsert. Or introduce another command like kv-import.

Another solution which we could play with is jq streaming edits, which seems to be a thing, but not something I'm familiar with.

Add more clusters health features

  • make the checks have an identifier
  • add a list-checks command
  • add a description column
  • allow disabling a check with --disable on the args list

after that:

  • add more commands!

Add eventing management functions

From the shell, be able to list eventing functions, then add and remove them.

To add this, we need underlying functionality in libcouchbase and in couchbase-rs.

unable to run in windows

Hi,

I just downloaded cbsh-0.4.0-windows and extracted it. When I try to run it, it throws the errors shown in the attached screenshots.


Am I missing any steps?

Show user friendly error message

After installing cbshell and with no instance of CB Server running, we see the following error

 ERROR couchbase::io::lcb::callbacks > <localhost:11210> (SOCK=1808f587c2061ced) Failed to establish connection: LCB_ERR_NETWORK (1048), os errno=61
 ERROR couchbase::io::lcb::callbacks > <NOHOST:NOPORT> (CTX=0x0,) Could not get configuration: LCB_ERR_NETWORK (1048)
 ERROR couchbase::io::lcb::callbacks > <localhost:8091> (SOCK=300b98af746435ff) Failed to establish connection: LCB_ERR_NETWORK (1048), os errno=61
 ERROR couchbase::io::lcb::callbacks > Connection to REST API failed with LCB_ERR_NETWORK (1048) (os errno = 61)
 ERROR couchbase::io::lcb::callbacks > Failed to bootstrap client=0x7ff6bf604230. Error=LCB_ERR_NETWORK (1048), Message=No more bootstrap providers remain
thread ‘<unnamed>’ panicked at ‘called `Result::unwrap()` on an `Err` value: 1048’, /Users/arun.vijayraghavan/.cargo/git/checkouts/couchbase-rs-9bae2babc4d89f61/79730a4/couchbase/src/io/lcb/mod.rs:88:20
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

It would be nice if the error message displayed were more verbose, something like "could not connect to localhost".

Change help msg first line?

When using the help command, it says...

> help
Welcome to Nushell.

Perhaps it should instead say "Welcome to couchbase.sh (powered by Nushell)" or equivalent to be less confusing?

cannot express config with alternate ports

Scenario:

I'm trying to run both Couchbase 6.5 and 6.6 as single-node docker containers across the network. Obviously, only one can use the canonical port of 11210. In my current case, I have 6.5 with docker port mapping 11210 -> 11310 and 6.6 using 11210 -> 11210.

What I found with the following config was that even though I had a port number in there, it would ignore the port number and use port 11210 regardless. This had the effect of doc upsert … going to the wrong cluster.

Observed behavior:
No way to provide a port number.

Expected behavior:
There is a way to provide a port number.

Recommendations:
Earlier we allowed a connstr. I think it would be good to do that again somehow since it allows simple expression of a lot of detail. Right now, one cannot.

version = 1

[clusters.local]
hostnames = ["127.0.0.1"]
default-bucket = "default"

#[clusters."cbcloud.test"]
#hostnames = ["0cb55b31-d7ac-4ba3-8299-4d2cbc75f2df.dp.cloud.couchbase.com"]
#default-bucket = "travel-sample"
#cert-path = "/Users/ingenthr/.cbsh/ingenthr-test-ingenthr-cluster-20200918-root-certificate.pem"

[clusters."test-6.6"]
hostnames = ["dnix.home.ingenthron.org"]
default-bucket = "default"

#[clusters."test-CC"]
#hostnames = ["centos7lx-1.home.ingenthron.org"]
#default-bucket = "travel-sample"

[clusters."test-6.5"]
hostnames = ["dnix.home.ingenthron.org:11310"]
default-bucket = "default"

Aside: it may also be handy to know via some display if the hostname was resolved as DNS SRV or DNS A/AAAA.

kv insert creates doc with string instead of object

kv insert is inserting the JSON I give it as a string. Example:

kv insert 999 {"foo":"bar"}

This results in a document containing the string "{"foo":"bar"}" instead of an object.


It's possible I'm using it wrong (maybe I need to specify a format, in which case kv insert -h should say how to do that), but on the other hand, kv upsert seems to understand that I'm giving it JSON.

kv upsert 998 {"foo":"bar"}


Improve documentation on the fake command

I found myself wanting a city; searching the tera docs will possibly turn up city(), but in our case it depends on fake, which has its own functions registered. It was really hard to figure this out.

We should figure out where to document the functions registered in this repo, or point to authoritative documentation if we're taking it all.

cbsh shows SDK code snippet equivalents?

When I run a command in cbsh... can I run a follow-up command that shows me how to do what cbsh just did in my favorite language and Couchbase SDK?

Or, if you're a N1QL person, perhaps... :-)

EXPLAIN AS SDK EXAMPLE cbsh command here
