spruceid / didkit
A cross-platform toolkit for decentralized identity.
Home Page: https://www.spruceid.dev/didkit/didkit
License: Apache License 2.0
Flutter plugins can target Web, as well as Android and iOS:
https://flutter.dev/docs/development/packages-and-plugins/developing-packages#web
Credible has an interface for DIDKit that abstracts over DIDKit's native Dart package and the JS/WASM library:
https://github.com/spruceid/credible/blob/main/lib/app/interop/didkit/didkit.dart
Could this functionality be merged "upstream" into DIDKit's Flutter/Dart plugin? It could be useful for other applications.
Commits from #49 were rebased onto #50, but this breaks correspondence between the v0.1.0 release commit, the changelog, and the initial release blog post. Also, the commit merged in #50 lacks author/committer info (the commit author is "ec2-user"). I propose to rewrite the last three commits on main, rebasing #50 onto the original #49. @wyc would this be okay? @theosirian can I amend your commit from #50 to show authorship as in your other commits?
To create a DID from a key pair in the demos from the documentation, the following command is used:
didkit key-to-did-key -k issuer_key.jwk
How can a did:web DID be created?
I tried using the following, which resulted in errors:
didkit key-to-did-web -k issuer_key.jwk
didkit key-to-did web -k issuer_key.jwk
Spruce VC-HTTP-API Test Report (demo server): https://w3c-ccg.github.io/vc-http-api/test-suite/spruce/
- ssi: spruceid/ssi#118
- ssi: spruceid/ssi#135
- verificationMethod option in issueCredential: #184
- didkit-http: #102
- did:web:vc.transmute.world (issuer of verifiableCredentials/case-5.json) publicKey array: spruceid/ssi#128
- Ed25519Signature2018: spruceid/ssi#121
- didkit-http instance
As per #36, we should also consider what the HTTP changes could look like, perhaps borrowing the RESTful interfaces from https://github.com/decentralized-identity/universal-resolver to ensure compatibility.
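For reference, the Universal Resolver exposes resolution as a single GET endpoint. Below is a minimal sketch of building such a request URL; the `/1.0/identifiers/` path prefix follows the public universal-resolver deployment, and `resolver_url` is an illustrative helper, not an existing DIDKit function:

```rust
// Sketch of the Universal Resolver style HTTP(S) binding: a DID is resolved
// with GET <endpoint>/1.0/identifiers/<did>. The path prefix here is an
// assumption based on the universal-resolver project, not DIDKit's API.
fn resolver_url(endpoint: &str, did: &str) -> String {
    format!("{}/1.0/identifiers/{}", endpoint.trim_end_matches('/'), did)
}

fn main() {
    assert_eq!(
        resolver_url("https://dev.uniresolver.io", "did:web:example.com"),
        "https://dev.uniresolver.io/1.0/identifiers/did:web:example.com"
    );
}
```

Adopting the same path shape would let DIDKit's HTTP server slot in as a resolver driver without translation.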
For distributing DIDKit in applications and for publishing it to non-native package managers, we should include a list of the dependency licenses and copyright notices. License info of cargo dependencies may be gathered e.g. using cargo-about, cargo-license, or cargo-lichking, or by parsing the output of cargo tree or cargo metadata. This could be added to a text file that would be included in the didkit npm package and flutter/dart plugin. Also, NOTICE files from dependencies using Apache License, Version 2.0 should be collected and added as required by the Apache license, maybe semi-manually since there are not many of these. Also, we should probably include licenses for Rust core (libstd, etc.), but it is a little unclear how to do this currently: rust-lang/rust#67014
Publish didkit v0.2.0 to crates.io.
- ssi v0.2.0 (and workspace crates) to crates.io: spruceid/ssi#136
- ssi using registry versions alongside path dependency
- v0.1.0: #123
- didkit ready to publish (cargo publish --dry-run succeeds): #139
- didkit published to crates.io: #144
- didkit using registry version alongside path dependency
- neon-serde (fork) in didkit-node, or skip publishing didkit-node (maybe more important this one is published with npm than to crates.io?): skipping; didkit-node and/or didkit-wasm can go to an npm registry instead.
As per #62, a user was no longer able to follow the instructions to simply pull the repository and run cargo build. It would be great to add this as a testing requirement in our CI. I'm not sure how this interacts with our push to crates.io; perhaps that service has ways to alleviate things too.
Is your feature request related to a problem? Please describe.
I desire a correlation-resistant endpoint.
Describe the solution you'd like
Support did:onion
and your other DIDs over a TorGap.
Describe alternatives you've considered
DID-BTCR is correlation resistant if you communicate with bitcoin-core over a TorGap. We have implemented Bitcoin Standup Scripts (Linux & VPS) and Gordian Server that do this.
The http-jni-docs branch fails to build on macOS Catalina with the following rustc version:
% rustc --version
rustc 1.50.0-nightly (da3846948 2020-11-21)
git clone git@github.com:spruceid/ssi.git
git checkout http-jni-docs
cargo build
didkit % cargo build
warning: unused import: `Statement`
--> /Users/wayne/work/ssi/src/jsonld.rs:6:5
|
6 | Statement,
| ^^^^^^^^^
|
= note: `#[warn(unused_imports)]` on by default
warning: unused variable: `identifier`
--> /Users/wayne/work/ssi/src/jsonld.rs:69:9
|
69 | let identifier = match identifier {
| ^^^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_identifier`
|
= note: `#[warn(unused_variables)]` on by default
warning: unused variable: `node_map`
--> /Users/wayne/work/ssi/src/jsonld.rs:329:5
|
329 | node_map: &NodeMap,
| ^^^^^^^^ help: if this is intentional, prefix it with an underscore: `_node_map`
warning: unused variable: `dataset`
--> /Users/wayne/work/ssi/src/jsonld.rs:330:5
|
330 | dataset: &mut DataSet,
| ^^^^^^^ help: if this is intentional, prefix it with an underscore: `_dataset`
warning: unused variable: `options`
--> /Users/wayne/work/ssi/src/jsonld.rs:331:5
|
331 | options: Option<&JsonLdOptions>,
| ^^^^^^^ help: if this is intentional, prefix it with an underscore: `_options`
warning: field is never read: `produce_generalized_rdf`
--> /Users/wayne/work/ssi/src/jsonld.rs:23:5
|
23 | produce_generalized_rdf: Option<bool>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: `#[warn(dead_code)]` on by default
warning: field is never read: `rdf_direction`
--> /Users/wayne/work/ssi/src/jsonld.rs:24:5
|
24 | rdf_direction: Option<RdfDirection>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
warning: 7 warnings emitted
Compiling didkit_cbindings v0.1.0 (/Users/wayne/work/didkit/lib/cbindings)
Compiling didkit-http v0.0.1 (/Users/wayne/work/didkit/http)
error: future cannot be sent between threads safely
--> http/src/lib.rs:164:9
|
164 | / Box::pin(async move {
165 | | let body_reader = hyper::body::aggregate(req).await?.reader();
166 | | let issue_req: IssueCredentialRequest = match serde_json::from_reader(body_reader) {
167 | | Ok(reader) => reader,
... |
188 | | .map_err(|err| err.into())
189 | | })
| |__________^ future created by async block is not `Send`
|
= help: the trait `Send` is not implemented for `(dyn StdError + 'static)`
note: future is not `Send` as this value is used across an await
--> http/src/lib.rs:178:28
|
177 | Err(err) => {
| --- has type `didkit::Error` which is not `Send`
178 | return Self::response(StatusCode::INTERNAL_SERVER_ERROR, err.to_string())
| ____________________________^
179 | | .await;
| |______________________________^ await occurs here, with `err` maybe used later
180 | }
| - `err` is later dropped here
= note: required for the cast to the object type `dyn Future<Output = std::result::Result<Response<Body>, error::Error>> + Send`
error: future cannot be sent between threads safely
--> http/src/lib.rs:164:9
|
164 | / Box::pin(async move {
165 | | let body_reader = hyper::body::aggregate(req).await?.reader();
166 | | let issue_req: IssueCredentialRequest = match serde_json::from_reader(body_reader) {
167 | | Ok(reader) => reader,
... |
188 | | .map_err(|err| err.into())
189 | | })
| |__________^ future created by async block is not `Send`
|
= help: the trait `Sync` is not implemented for `(dyn StdError + 'static)`
note: future is not `Send` as this value is used across an await
--> http/src/lib.rs:178:28
|
178 | return Self::response(StatusCode::INTERNAL_SERVER_ERROR, err.to_string())
| ____________________________^
179 | | .await;
| |______________________________^ first, await occurs here, with `err` maybe used later...
note: `err` is later dropped here
--> http/src/lib.rs:179:31
|
178 | return Self::response(StatusCode::INTERNAL_SERVER_ERROR, err.to_string())
| --- has type `&didkit::Error` which is not `Send`
179 | .await;
| ^
help: consider moving this into a `let` binding to create a shorter lived borrow
--> http/src/lib.rs:178:78
|
178 | return Self::response(StatusCode::INTERNAL_SERVER_ERROR, err.to_string())
| ^^^^^^^^^^^^^^^
= note: required for the cast to the object type `dyn Future<Output = std::result::Result<Response<Body>, error::Error>> + Send`
error: future cannot be sent between threads safely
--> http/src/lib.rs:246:9
|
246 | / Box::pin(async move {
247 | | let body_reader = hyper::body::aggregate(req).await?.reader();
248 | | let issue_req: ProvePresentationRequest = match serde_json::from_reader(body_reader) {
249 | | Ok(reader) => reader,
... |
270 | | .map_err(|err| err.into())
271 | | })
| |__________^ future created by async block is not `Send`
|
= help: the trait `Send` is not implemented for `(dyn StdError + 'static)`
note: future is not `Send` as this value is used across an await
--> http/src/lib.rs:260:28
|
259 | Err(err) => {
| --- has type `didkit::Error` which is not `Send`
260 | return Self::response(StatusCode::INTERNAL_SERVER_ERROR, err.to_string())
| ____________________________^
261 | | .await;
| |______________________________^ await occurs here, with `err` maybe used later
262 | }
| - `err` is later dropped here
= note: required for the cast to the object type `dyn Future<Output = std::result::Result<Response<Body>, error::Error>> + Send`
error: future cannot be sent between threads safely
--> http/src/lib.rs:246:9
|
246 | / Box::pin(async move {
247 | | let body_reader = hyper::body::aggregate(req).await?.reader();
248 | | let issue_req: ProvePresentationRequest = match serde_json::from_reader(body_reader) {
249 | | Ok(reader) => reader,
... |
270 | | .map_err(|err| err.into())
271 | | })
| |__________^ future created by async block is not `Send`
|
= help: the trait `Sync` is not implemented for `(dyn StdError + 'static)`
note: future is not `Send` as this value is used across an await
--> http/src/lib.rs:260:28
|
260 | return Self::response(StatusCode::INTERNAL_SERVER_ERROR, err.to_string())
| ____________________________^
261 | | .await;
| |______________________________^ first, await occurs here, with `err` maybe used later...
note: `err` is later dropped here
--> http/src/lib.rs:261:31
|
260 | return Self::response(StatusCode::INTERNAL_SERVER_ERROR, err.to_string())
| --- has type `&didkit::Error` which is not `Send`
261 | .await;
| ^
help: consider moving this into a `let` binding to create a shorter lived borrow
--> http/src/lib.rs:260:78
|
260 | return Self::response(StatusCode::INTERNAL_SERVER_ERROR, err.to_string())
| ^^^^^^^^^^^^^^^
= note: required for the cast to the object type `dyn Future<Output = std::result::Result<Response<Body>, error::Error>> + Send`
error: aborting due to 4 previous errors
error: could not compile `didkit-http`
To learn more, run the command again with --verbose.
didkit % rustc --version
rustc 1.50.0-nightly (da3846948 2020-11-21)
Unfortunately, just as we were passing our tests, it seems that the spec has changed. Fortunately the changes don't look too drastic: only URL component ordering.
Adapt spruceid/ssi#34
Answer the following questions:
This would prevent a double request in the case of verificationMethod not being passed, as per #36
As mentioned in: #6 (comment)
Test Suite added in w3c-ccg/vc-api#65
Since ssi and its DID method crates are now on crates.io, we could depend on them without needing to use relative paths and requiring the ssi repo to be checked out alongside DIDKit. Should we do that, to simplify development of DIDKit so that ssi does not need to be separately cloned? Or would this make it unnecessarily harder to develop ssi and DIDKit in tandem?
As discussed in #139 (comment)
https://github.com/spruceid/didkit/runs/1842461977#step:6:437
The build completed earlier with hyper v0.14.2:
https://github.com/spruceid/didkit/runs/1842355682#step:6:434
I am unable to reproduce the build failure locally with nightly-2021-02-04.
Reported in hyperium/hyper#2421
We should be assuming that the user doesn't have anything but the base operating system, and we should recommend what packages, JDK versions, Java compiler, etc. are necessary to get going.
If we need more requirements for didkit itself within /lib, then that's where the documentation should live.
Is it possible to update the main README with these instructions? Thanks!
Modularity of DID methods is brought up in #23.
We could add a DID Method Rust trait in SSI. Then implement it in crates for each DID method.
Trait methods could include DID Method Operations - Create (Register), Read/Verify (Resolve), Update, and Deactivate. It could also include a from_jwk method, to take the place of the current JWK::to_did_key and proposed JWK::to_did_tezos.
There could perhaps be a fallback implementation for read/resolution using a DID Resolver HTTP(S) binding (e.g. Universal Resolver).
To load DID Method trait implementations into DIDKit, a somewhat simple option would be to depend on them in DIDKit using features / optional dependencies. We could also consider loading implementations at runtime or programmatically.
Runtime loading could use dynamic libraries as plugins, like in this article: https://michael-f-bryan.github.io/rust-ffi-guide/dynamic_loading.html. DIDKit CLI and HTTP could look for dynamic libraries on the filesystem and load DID method implementations from them.
Programmatic usage could allow Rust users to bring their own DID method trait implementations and register them in SSI/DIDKit at runtime for later usage. For FFIs, these implementations could be referenced with C ABI symbols like with dynamic libraries, and/or we could enable implementing the trait with the respective native languages.
In addition to DID methods, we could also consider modularizing other things that have well-defined interfaces and/or registries of specifications, e.g.:
We already have a trait for linked data proof types. A plugin or module system could allow loading multiple trait implementations from dynamic libraries.
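As a rough illustration of the proposal above, here is a minimal sketch of what such a trait could look like. The DIDMethod trait and ExampleMethod names are hypothetical, not existing ssi APIs, and resolution is simplified to return a raw DID document string:

```rust
// Hypothetical DID Method trait sketching the operations proposed above.
// A real version would also cover Create, Update, and Deactivate, and
// return structured DID documents and resolution metadata.
trait DIDMethod {
    /// The method name, e.g. "key" for did:key.
    fn name(&self) -> &'static str;
    /// Read/Verify (Resolve): fetch the DID document for a DID.
    fn resolve(&self, did: &str) -> Result<String, String>;
}

// A toy implementation standing in for a real per-method crate.
struct ExampleMethod;

impl DIDMethod for ExampleMethod {
    fn name(&self) -> &'static str {
        "example"
    }
    fn resolve(&self, did: &str) -> Result<String, String> {
        if did.starts_with("did:example:") {
            Ok(format!("{{\"id\": \"{}\"}}", did))
        } else {
            Err(format!("unsupported DID: {}", did))
        }
    }
}

fn main() {
    // Implementations could be registered in a collection keyed by name,
    // whether linked via cargo features or loaded from dynamic libraries.
    let methods: Vec<Box<dyn DIDMethod>> = vec![Box::new(ExampleMethod)];
    assert_eq!(methods[0].name(), "example");
    assert!(methods[0].resolve("did:example:123").is_ok());
}
```

Using trait objects like this keeps the registration mechanism (static features vs. runtime plugins) independent of the trait itself.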
No space left on device in this run: https://github.com/spruceid/didkit/actions/runs/664432771
Related: actions/runner-images#709
Possible solutions:
This CI run failed due to a timeout in a test that involves resolving a did:web: https://github.com/spruceid/didkit/runs/2142847953#step:14:53
● Verify Credential API - Interop › Can verify verifiable credential Permanent Resident Card, with issuer DID method did:web:vc.transmute.world and linked data proof suite Ed25519Signature2018, fixture case-5 › should fail with mutated proof value:
Timeout - Async callback was not invoked within the 20000 ms timeout specified by jest.setTimeout.
Rerunning the build, it passes that step without error: https://github.com/spruceid/didkit/runs/2142932722
Should we try to prevent this type of failure by making did:web resolution use a local cached copy for testing?
Edit: reqwest uses the https_proxy environment variable by default. So we could use an HTTP proxy to intercept the did:web request. But we would still need to disable TLS in the request, or make the client request a different URL, or make the client accept a self-signed HTTPS certificate. That will probably need a CLI option or env var for didkit-http. Or we could make didkit-http use a cache directory on the filesystem for requests, and pre-populate the directory with our cached copy of the DID document. Maybe change the didkit-http -r fallback resolver option to override the built-in did:web support?
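For a cached or intercepted did:web resolution, the URL to serve would follow the did:web spec's DID-to-URL transformation. Below is a simplified sketch of that mapping, ignoring percent-encoded ports and other edge cases; did_web_to_url is an illustrative helper, not DIDKit's implementation:

```rust
// Sketch of the did:web to HTTPS URL mapping (per the did:web spec):
// the method-specific id's colon-separated parts become URL path
// segments, and a bare domain resolves to /.well-known/did.json.
fn did_web_to_url(did: &str) -> Option<String> {
    let id = did.strip_prefix("did:web:")?;
    let mut parts = id.split(':');
    let domain = parts.next()?;
    let path: Vec<&str> = parts.collect();
    Some(if path.is_empty() {
        format!("https://{}/.well-known/did.json", domain)
    } else {
        format!("https://{}/{}/did.json", domain, path.join("/"))
    })
}

fn main() {
    // A test harness could intercept this URL and serve a cached document.
    assert_eq!(
        did_web_to_url("did:web:vc.transmute.world").unwrap(),
        "https://vc.transmute.world/.well-known/did.json"
    );
}
```

Pre-populating a cache keyed by this URL would let the test suite avoid the live network request entirely.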
Seen here but seems unrelated to these PRs:
https://github.com/spruceid/didkit/pull/86/checks?check_run_id=1992549199#step:23:186
https://github.com/spruceid/didkit/pull/85/checks?check_run_id=1992417392#step:23:186
Excerpt from CI:
cd wasm/loader && npm run build
> [email protected] build /home/runner/work/didkit/didkit/didkit/lib/wasm/loader
> webpack --mode development
asset didkit-loader.min.js 8.49 KiB [emitted] [minimized] (name: didkit-loader) 2 related assets
runtime modules 1.08 KiB 4 modules
cacheable modules 17 KiB
./didkit-loader.js 285 bytes [built] [code generated]
../pkg/didkit_wasm.js 16.7 KiB [built] [code generated]
ERROR in ../pkg/didkit_wasm.js 372:16-63
Module not found: Error: Can't resolve 'wasm-loader' in '/home/runner/work/didkit/didkit/didkit/lib/wasm/loader'
@ ./didkit-loader.js 4:0-54 8:21-27 10:8-12 11:18-24 12:9-15
We should be assuming that the user doesn't have anything but the base operating system, and we should recommend what packages, JDK versions, Java compiler, etc. are necessary to get going.
If we need more requirements for didkit itself within /lib, then that's where the documentation should live.
Describe the bug
When building DIDKit according to the instructions found here, I encountered two errors relating to duplicate definitions of sha256 and generate_ed25519. These definitions are actually found in the ssi crate and are causing the build process to fail.
These are the outputs:
error[E0428]: the name `sha256` is defined multiple times
--> /home/mikerah/Documents/Projects/ssi/src/hash.rs:13:1
|
4 | pub fn sha256(data: &[u8]) -> Result<[u8; 32], Error> {
| ----------------------------------------------------- previous definition of the value `sha256` here
...
13 | pub fn sha256(data: &[u8]) -> Result<[u8; 32], Error> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `sha256` redefined here
|
= note: `sha256` must be defined only once in the value namespace of this module
error[E0201]: duplicate definitions with name `generate_ed25519`:
--> /home/mikerah/Documents/Projects/ssi/src/jwk.rs:206:5
|
179 | / pub fn generate_ed25519() -> Result<JWK, Error> {
180 | | use ring::signature::KeyPair;
181 | | let rng = ring::rand::SystemRandom::new();
182 | | let doc = ring::signature::Ed25519KeyPair::generate_pkcs8(&rng)?;
... |
202 | | })
203 | | }
| |_____- previous definition of `generate_ed25519` here
...
206 | / pub fn generate_ed25519() -> Result<JWK, Error> {
207 | | let mut csprng = rand::rngs::OsRng {};
208 | | let keypair = ed25519_dalek::Keypair::generate(&mut csprng);
209 | | let sk_bytes = keypair.secret.to_bytes();
... |
225 | | })
226 | | }
| |_____^ duplicate definition
To Reproduce
Steps from DIDKit docs
Expected behavior
Expected cargo build to complete without any errors.
Desktop (please complete the following information):
Specifically, cargo seems to need:
openssl-dev/libssl-dev
pkg-config
$ python3 -m pip install django-qr-code django
should be
$ python3 -m pip install django-qr-code django didkit
Is your feature request related to a problem? Please describe.
I'm frustrated when the CI test suite takes 30m to run, but I do appreciate how extensive it is across different platforms.
Describe the solution you'd like
Faster CI build and test times.
Describe alternatives you've considered
Host our own runner? Parallel runners?
Is your feature request related to a problem? Please describe.
I have a did:key already in my Rust program, and didkit requires a .jwk file for me to do anything. I need an easy way to convert my did:key to a .jwk file.
Describe the solution you'd like
I would like a new command which mirrors the existing didkit key-to-did-key -k <path to jwk>.
Demo of what I would like:
didkit did-key-to-key <did:key>
Should return a jwk derived from the did:key.
Describe alternatives you've considered
Using didkit for VC's with a did:key as input, to generate a jwk.
Publish didkit-node and/or didkit-wasm npm packages.
Depends on: #77
As discussed in w3c-ccg/vc-api#111, it seems that verifying a presentation in vc-http-api is supposed to include verifying embedded verifiable credentials rather than expecting the client to verify them separately. DIDKit should probably be updated to follow this behavior in its API.
As we have seen in #15, we're encountering some difficulties with the cross-platform portability of cryptographic libraries, especially those with assembly-only components like ring. Would it be possible to allow end users to pick their "cryptography engine" satisfying certain interfaces?
This would also be useful in native mobile applications. For example, Android and iOS both support operations on the P-256 curve and ECDSA. Could applications using DIDKit for that platform configure DIDKit to use native functions for that curve & scheme? Another example is substituting JavaScript transpiled cryptographic libraries for those Ring libraries that are assembly-only and cannot be cross-compiled to WASM with today's tooling.
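A rough sketch of what such a pluggable interface could look like follows. SignatureEngine and ToyEngine are hypothetical names, and the XOR "signature" is a placeholder for a real backend such as ring, openssl, or a platform-native module:

```rust
// Hypothetical "cryptography engine" trait. Each platform (ring, openssl,
// an Android/iOS native P-256 module, or JS crypto under WASM) would
// supply its own implementation behind this interface.
trait SignatureEngine {
    fn sign(&self, key: &[u8], data: &[u8]) -> Vec<u8>;
    fn verify(&self, key: &[u8], data: &[u8], signature: &[u8]) -> bool;
}

// A toy engine (XOR of data with a repeating key) standing in for a real
// backend; this is NOT cryptography, only a stand-in for the interface.
struct ToyEngine;

impl SignatureEngine for ToyEngine {
    fn sign(&self, key: &[u8], data: &[u8]) -> Vec<u8> {
        data.iter().zip(key.iter().cycle()).map(|(d, k)| d ^ k).collect()
    }
    fn verify(&self, key: &[u8], data: &[u8], signature: &[u8]) -> bool {
        self.sign(key, data) == signature
    }
}

fn main() {
    // Callers program against the trait object, so the engine can be
    // swapped per platform at build time or at runtime.
    let engine: Box<dyn SignatureEngine> = Box::new(ToyEngine);
    let sig = engine.sign(b"key", b"hello");
    assert!(engine.verify(b"key", b"hello", &sig));
}
```

The same shape could extend to key generation and verification-only engines for platforms where private keys never leave an HSM or secure enclave.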
https://github.com/spruceid/didkit/runs/2231223180#step:5:475
error[E0433]: failed to resolve: could not find `addr_of` in `ptr`
--> /home/rust/.cargo/registry/src/github.com-1ecc6299db9ec823/anyhow-1.0.40/src/error.rs:606:14
|
606 | ptr::addr_of!((*unerased.as_ptr())._object) as *mut E,
| ^^^^^^^ could not find `addr_of` in `ptr`
error[E0433]: failed to resolve: could not find `addr_of` in `ptr`
--> /home/rust/.cargo/registry/src/github.com-1ecc6299db9ec823/anyhow-1.0.40/src/error.rs:647:22
|
647 | ptr::addr_of!((*unerased.as_ptr())._object) as *mut E,
| ^^^^^^^ could not find `addr_of` in `ptr`
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0433`.
error: could not compile `anyhow`
To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: build failed
Possible solution: update the Rust toolchain, as in sailfishos-mirror/rust-syn@17332d9
I tried playing around with the different commands (after aliasing the install location) with the following results:
$ didkit generate-ed25519-key >> my_key
$ cat my_key
{"kty":"OKP","crv":"Ed25519","x":"Vf0ooutKnYTANiSEduPcfZ06QIgnLuw0NAXRbPzWlCY","d":"2NCKy2GkJTxkKxrhWP_KBdHvsD6_MKYyYT8Gtt04ndo"}
$ didkit key-to-did-key --key /path/to/my_key
did:key:z6MkkEvbSggwUFGeeg6ZW2cf9pQBWU25i4sDHjn8VHcVP6Qm
Which ran as I expected. I tried:
$ didkit vc-issue-credential --key /path/to/my_key
Which just hung until I killed it, both on main and http-jni-docs.
I also tried posting the key to didkit-http, and it returned an internal server error, "Missing key". Digging in, that makes sense: the key from above doesn't conform to what was expected, but I am curious what I should have been doing to produce the input to the vc-issue-credential route (though not related to this issue).
Our did:onion implementation (#125) expects a local SOCKS5h proxy at 127.0.0.1:9050. In the Docker images this is not available. Should the Docker images be updated to include Tor? Or should users of the Docker image be expected to bring their own Tor if they want to use did:onion? Should there be a CLI option, env var, and/or resolution input option for the Tor proxy host/port/URL for did:onion to use instead of socks5h://127.0.0.1:9050?
openssl and ring are both included in the default build now, since hyper-tls uses native-tls, which uses openssl (on Linux). Should we try to minimize this duplication of functionality? It increases build times and space usage, and may increase the binary sizes. We could switch from hyper-tls to hyper-rustls, as rustls uses ring. Or we could switch from ring to openssl, since openssl includes RSA functionality. Or support both ring and openssl but default to one or the other (this increases the complexity of feature flags, but may be useful). I'm not sure how ring and openssl compare or what use cases will require one or the other. I heard that openssl has a better RSA implementation. It also may require additional dependencies: openssl-dev and pkg-config, as mentioned in #46; or Perl and Clang if using the vendored feature, as mentioned in the readme under lib/android.
wasm2js completes, but webpack takes several minutes to run and then exits due to out of memory.
$ make -C lib ../target/test/asmjs.stamp
make: Entering directory '/home/cel/src/didkit/lib'
cd wasm && wasm-pack build --target bundler
[INFO]: Checking for the Wasm target...
[INFO]: Compiling to Wasm...
Updating git repository `https://github.com/timothee-haudebourg/json-rust`
Finished release [optimized] target(s) in 5.67s
[INFO]: Installing wasm-bindgen...
[INFO]: Optional fields missing from Cargo.toml: 'description', 'repository', and 'license'. These are not necessary, but recommended
[INFO]: :-) Done in 6.04s
[INFO]: :-) Your wasm pkg is ready to publish at /home/cel/src/didkit/lib/wasm/pkg.
cd wasm/pkg && PATH="$PATH:"/home/cel/binaryen"/bin" wasm2js --pedantic -o didkit_wasm_bg1.js didkit_wasm_bg.wasm
cd wasm/asm && ./repack.sh
npm --prefix wasm/asm install
npm WARN [email protected] No description
npm WARN [email protected] No repository field.
npm WARN [email protected] No license field.
added 124 packages from 158 contributors in 32.962s
14 packages are looking for funding
run `npm fund` for details
npm --prefix wasm/asm run build
> [email protected] build /home/cel/src/didkit/lib/wasm/asm
> webpack
events.js:291
throw er; // Unhandled 'error' event
^
Error [ERR_WORKER_OUT_OF_MEMORY]: Worker terminated due to reaching memory limit: JS heap out of memory
at Worker.[kOnExit] (internal/worker.js:229:26)
at Worker.<computed>.onexit (internal/worker.js:165:20)
Emitted 'error' event on Worker instance at:
at Worker.[kOnExit] (internal/worker.js:229:12)
at Worker.<computed>.onexit (internal/worker.js:165:20) {
code: 'ERR_WORKER_OUT_OF_MEMORY'
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] build: `webpack`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/cel/.npm/_logs/2021-03-02T03_20_38_588Z-debug.log
make: *** [Makefile:174: wasm/asm/didkit-asm.min.js] Error 1
make: Leaving directory '/home/cel/src/didkit/lib'
Describe the bug
Building fails in Ubuntu 20.04 with the following message:
error: couldn't read src/../json-ld/src/lib.rs: No such file or directory (os error 2)
--> src/lib.rs:26:1
|
26 | mod json_ld;
| ^^^^^^^^^^^^
error: aborting due to previous error
error: could not compile `ssi`
To Reproduce
Steps to reproduce the behavior:
Go through the installation process explained here
Expected behavior
The building process should not fail
Desktop (please complete the following information):
My lsb_release -a command gives the following:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.2 LTS
Release: 20.04
Codename: focal
The default browser is firefox 87.0
Hi!
From what I understood from the documentation, the server that is started with DIDKit-http will only use the provided keys for issuing a credential.
I wanted to ask: is there support, or a plan to add support, for making this service work for more than one issuer? What I mean by this is, e.g., to have a single server running on port 9999 such that when a credential is to be issued, the key(s) provided initially are not the only ones that can be used for signing it, but some other key (for example, one sent with the request body) could be used as well.
If I got it right, the only way to currently achieve using one server for many issuers is to pass multiple keys on start, which means you have to know how many issuers you might have before starting the server.
(Please correct me in case I am wrong and have missed something from the documentation.)
How can we use SJCL to reduce binary size for JS/WASM builds, increase user optionality, and improve security? How would we have our WASM call out to external crypto? I imagine we'll need to support similar workflows, such as HSM signing and native P-256 modules. I realize we have already enabled external signing with DIDKit; I'm wondering about the shape of a cryptographic trait system that can be satisfied by multiple backends depending on platform and support.
For example, the installation instructions and code here can be made into a test:
https://github.com/spruceid/didkit/tree/main/examples/java-springboot
Perhaps we can move that specific one to its own repo, but we already test the installation instructions from the main README, for example. If we extracted the code example or command line instructions on installation into the CI directly, it would ensure that broken documentation also breaks our CI, which is a good thing.
First step here would be to document where all our README files are, and what elements are testable.
We could add a driver for Universal Resolver, like this: https://github.com/decentralized-identity/universal-resolver/pull/100/files
It looks like there is already a Universal Resolver driver using ghcr.io, so I think we could use our existing Docker image ghcr.io/spruceid/didkit-http. DIDKit's HTTP interface already supports the DID Resolution HTTP(S) binding, which is what Universal Resolver uses for the interface to drivers.
Universal Resolver already has drivers for did:key, did:web, and did:ethr. DID methods that we could add in our driver are did:tz, did:pkh, did:sol, and did:onion. Additional work may be needed to enable did:onion since it depends on an external network, Tor: #137
I suspect it has to do with a BSD-specific issue with the jni crate.
Please see the following build log for system information and reproduction:
$ git clone git@github.com:spruceid/ssi.git
$ git clone git@github.com:spruceid/didkit.git
$ cd didkit/
$ uname -vp
FreeBSD 12.2-RELEASE r366954 GENERIC amd64
$ cargo build
Compiling jni v0.17.0
error[E0425]: cannot find value `EXPECTED_JVM_FILENAME` in this scope
--> /home/wayne/.cargo/registry/src/github.com-1ecc6299db9ec823/jni-0.17.0/build.rs:97:25
|
97 | if file_name == EXPECTED_JVM_FILENAME {
| ^^^^^^^^^^^^^^^^^^^^^ not found in this scope
error: aborting due to previous error
For more information about this error, try `rustc --explain E0425`.
error: could not compile `jni`
To learn more, run the command again with --verbose.