hyperledger / aries-cloudagent-python

Hyperledger Aries Cloud Agent Python (ACA-Py) is a foundation for building decentralized identity applications and services running in non-mobile environments.

Home Page: https://wiki.hyperledger.org/display/aries

License: Apache License 2.0

Topics: verifiable-organizations-network, hyperledger-indy, indy-catalyst, von, hyperledger, verifiable-credentials, aries, trust-over-ip, verifiable-origins, aca-py

aries-cloudagent-python's Introduction

Hyperledger Aries Cloud Agent - Python


An easy-to-use Aries agent for building SSI services in any language that supports sending and receiving HTTP requests.

Full access to an organized set of all of the ACA-Py documents is available at https://aca-py.org. Check it out! It's much easier to navigate than this GitHub repo for reading the documentation.

Overview

Hyperledger Aries Cloud Agent Python (ACA-Py) is a foundation for building Verifiable Credential (VC) ecosystems. It operates in the second and third layers of the Trust Over IP framework (PDF) using DIDComm messaging and Hyperledger Aries protocols. The "cloud" in the name means that ACA-Py runs on servers (cloud, enterprise, IoT devices, and so forth), and is not designed to run on mobile devices.

ACA-Py is built on the Aries concepts and features that make up Aries Interop Profile (AIP) 2.0. ACA-Py's supported Aries protocols include, most importantly, protocols for issuing, verifying, and holding verifiable credentials in both the Hyperledger AnonCreds verifiable credential format and the W3C Standard Verifiable Credential Data Model format, using JSON-LD with LD-Signatures and BBS+ Signatures. Coming soon -- issuing and presenting Hyperledger AnonCreds verifiable credentials using the W3C Standard Verifiable Credential Data Model format.

To use ACA-Py you create a business logic controller that "talks to" an ACA-Py instance (sending HTTP requests and receiving webhook notifications), and ACA-Py handles the Aries and DIDComm protocols and related functionality. Your controller can be built in any language that supports making and receiving HTTP requests; knowledge of Python is not needed. Together, this means you can focus on building VC solutions using familiar web development technologies, instead of having to learn the nuts and bolts of low-level cryptography and Trust over IP-type Aries protocols.
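As a minimal sketch of this pattern (the admin URL, port, and the connections protocol being loaded are assumptions about a local ACA-Py instance, not part of this document), a controller might ask the agent to create a connection invitation like this:

import requests

ADMIN_URL = "http://localhost:8031"  # assumed admin endpoint of a local ACA-Py

# Ask the agent to create a connection invitation; ACA-Py runs the
# underlying Aries protocol and returns the invitation as JSON.
resp = requests.post(f"{ADMIN_URL}/connections/create-invitation")
resp.raise_for_status()
result = resp.json()
print(result["connection_id"], result["invitation"])

The controller never touches keys or cryptography; it only makes HTTP calls like this and reacts to the agent's webhook events.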

This checklist-style overview document provides a full list of the features in ACA-Py. The following is a list of some of the core features needed for a production deployment, with a link to detailed information about the capability.

Multi-Tenant

ACA-Py supports "multi-tenant" scenarios. In these scenarios, one (scalable) instance of ACA-Py uses one database instance, and are together capable of managing separate secure storage (for private keys, DIDs, credentials, etc.) for many different actors. This enables (for example) an "issuer-as-a-service", where an enterprise may have many VC issuers, each with different identifiers, using the same instance of ACA-Py to interact with VC holders as required. Likewise, an ACA-Py instance could be a "cloud wallet" for many holders (e.g. people or organizations) that, for whatever reason, cannot use a mobile device for a wallet. Learn more about multi-tenant deployments here.

Mediator Service

Startup options allow an ACA-Py instance to be used as an Aries mediator, using core Aries protocols to coordinate its mediation role. Such an instance receives, stores, and forwards messages to Aries agents that (for example) lack an addressable endpoint on the Internet, such as a mobile wallet. A live instance of a public mediator based on ACA-Py is available here from Indicio Technologies. Learn more about deploying a mediator here. See the Aries Mediator Service for a "best practices" configuration of an Aries mediator.
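From the client side, requesting mediation over an existing connection is a single admin call. A sketch (the connection ID and admin URL are placeholders, not values from this document):

import requests

ADMIN_URL = "http://localhost:8031"          # assumed admin endpoint
CONN_ID = "connection-id-with-the-mediator"  # placeholder

# After connecting to a mediator, ask it to mediate via the
# coordinate-mediation protocol; the mediation record starts in the
# "request" state until the mediator grants or denies it.
resp = requests.post(f"{ADMIN_URL}/mediation/request/{CONN_ID}", json={})
resp.raise_for_status()
print(resp.json()["state"])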

Indy Transaction Endorsing

ACA-Py supports a Transaction Endorsement protocol for agents that don't have write access to an Indy ledger. Endorser support is documented here.

Scaled Deployments

ACA-Py supports scaled deployments, such as in Kubernetes, where ACA-Py and its storage components can be horizontally scaled as needed to handle the load.

VC-API Endpoints

A set of endpoints conforming to the vc-api specification is included to manage W3C credentials and presentations. They are documented here, and a Postman demo is available here.
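As a heavily hedged sketch (the route and payload below follow the vc-api specification and are assumptions; check your ACA-Py version's OpenAPI page for the exact paths it exposes), issuing a W3C credential might look like:

import requests

ADMIN_URL = "http://localhost:8031"  # assumed

# Issue a W3C credential via a vc-api style endpoint. The issuer and
# subject DIDs below are placeholders.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:key:z6Mk...issuer",
    "issuanceDate": "2024-01-01T00:00:00Z",
    "credentialSubject": {"id": "did:key:z6Mk...subject"},
}
resp = requests.post(
    f"{ADMIN_URL}/vc/credentials/issue",
    json={"credential": credential, "options": {}},
)
print(resp.status_code, resp.json())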

Example Uses

The business logic you use with ACA-Py is limited only by your imagination. Possible applications include:

  • An interface to a legacy system to issue verifiable credentials
  • An authentication service based on the presentation of verifiable credential proofs
  • An enterprise wallet to hold and present verifiable credentials about that enterprise
  • A user interface for a person to use a wallet not stored on a mobile device
  • An application embedded in an IoT device, capable of issuing verifiable credentials about collected data
  • A persistent connection to other agents that enables secure messaging and notifications
  • Custom code to implement a new service.

Getting Started

For those new to SSI, Aries and ACA-Py, two Linux Foundation edX courses provide a good starting point: "Introduction to Hyperledger Sovereign Identity Blockchain Solutions: Indy, Aries and Ursa" and "Becoming a Hyperledger Aries Developer".

The latter is the most useful for developers wanting a solid basis in using ACA-Py and other Aries frameworks.

Also included here is a much more concise (but less maintained) Getting Started Guide that will take you from knowing next to nothing about decentralized identity to developing Aries-based business apps and services. You’ll run an Indy ledger (with no ramp-up time), ACA-Py apps and developer-oriented demos. The guide has a table of contents so you can skip the parts you already know.

Understanding the Architecture

There is an architectural deep dive webinar presented by the ACA-Py team, and slides from the webinar are also available. The picture below gives a quick overview of the architecture, showing an instance of ACA-Py, a controller and the interfaces between the controller and ACA-Py, and the external paths to other agents and public ledgers on the Internet.

[Architecture diagram: an ACA-Py instance, its controller, the interfaces between them (admin API and webhooks), and the external paths to other agents and public ledgers]

You can extend ACA-Py using plug-ins, which can be loaded at runtime. Plug-ins are mentioned in the webinar and are described in more detail here. An ever-expanding set of ACA-Py plugins can be found in the Aries ACA-Py Plugins repository. Check them out -- it might have the very plugin you need!

Installation and Usage

Use the "install and go" page for developers if you are comfortable with Trust over IP and Aries concepts. ACA-Py can be run with Docker without installation (highly recommended), or can be installed from PyPi. In the repository /demo folder there is a full set of demos for developers to use in getting up to speed quickly. Start with the Traction Workshop to go through a complete ACA-Py-based Issuer-Holder-Verifier flow in about 20 minutes. Next, the Alice-Faber Demo is a great way for developers try a zero-install example of how to use the ACA-Py API to operate a couple of Aries Agents. The Read the Docs overview is also a way to understand the internal modules and APIs that make up an ACA-Py instance.

If you would like to develop on ACA-Py locally, note that we use Poetry for dependency management and packaging. If you are unfamiliar with Poetry, please see our cheat sheet.

About the ACA-Py Admin API

The overview of ACA-Py’s API is a great starting place for learning about the ACA-Py API when you are starting to build your own controller.

An ACA-Py instance puts together an OpenAPI-documented REST interface based on the protocols that are loaded. This is used by a controller application (written in any language) to manage the behavior of the agent. The controller can initiate actions (e.g. issuing a credential) and can respond to agent events (e.g. sending a presentation request after a connection is accepted). Agent events are delivered to the controller as webhooks to a configured URL.
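A minimal webhook receiver sketch, assuming ACA-Py was started with --webhook-url http://localhost:8022/webhooks (the URL, port, and event shapes here are assumptions; ACA-Py POSTs events to <webhook-url>/topic/<topic>/ as JSON):

from aiohttp import web

async def handle_webhook(request: web.Request) -> web.Response:
    topic = request.match_info["topic"]  # e.g. "connections", "issue_credential"
    payload = await request.json()
    if topic == "connections" and payload.get("state") == "active":
        print(f"Connection {payload['connection_id']} is now active")
    return web.Response(status=200)

app = web.Application()
app.add_routes([web.post("/webhooks/topic/{topic}/", handle_webhook)])

if __name__ == "__main__":
    web.run_app(app, port=8022)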

Technical note: the administrative API exposed by the agent for the controller to use must be protected with an API key (using the --admin-api-key command line arg) or deliberately left unsecured using the --admin-insecure-mode command line arg. The latter should not be used outside of development unless the API is otherwise secured.
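With an API key configured, every admin request must present it; a sketch (the URL and key are placeholders):

import requests

ADMIN_URL = "http://localhost:8031"  # assumed
API_KEY = "my-admin-api-key"         # the value passed to --admin-api-key

# The key travels in the x-api-key header; without it the agent
# rejects the request as unauthorized.
resp = requests.get(f"{ADMIN_URL}/status", headers={"x-api-key": API_KEY})
resp.raise_for_status()
print(resp.json())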

Troubleshooting

There are a number of resources for getting help with ACA-Py and troubleshooting any problems you might run into. The Troubleshooting document contains some guidance about issues that have been experienced in the past. Feel free to submit PRs to supplement the troubleshooting document! Searching the ACA-Py GitHub issues may uncover challenges you are having that others have experienced, often with solutions. As well, there is the "aries-cloudagent-python" channel on the Hyperledger Discord chat server (invitation here).

Credit

The initial implementation of ACA-Py was developed by the Government of British Columbia's Digital Trust Team in Canada. To learn more about what's happening with decentralized identity and digital trust in British Columbia, check out the BC Digital Trust website.

See the MAINTAINERS.md file for a list of the current ACA-Py maintainers, and the guidelines for becoming a Maintainer. We'd love to have you join the team if you are willing and able to carry out the duties of a Maintainer.

Contributing

Pull requests are welcome! Please read our contributions guide and submit your PRs. We enforce developer certificate of origin (DCO) commit signing — guidance on this is available. We also welcome issues submitted about problems you encounter in using ACA-Py.

License

Apache License Version 2.0


aries-cloudagent-python's Issues

aca-py failing to open wallet

This is with the latest master release of aca-py. I had the same problem with the previous release, so it's not a new bug, but I pulled the latest version to see if it had been fixed, and it hadn't. The bug is triggered when I supply a --seed parameter. It does not error when I do not supply a seed, but then I can't create any schemas etc. on the ledger because there is no public DID.

Here is the invocation and debug console output:

aca-py start \
>         --inbound-transport http 0.0.0.0 8100 \
>         --outbound-transport http \
>         --log-level debug \
>         --endpoint http://localhost:8100 \
>         --label leo \
>         --seed 0123456789ABCDEF0123456789ABCDEF \
>         --genesis-url http://localhost:9000/genesis \
>         --pool-name localindypool \
>         --admin 0.0.0.0 8150 --admin-insecure-mode \
>         --public-invites \
>         --auto-accept-invites \
>         --auto-accept-requests \
>         --auto-ping-connection \
>         --auto-respond-messages \
>         --auto-respond-credential-offer \
>         --auto-respond-presentation-request \
>         --auto-verify-presentation 

2019-08-01 11:44:09,229 asyncio ERROR Task exception was never retrieved
future: <Task finished coro=<start_app() done, defined at /Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/commands/start.py:16> exception=TypeError('an integer is required (got type NoneType)')>
Traceback (most recent call last):
  File "/Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/commands/start.py", line 19, in start_app
    await conductor.start()
  File "/Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/conductor.py", line 171, in start
    await ledger.update_endpoint_for_did(public_did, endpoint)
  File "/Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/ledger/indy.py", line 465, in update_endpoint_for_did
    exist_endpoint = await self.get_endpoint_for_did(did)
  File "/Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/ledger/indy.py", line 448, in get_endpoint_for_did
    response_json = await self._submit(request_json, sign=bool(public_did))
  File "/Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/ledger/indy.py", line 207, in _submit
    request_result_json = await submit_op
  File "/Data/Code/public/hyperledger/indy/indy-sdk/wrappers/python/indy/ledger.py", line 39, in sign_and_submit_request
    c_wallet_handle = c_int32(wallet_handle)
TypeError: an integer is required (got type NoneType)

Attached is the full console output.

aca-py start \
>         --inbound-transport http 0.0.0.0 8100 \
>         --outbound-transport http \
>         --log-level debug \
>         --endpoint http://localhost:8100 \
>         --label leo \
>         --seed 0123456789ABCDEF0123456789ABCDEF \
>         --genesis-url http://localhost:9000/genesis \
>         --pool-name localindypool \
>         --admin 0.0.0.0 8150 --admin-insecure-mode \
>         --public-invites \
>         --auto-accept-invites \
>         --auto-accept-requests \
>         --auto-ping-connection \
>         --auto-respond-messages \
>         --auto-respond-credential-offer \
>         --auto-respond-presentation-request \
>         --auto-verify-presentation 
2019-08-01 11:44:09,108 asyncio DEBUG Using selector: KqueueSelector
2019-08-01 11:44:09,127 aries_cloudagent.wallet.provider INFO Opening wallet type: basic
2019-08-01 11:44:09,145 aries_cloudagent.ledger.indy DEBUG Opening the pool ledger
2019-08-01 11:44:09,145 indy.pool DEBUG set_protocol_version: >>> protocol_version: 2
2019-08-01 11:44:09,146 indy.pool DEBUG set_protocol_version: Creating callback
2019-08-01 11:44:09,146 indy.libindy DEBUG create_cb: >>> cb_type: <class 'ctypes.CFUNCTYPE.<locals>.CFunctionType'>
2019-08-01 11:44:09,146 indy.libindy DEBUG create_cb: <<< res: <CFunctionType object at 0x10bd65530>
2019-08-01 11:44:09,146 indy.libindy DEBUG do_call: >>> name: indy_set_protocol_version, args: (2, <CFunctionType object at 0x10bd65530>)
2019-08-01 11:44:09,146 indy.libindy DEBUG _load_cdll: >>>
2019-08-01 11:44:09,146 indy.libindy DEBUG _load_cdll: Detected OS name: darwin
2019-08-01 11:44:09,146 indy.libindy DEBUG _load_cdll: Resolved libindy name is: libindy.dylib
2019-08-01 11:44:09,155 indy.libindy DEBUG _load_cdll: <<< res: <CDLL 'libindy.dylib', handle 7fdba145be80 at 0x10bdc9490>
2019-08-01 11:44:09,156 indy.libindy DEBUG set_logger: >>>
2019-08-01 11:44:09,156 indy.libindy DEBUG do_call_sync: >>> name: indy_set_logger, args: (None, None, <CFunctionType object at 0x10bd65600>, None)
2019-08-01 11:44:09,156 indy.libindy DEBUG do_call_sync: <<< 0
2019-08-01 11:44:09,156 indy.libindy DEBUG set_logger: <<<
2019-08-01 11:44:09,157 indy.libindy.native.command_executor INFO 	src/commands/mod.rs:99 | Worker thread started
2019-08-01 11:44:09,157 indy.libindy DEBUG do_call: Function indy_set_protocol_version returned err: 0
2019-08-01 11:44:09,157 indy.libindy DEBUG do_call: <<< <Future pending>
2019-08-01 11:44:09,157 indy.libindy.native.indy.commands INFO 	src/commands/mod.rs:140 | PoolCommand command received
2019-08-01 11:44:09,157 indy.libindy.native.pool_command_executor INFO 	src/commands/pool.rs:131 | SetProtocolVersion command received
2019-08-01 11:44:09,158 indy.libindy.native.indy.commands.pool DEBUG 	src/commands/pool.rs:224 | set_protocol_version >>> version: 2
2019-08-01 11:44:09,158 indy.libindy.native.indy.commands.pool DEBUG 	src/commands/pool.rs:232 | set_protocol_version <<<
2019-08-01 11:44:09,158 indy.libindy DEBUG _indy_callback: >>> command_handle: 0, err , args: ()
2019-08-01 11:44:09,158 indy.libindy DEBUG _indy_callback: <<<
2019-08-01 11:44:09,158 indy.libindy DEBUG _indy_loop_callback: >>> command_handle: 0, err , args: ()
2019-08-01 11:44:09,158 indy.libindy DEBUG _indy_loop_callback: Function returned None
2019-08-01 11:44:09,158 indy.libindy DEBUG _indy_loop_callback <<<
2019-08-01 11:44:09,158 indy.pool DEBUG set_protocol_version: <<< res: None
2019-08-01 11:44:09,158 aries_cloudagent.ledger.indy DEBUG Creating pool ledger...
2019-08-01 11:44:09,158 indy.pool DEBUG create_pool_ledger_config: >>> config_name: 'default', config: '{"genesis_txn": "/var/folders/w0/gxdpc9094l96snjf_trkgrfm0001z4/T/indy_genesis_transactions.txt"}'
2019-08-01 11:44:09,158 indy.pool DEBUG create_pool_ledger_config: Creating callback
2019-08-01 11:44:09,158 indy.libindy DEBUG create_cb: >>> cb_type: <class 'ctypes.CFUNCTYPE.<locals>.CFunctionType'>
2019-08-01 11:44:09,158 indy.libindy DEBUG create_cb: <<< res: <CFunctionType object at 0x10bd65940>
2019-08-01 11:44:09,159 indy.libindy DEBUG do_call: >>> name: indy_create_pool_ledger_config, args: (c_char_p(4493990032), c_char_p(4493641776), <CFunctionType object at 0x10bd65940>)
2019-08-01 11:44:09,159 indy.libindy DEBUG do_call: Function indy_create_pool_ledger_config returned err: 0
2019-08-01 11:44:09,159 indy.libindy DEBUG do_call: <<< <Future pending>
2019-08-01 11:44:09,159 indy.libindy.native.indy.commands INFO 	src/commands/mod.rs:140 | PoolCommand command received
2019-08-01 11:44:09,159 indy.libindy.native.pool_command_executor INFO 	src/commands/pool.rs:62 | Create command received
2019-08-01 11:44:09,159 indy.libindy.native.indy.commands.pool DEBUG 	src/commands/pool.rs:138 | create >>> name: "default", config: Some(PoolConfig { genesis_txn: "/var/folders/w0/gxdpc9094l96snjf_trkgrfm0001z4/T/indy_genesis_transactions.txt" })
2019-08-01 11:44:09,160 indy.libindy DEBUG _get_error_details: >>>
2019-08-01 11:44:09,160 indy.libindy DEBUG _get_error_details: <<< error_details: {'backtrace': '', 'message': 'Error: Pool ledger config already exists\n  Caused by: Pool ledger config file with name "default" already exists\n'}
2019-08-01 11:44:09,160 indy.libindy DEBUG _indy_callback: >>> command_handle: 1, err , args: ()
2019-08-01 11:44:09,160 indy.libindy DEBUG _indy_callback: <<<
2019-08-01 11:44:09,160 indy.libindy DEBUG _indy_loop_callback: >>> command_handle: 1, err , args: ()
2019-08-01 11:44:09,160 indy.libindy WARNING _indy_loop_callback: Function returned error 
2019-08-01 11:44:09,160 indy.libindy DEBUG _indy_loop_callback <<<
2019-08-01 11:44:09,161 aries_cloudagent.ledger.indy DEBUG Pool ledger already created.
2019-08-01 11:44:09,161 indy.pool DEBUG open_pool_ledger: >>> config_name: 'default', config: '{}'
2019-08-01 11:44:09,161 indy.pool DEBUG open_pool_ledger: Creating callback
2019-08-01 11:44:09,161 indy.libindy DEBUG create_cb: >>> cb_type: <class 'ctypes.CFUNCTYPE.<locals>.CFunctionType'>
2019-08-01 11:44:09,161 indy.libindy DEBUG create_cb: <<< res: <CFunctionType object at 0x10bd65bb0>
2019-08-01 11:44:09,161 indy.libindy DEBUG do_call: >>> name: indy_open_pool_ledger, args: (c_char_p(4491019760), c_char_p(4493989648), <CFunctionType object at 0x10bd65bb0>)
2019-08-01 11:44:09,162 indy.libindy DEBUG do_call: Function indy_open_pool_ledger returned err: 0
2019-08-01 11:44:09,162 indy.libindy DEBUG do_call: <<< <Future pending>
2019-08-01 11:44:09,162 indy.libindy.native.indy.commands INFO 	src/commands/mod.rs:140 | PoolCommand command received
2019-08-01 11:44:09,162 indy.libindy.native.pool_command_executor INFO 	src/commands/pool.rs:70 | Open command received
2019-08-01 11:44:09,162 indy.libindy.native.indy.commands.pool DEBUG 	src/commands/pool.rs:158 | open >>> name: "default", config: Some(PoolOpenConfig { timeout: 20, extended_timeout: 60, conn_limit: 5, conn_active_timeout: 5, preordered_nodes: [] })
2019-08-01 11:44:09,163 indy.libindy.native.indy.commands.pool DEBUG 	src/commands/pool.rs:172 | open <<<
2019-08-01 11:44:09,179 indy.libindy.native.indy.services.pool.networker DEBUG 	src/services/pool/networker.rs:332 | _get_socket: open new socket for node 0
2019-08-01 11:44:09,179 indy.libindy.native.indy.services.pool.networker DEBUG 	src/services/pool/networker.rs:332 | _get_socket: open new socket for node 1
2019-08-01 11:44:09,179 indy.libindy.native.indy.services.pool.networker DEBUG 	src/services/pool/networker.rs:332 | _get_socket: open new socket for node 2
2019-08-01 11:44:09,180 indy.libindy.native.indy.services.pool.networker DEBUG 	src/services/pool/networker.rs:332 | _get_socket: open new socket for node 3
2019-08-01 11:44:09,226 indy.libindy.native.indy.commands INFO 	src/commands/mod.rs:140 | PoolCommand command received
2019-08-01 11:44:09,226 indy.libindy.native.indy.commands.pool INFO 	src/commands/pool.rs:74 | OpenAck handle 2, pool_id 2, result Ok(())
2019-08-01 11:44:09,226 indy.libindy DEBUG _indy_callback: >>> command_handle: 2, err , args: (2,)
2019-08-01 11:44:09,226 indy.libindy DEBUG _indy_callback: <<<
2019-08-01 11:44:09,226 indy.libindy DEBUG _indy_loop_callback: >>> command_handle: 2, err , args: (2,)
2019-08-01 11:44:09,226 indy.libindy DEBUG _indy_loop_callback: Function returned 2
2019-08-01 11:44:09,226 indy.libindy DEBUG _indy_loop_callback <<<
2019-08-01 11:44:09,226 indy.pool DEBUG open_pool_ledger: <<< res: 2
2019-08-01 11:44:09,227 indy.ledger DEBUG build_get_attrib_request: >>> submitter_did: '3avoBCqDMFHFaKUHug9s8W', target_did: '3avoBCqDMFHFaKUHug9s8W', raw: 'endpoint', xhash: None, enc: None
2019-08-01 11:44:09,227 indy.ledger DEBUG build_get_attrib_request: Creating callback
2019-08-01 11:44:09,227 indy.libindy DEBUG create_cb: >>> cb_type: <class 'ctypes.CFUNCTYPE.<locals>.CFunctionType'>
2019-08-01 11:44:09,227 indy.libindy DEBUG create_cb: <<< res: <CFunctionType object at 0x10bd65e20>
2019-08-01 11:44:09,227 indy.libindy DEBUG do_call: >>> name: indy_build_get_attrib_request, args: (c_char_p(4493632912), c_char_p(4493633040), c_char_p(4493990224), None, None, <CFunctionType object at 0x10bd65e20>)
2019-08-01 11:44:09,227 indy.libindy DEBUG do_call: Function indy_build_get_attrib_request returned err: 0
2019-08-01 11:44:09,227 indy.libindy DEBUG do_call: <<< <Future pending>
2019-08-01 11:44:09,227 indy.libindy.native.indy.commands INFO 	src/commands/mod.rs:136 | LedgerCommand command received
2019-08-01 11:44:09,227 indy.libindy.native.ledger_command_executor INFO 	src/commands/ledger.rs:343 | BuildGetAttribRequest command received
2019-08-01 11:44:09,228 indy.libindy.native.indy.commands.ledger DEBUG 	src/commands/ledger.rs:698 | build_get_attrib_request >>> submitter_did: Some("3avoBCqDMFHFaKUHug9s8W"), target_did: "3avoBCqDMFHFaKUHug9s8W", raw: Some("endpoint"), hash: None, enc: None
2019-08-01 11:44:09,228 indy.libindy.native.indy.services.ledger INFO 	src/services/ledger/mod.rs:117 | build_get_attrib_request() => Ok("{\"reqId\":1564681449228407000,\"identifier\":\"3avoBCqDMFHFaKUHug9s8W\",\"operation\":{\"type\":\"104\",\"dest\":\"3avoBCqDMFHFaKUHug9s8W\",\"raw\":\"endpoint\"},\"protocolVersion\":2}")
2019-08-01 11:44:09,228 indy.libindy.native.indy.commands.ledger DEBUG 	src/commands/ledger.rs:710 | build_get_attrib_request <<< res: "{\"reqId\":1564681449228407000,\"identifier\":\"3avoBCqDMFHFaKUHug9s8W\",\"operation\":{\"type\":\"104\",\"dest\":\"3avoBCqDMFHFaKUHug9s8W\",\"raw\":\"endpoint\"},\"protocolVersion\":2}"
2019-08-01 11:44:09,228 indy.libindy DEBUG _indy_callback: >>> command_handle: 3, err , args: (b'{"reqId":1564681449228407000,"identifier":"3avoBCqDMFHFaKUHug9s8W","operation":{"type":"104","dest":"3avoBCqDMFHFaKUHug9s8W","raw":"endpoint"},"protocolVersion":2}',)
2019-08-01 11:44:09,228 indy.libindy DEBUG _indy_callback: <<<
2019-08-01 11:44:09,228 indy.libindy DEBUG _indy_loop_callback: >>> command_handle: 3, err , args: (b'{"reqId":1564681449228407000,"identifier":"3avoBCqDMFHFaKUHug9s8W","operation":{"type":"104","dest":"3avoBCqDMFHFaKUHug9s8W","raw":"endpoint"},"protocolVersion":2}',)
2019-08-01 11:44:09,228 indy.libindy DEBUG _indy_loop_callback: Function returned b'{"reqId":1564681449228407000,"identifier":"3avoBCqDMFHFaKUHug9s8W","operation":{"type":"104","dest":"3avoBCqDMFHFaKUHug9s8W","raw":"endpoint"},"protocolVersion":2}'
2019-08-01 11:44:09,229 indy.libindy DEBUG _indy_loop_callback <<<
2019-08-01 11:44:09,229 indy.ledger DEBUG build_get_attrib_request: <<< res: '{"reqId":1564681449228407000,"identifier":"3avoBCqDMFHFaKUHug9s8W","operation":{"type":"104","dest":"3avoBCqDMFHFaKUHug9s8W","raw":"endpoint"},"protocolVersion":2}'
2019-08-01 11:44:09,229 indy.ledger DEBUG sign_and_submit_request: >>> pool_handle: 2, wallet_handle: None, submitter_did: '3avoBCqDMFHFaKUHug9s8W', request_json: '{"reqId":1564681449228407000,"identifier":"3avoBCqDMFHFaKUHug9s8W","operation":{"type":"104","dest":"3avoBCqDMFHFaKUHug9s8W","raw":"endpoint"},"protocolVersion":2}'
2019-08-01 11:44:09,229 indy.ledger DEBUG sign_and_submit_request: Creating callback
2019-08-01 11:44:09,229 indy.libindy DEBUG create_cb: >>> cb_type: <class 'ctypes.CFUNCTYPE.<locals>.CFunctionType'>
2019-08-01 11:44:09,229 indy.libindy DEBUG create_cb: <<< res: <CFunctionType object at 0x10bdb2120>
2019-08-01 11:44:09,229 asyncio ERROR Task exception was never retrieved
future: <Task finished coro=<start_app() done, defined at /Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/commands/start.py:16> exception=TypeError('an integer is required (got type NoneType)')>
Traceback (most recent call last):
  File "/Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/commands/start.py", line 19, in start_app
    await conductor.start()
  File "/Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/conductor.py", line 171, in start
    await ledger.update_endpoint_for_did(public_did, endpoint)
  File "/Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/ledger/indy.py", line 465, in update_endpoint_for_did
    exist_endpoint = await self.get_endpoint_for_did(did)
  File "/Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/ledger/indy.py", line 448, in get_endpoint_for_did
    response_json = await self._submit(request_json, sign=bool(public_did))
  File "/Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/ledger/indy.py", line 207, in _submit
    request_result_json = await submit_op
  File "/Data/Code/public/hyperledger/indy/indy-sdk/wrappers/python/indy/ledger.py", line 39, in sign_and_submit_request
    c_wallet_handle = c_int32(wallet_handle)
TypeError: an integer is required (got type NoneType)
2019-08-01 11:44:14,183 indy.libindy.native.zmq DEBUG 	/Users/samuel/.cargo/registry/src/github.com-1ecc6299db9ec823/zmq-0.8.3/src/lib.rs:555 | socket dropped
2019-08-01 11:44:14,183 indy.libindy.native.zmq DEBUG 	/Users/samuel/.cargo/registry/src/github.com-1ecc6299db9ec823/zmq-0.8.3/src/lib.rs:555 | socket dropped
2019-08-01 11:44:14,183 indy.libindy.native.zmq DEBUG 	/Users/samuel/.cargo/registry/src/github.com-1ecc6299db9ec823/zmq-0.8.3/src/lib.rs:555 | socket dropped
2019-08-01 11:44:14,183 indy.libindy.native.zmq DEBUG 	/Users/samuel/.cargo/registry/src/github.com-1ecc6299db9ec823/zmq-0.8.3/src/lib.rs:555 | socket dropped
2019-08-01 11:44:14,183 indy.libindy.native.zmq DEBUG 	/Users/samuel/.cargo/registry/src/github.com-1ecc6299db9ec823/zmq-0.8.3/src/lib.rs:462 | context dropped
2019-08-01 11:44:14,231 aries_cloudagent.ledger.indy DEBUG Closing pool ledger after timeout
2019-08-01 11:44:14,231 indy.pool DEBUG close_pool_ledger: >>> config_name: 2
2019-08-01 11:44:14,231 indy.pool DEBUG close_pool_ledger: Creating callback
2019-08-01 11:44:14,231 indy.libindy DEBUG create_cb: >>> cb_type: <class 'ctypes.CFUNCTYPE.<locals>.CFunctionType'>
2019-08-01 11:44:14,231 indy.libindy DEBUG create_cb: <<< res: <CFunctionType object at 0x10bdb2390>
2019-08-01 11:44:14,231 indy.libindy DEBUG do_call: >>> name: indy_close_pool_ledger, args: (c_int(2), <CFunctionType object at 0x10bdb2390>)
2019-08-01 11:44:14,232 indy.libindy DEBUG do_call: Function indy_close_pool_ledger returned err: 0
2019-08-01 11:44:14,232 indy.libindy DEBUG do_call: <<< <Future pending>
2019-08-01 11:44:14,232 indy.libindy.native.indy.commands INFO 	src/commands/mod.rs:140 | PoolCommand command received
2019-08-01 11:44:14,232 indy.libindy.native.pool_command_executor INFO 	src/commands/pool.rs:94 | Close command received
2019-08-01 11:44:14,232 indy.libindy.native.indy.commands.pool DEBUG 	src/commands/pool.rs:188 | close >>> handle: 2
2019-08-01 11:44:14,232 indy.libindy.native.indy.services.pool.pool INFO 	src/services/pool/pool.rs:737 | Drop started
2019-08-01 11:44:14,232 indy.libindy.native.indy.services.pool.pool INFO 	src/services/pool/pool.rs:745 | Drop wait worker
2019-08-01 11:44:14,233 indy.libindy.native.zmq DEBUG 	/Users/samuel/.cargo/registry/src/github.com-1ecc6299db9ec823/zmq-0.8.3/src/lib.rs:555 | socket dropped
2019-08-01 11:44:14,233 indy.libindy.native.indy.services.pool.pool INFO 	src/services/pool/pool.rs:748 | Drop finished
2019-08-01 11:44:14,234 indy.libindy.native.zmq DEBUG 	/Users/samuel/.cargo/registry/src/github.com-1ecc6299db9ec823/zmq-0.8.3/src/lib.rs:555 | socket dropped
2019-08-01 11:44:14,234 indy.libindy.native.zmq DEBUG 	/Users/samuel/.cargo/registry/src/github.com-1ecc6299db9ec823/zmq-0.8.3/src/lib.rs:462 | context dropped
2019-08-01 11:44:14,235 indy.libindy.native.indy.commands.pool DEBUG 	src/commands/pool.rs:202 | close <<<
2019-08-01 11:44:14,235 indy.libindy.native.indy.commands INFO 	src/commands/mod.rs:140 | PoolCommand command received
2019-08-01 11:44:14,235 indy.libindy.native.pool_command_executor INFO 	src/commands/pool.rs:98 | CloseAck command received
2019-08-01 11:44:14,235 indy.libindy DEBUG _indy_callback: >>> command_handle: 4, err , args: ()
2019-08-01 11:44:14,235 indy.libindy DEBUG _indy_callback: <<<
2019-08-01 11:44:14,235 indy.libindy DEBUG _indy_loop_callback: >>> command_handle: 4, err , args: ()
2019-08-01 11:44:14,235 indy.libindy DEBUG _indy_loop_callback: Function returned None
2019-08-01 11:44:14,235 indy.libindy DEBUG _indy_loop_callback <<<
2019-08-01 11:44:14,236 indy.pool DEBUG close_pool_ledger: <<< res: None


Open API Aries Faber Alice Demo

I am able to run the demo using the Docker playground up until the send-credential step on the Faber tab. The connection_id comes from the get_connection call on the Faber tab, the credential_definition_id is the CRED_DEF transaction ID, and the credential body is:

{
  "connection_id": "6c3e48e6-1d9a-4803-a20e-2117dd6ad842",
  "credential_definition_id": "6qnvgJtqwK44D8LFYnV5Yf:3:CL:9:tag",
  "credential_values": {"name": "Alice Smith", "date": "2018-05-28", "degree": "Maths", "age": "24"}
}

Request URL:
http://ip10-0-55-4-bkjn6p15do8g14edjijg-8021.direct.play-with-von.vonx.io/credential_exchange/send

The response returns an error:

500 Internal Server Error

Server got itself in trouble

I tried the steps a couple of times with the same results.
Any suggestions as to what is wrong?

Issue with proof request in Alice/Faber demo

Currently, the Alice agent fails to locate any credentials, and submits a proof that doesn't satisfy the requested predicates. This raises the following stack trace:

Faber      |     presentation_exchange: {'initiator': 'self', 'presentation_exchange_id': 'a50e9559-212d-450b-ab1d-15c592424bea', 'created_at': '2019-07-30 19:55:43.916985Z', 'thread_id': '66d4316a-50f3-48cd-868d-63fa8d0870c3', 'state': 'presentation_received', 'updated_at': '2019-07-30 19:55:44.542454Z', 'connection_id': 'ac7b9fd1-01a0-4d01-9e66-06c0fdcbce81', 'presentation_request': {'name': 'Proof of Education', 'version': '1.0', 'nonce': '335553867215326759012706530774249534678', 'requested_attributes': {'0c22daa1-662b-44bf-a2a9-859bd2de05bc': {'name': 'name', 'restrictions': [{'issuer_did': 'XL136n65jZBRBkPdvhcbQq'}]}, 'd7bf6d6b-5ca5-4cc4-b2f7-b0a61bd5260b': {'name': 'date', 'restrictions': [{'issuer_did': 'XL136n65jZBRBkPdvhcbQq'}]}, '6c668aed-6e8d-487a-b93b-9c248e9d3dba': {'name': 'degree', 'restrictions': [{'issuer_did': 'XL136n65jZBRBkPdvhcbQq'}]}, 'e21fdb65-8bec-4c58-823e-1d9950b809a4': {'name': 'self_attested_thing'}}, 'requested_predicates': {'b69652e8-63ba-4174-9f05-1494cfb83c07': {'name': 'age', 'p_type': '>=', 'p_value': 18}}}, 'presentation': {'proof': {'proofs': [], 'aggregated_proof': {'c_hash': '99712488383094640367300548209429701069832811062852723912148300184323887867262', 'c_list': []}}, 'requested_proof': {'revealed_attrs': {}, 'self_attested_attrs': {'d7bf6d6b-5ca5-4cc4-b2f7-b0a61bd5260b': 'my self-attested value', '0c22daa1-662b-44bf-a2a9-859bd2de05bc': 'my self-attested value', 'e21fdb65-8bec-4c58-823e-1d9950b809a4': 'my self-attested value', '6c668aed-6e8d-487a-b93b-9c248e9d3dba': 'my self-attested value'}, 'unrevealed_attrs': {}, 'predicates': {}}, 'identifiers': []}}
Faber      | 
Faber      | 2019-07-30 19:55:44,663 aiohttp.server ERROR Error handling request
Faber      | Traceback (most recent call last):
Faber      |   File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/site-packages/aiohttp/web_protocol.py", line 418, in start
Faber      |     resp = await task
Faber      |   File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/site-packages/aiohttp/web_app.py", line 458, in _handle
Faber      |     resp = await handler(request)
Faber      |   File "/home/indy/aries_cloudagent/messaging/presentations/routes.py", line 311, in presentation_exchange_verify_credential_presentation
Faber      |     presentation_exchange_record
Faber      |   File "/home/indy/aries_cloudagent/messaging/presentations/manager.py", line 276, in verify_presentation
Faber      |     presentation_request, presentation, schemas, credential_definitions
Faber      |   File "/home/indy/aries_cloudagent/verifier/indy.py", line 44, in verify_presentation
Faber      |     json.dumps({}),
Faber      |   File "/home/indy/.pyenv/versions/3.6.8/lib/python3.6/site-packages/indy/anoncreds.py", line 1390, in verifier_verify_proof
Faber      |     verifier_verify_proof.cb)
Faber      | indy.error.IndyError: (<ErrorCode.CommonInvalidStructure: 113>, {'backtrace': '', 'message': 'Error: Invalid structure\n  Caused by: Requested predicates {"b69652e8-63ba-4174-9f05-1494cfb83c07"} do not correspond to received {}\n'})

Aries OpenAPI Demo Issue

In the Issuing a Credential section, the demo instructions ask you to do the following:

Next, scroll down to the POST /credential-definitions section that was executed in the previous step.

However, the document never asked to execute POST /credential-definitions. Kindly check.

Thanks.

Bug default_logging_config.ini in aca-py tag 0.2.1

When running aca-py (the version in tag 0.2.1) from the command line, it errors because it can't find the default logging config. When installed via pip from the PyPI repo, the default logging config file at .../aries_cloudagent/config/default_logging_config.ini is not in the directory tree of the aca-py executable, which is installed in /usr/local/bin/aca-py. This means there is no way to load the default config, and unless the command line args supply a logging config it will always fail. Running aca-py from a debugger in the repo works, because in that case aca-py is not in /usr/local/bin, so the default ini is in its local tree. Typically, for a command line Python script to work, it would need to look for default config files in ~/.myconfigdir, or setup.py would need to copy the file into some other well-known location relative to the packages in site-packages.

Suggest instead embedding the default config as a resource in a module instead of a file.
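A sketch of that suggestion (illustrative only, not the project's actual fix): load the ini as package data, so it resolves regardless of where the entry point is installed.

import io
from importlib import resources
from logging.config import fileConfig

def configure_default_logging():
    # Read the config as package data instead of a path relative to the
    # executable; this works for pip-installed console scripts too.
    text = resources.read_text(
        "aries_cloudagent.config", "default_logging_config.ini"
    )
    fileConfig(io.StringIO(text), disable_existing_loggers=False)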

The error

 aca-py --inbound-transport http 0.0.0.0 8020 --outbound-transport http --log-level debug --endpoint 'http://localhost:8020'
Traceback (most recent call last):
  File "/usr/local/bin/aca-py", line 25, in <module>
    main()
  File "/usr/local/lib/python3.7/site-packages/aries_cloudagent/__init__.py", line 56, in main
    LoggingConfigurator.configure(log_config, log_level)
  File "/usr/local/lib/python3.7/site-packages/aries_cloudagent/config/logging.py", line 30, in configure
    fileConfig(config_path, disable_existing_loggers=False)
  File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/logging/config.py", line 71, in fileConfig
    formatters = _create_formatters(cp)
  File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/logging/config.py", line 104, in _create_formatters
    flist = cp["formatters"]["keys"]
  File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/configparser.py", line 958, in __getitem__
    raise KeyError(key)
KeyError: 'formatters'

The code containing the error is here

from logging import getLogger
from logging.config import fileConfig
from os import path


class LoggingConfigurator:
    """Utility class used to configure logging and print an informative start banner."""

    @classmethod
    def configure(cls, logging_config_path: str = None, log_level: str = None):
        """
        Configure logger.

        :param logging_config_path: str: (Default value = None) Optional path to
            custom logging config

        :param log_level: str: (Default value = None)
        """
        if logging_config_path is not None:
            config_path = logging_config_path
        else:
            config_path = path.join(
                path.dirname(path.abspath(__file__)), "default_logging_config.ini"
            )

        fileConfig(config_path, disable_existing_loggers=False)

        if log_level:
            log_level = log_level.upper()
            getLogger().setLevel(log_level)

To reproduce:
$ pip3 install aries-cloudagent
version 0.2.1

And call
$ aca-py
but without providing --log-config

Suggest adding a test pattern that tests aca-py in a real install, not just in the repo.

Update the RTD docs-from-code process to produce more useful information for developers

We have functioning ReadTheDocs (RTD) docs generated from source code at https://aries-cloud-agent-python.readthedocs.io/en/latest/ (see the "Modules" link). It's working (yay!) but the generated result is not useful. Everything appears to be in the "aries_cloudagent" module, so you can't traverse the modules.

The current method for generating is here: https://github.com/swcurran/aries-cloudagent-python/blob/master/docs/README.md. Specifically, from the docs dir, run this: sphinx-apidoc -f -o . ..

The ideal would be for each folder below aries_cloudagent, and below aries_cloudagent/messages, to be a separate module that we could still auto-generate.

I'm not sure if the problem is that the items are not properly defined as "modules" in Python (i.e. that it's a code problem), or if the script calling the generator needs to be set up to call each thing we want as a module explicitly. And if the latter, how do we include the code files in the root folder (e.g. aries_cloudagent itself)? For bonus points, how do we structure the RST so that we can have the messages code (the message family protocols that are optionally added to an instance of an Aries agent) called out separately from the rest of the code? I would think all but that bonus point should "just work", so I find the challenge quite weird.

von_anchor is an example of it working much better...https://von-anchor.readthedocs.io/en/latest/modules.html
generated from https://github.com/PSPC-SPAC-buyandsell/von_anchor/tree/master/docs/source

It may be that the rst files (e.g. https://github.com/PSPC-SPAC-buyandsell/von_anchor/blob/master/docs/source/von_anchor_utils.rst) are hand rolled, which is kind of OK - we'd have to figure out how to prevent people from adding new code and not docs.

Good luck - @WadeBarnes with help from myself and @nrempel

Really good to have this working in time for the transfer.

`aca-py start` not working?

Apologies if I'm missing something obvious, but it seems like aca-py start is not recognized?

Steps I took:

pip3 install aries-cloudagent

Trying to run aca-py was producing complaints about libsodium missing, so I did brew install libsodium and all seemed good...

If I run the example commands from the dev readme, however, I get unrecognized arguments: start.

$ aca-py start  --inbound-transport http 0.0.0.0 8000 --outbound-transport http
usage: aca-py [-h] -it <module> <host> <port> -ot <module>
              [--log-config <path-to-config>] [--log-level <log-level>]
              [-e <endpoint>] [-l <label>] [--seed <wallet-seed>]
              [--storage-type <storage-type>] [--wallet-key <wallet-key>]
              [--wallet-name <wallet-name>] [--wallet-type <wallet-type>]
              [--wallet-storage-type <storage-type>]
              [--wallet-storage-config <storage-config>]
              [--wallet-storage-creds <storage-creds>]
              [--pool-name <pool-name>]
              [--genesis-transactions <genesis-transactions>]
              [--genesis-url <genesis-url>] [--admin <host> <port>]
              [--admin-api-key <api-key>] [--admin-insecure-mode] [--debug]
              [--debug-seed <debug-did-seed>] [--debug-connections]
              [--accept-invites] [--accept-requests] [--auto-ping-connection]
              [--auto-respond-messages] [--auto-respond-credential-offer]
              [--auto-respond-presentation-request]
              [--auto-verify-presentation] [--no-receive-invites]
              [--help-link <help-url>] [--invite] [--timing]
              [--protocol <module>] [--webhook-url <url>]
aca-py: error: unrecognized arguments: start


System: macOS 10.14.6 Mojave

Consolidate and clean up the Alice/Faber demo

Things wanted:

  • get all the demo components in one directory - scripts, docker files, instructions.
  • call the script to run it "manage" to match the rest of our repos
  • have a stop and options at every webhook part for both Alice and Faber
  • pretty print the JSON dumps

REST API support for DID operations

In order to connect a running Agent to a non-von-network ledger, it would be useful to have some wallet/DID operations exposed through the API.

  • Create a new DID (from a random seed)
  • List existing DIDs and verkeys
  • Nominate a DID as the current public (ledger) DID

It would also be nice to return the active DID from the /status endpoint.
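A sketch of what these operations could look like to a controller (the routes are illustrative; later ACA-Py versions expose similar /wallet/did endpoints, but check your version's OpenAPI page):

import requests

ADMIN_URL = "http://localhost:8031"  # assumed

# 1. Create a new local DID (random seed generated inside the wallet).
new_did = requests.post(f"{ADMIN_URL}/wallet/did/create", json={}).json()

# 2. List existing DIDs and verkeys.
dids = requests.get(f"{ADMIN_URL}/wallet/did").json()["results"]

# 3. Nominate a DID as the current public (ledger) DID.
requests.post(
    f"{ADMIN_URL}/wallet/did/public",
    params={"did": new_did["result"]["did"]},
)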

Unpacking and decorator check ordering

For context:

message_dict = None
message_json = message_body
try:
    message_dict = json.loads(message_json)
except ValueError:
    raise MessageParseError("Message JSON parsing failed")
if not isinstance(message_dict, dict):
    raise MessageParseError("Message JSON result is not an object")
# parse thread ID
thread_dec = message_dict.get("~thread")
delivery.thread_id = (
    thread_dec and thread_dec.get("thid") or message_dict.get("@id")
)
# handle transport decorator
transport_dec = message_dict.get("~transport")
if transport_dec:
    delivery.direct_response_requested = transport_dec.get("return_route")
if "@type" not in message_dict:
    try:
        wallet: BaseWallet = await context.inject(BaseWallet)
    except InjectorError:
        raise MessageParseError("Wallet not defined in request context")
    try:
        unpacked = await wallet.unpack_message(message_body)
        message_json, delivery.sender_verkey, delivery.recipient_verkey = (
            unpacked
        )
    except WalletError:
        LOGGER.debug("Message unpack failed, falling back to JSON")
    else:
        delivery.raw_message = message_json
        try:
            message_dict = json.loads(message_json)
        except ValueError:
            raise MessageParseError("Message JSON parsing failed")

The order of decorator checks and unpacking seems off: it first deserializes, which for a packed message would result in a dictionary with 'tag', 'iv', 'ciphertext', etc., but then checks for '~thread' and '~transport' in that dictionary before unpacking the message.

Shouldn't it be unpacking the message before checking for decorators?
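For illustration, a sketch of the suggested reordering, reusing the names from the excerpt above (json, MessageParseError, BaseWallet, context, and delivery come from that code, not from new definitions):

async def parse_inbound(message_body, context, delivery):
    message_json = message_body
    message_dict = json.loads(message_json)
    if not isinstance(message_dict, dict):
        raise MessageParseError("Message JSON result is not an object")

    if "@type" not in message_dict:
        # A packed envelope (keys like "iv", "ciphertext", "tag"), so
        # unpack before reading any decorators.
        wallet: BaseWallet = await context.inject(BaseWallet)
        message_json, delivery.sender_verkey, delivery.recipient_verkey = (
            await wallet.unpack_message(message_body)
        )
        message_dict = json.loads(message_json)

    # Only now read ~thread and ~transport from the actual inner message.
    thread_dec = message_dict.get("~thread")
    delivery.thread_id = thread_dec and thread_dec.get("thid") or message_dict.get("@id")
    transport_dec = message_dict.get("~transport")
    if transport_dec:
        delivery.direct_response_requested = transport_dec.get("return_route")
    return message_dict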

ptvsd never returns

I try to run the local daemon with ./script/run_docker.
The script is stuck at === Waiting for debugger to attach ===.

I doubt this is expected; I just want to confirm, unless I missed something.
I have installed the ptvsd module.

Demo bug discovered

When I was running through the demo using the ./run_demo scripts I encountered a bug.

Steps to reproduce it:

  1. start the VON network.
  2. start Faber agent using ./run_demo faber
  3. start Alice agent using ./run_demo alice
  4. Send invitation details from Alice agent to Faber Agent
  5. close Alice agent
  6. start Alice agent again using ./run_demo alice
  7. Resend same invitation details from Alice agent to Faber Agent

On the Faber agent, the following stack trace is output:

Faber      | Connected
Faber      | 2019-07-08 20:07:09,766 aries_cloudagent.messaging.base_handler ERROR Error receiving connection request
Faber      | Traceback (most recent call last):
Faber      |   File "/home/indy/aries_cloudagent/messaging/connections/manager.py", line 301, in receive_request
Faber      |     self.context, connection_key, ConnectionRecord.INITIATOR_SELF
Faber      |   File "/home/indy/aries_cloudagent/messaging/connections/models/connection_record.py", line 247, in retrieve_by_invitation_key
Faber      |     return await cls.retrieve_by_tag_filter(context, tag_filter)
Faber      |   File "/home/indy/aries_cloudagent/messaging/connections/models/connection_record.py", line 202, in retrieve_by_tag_filter
Faber      |     cls.RECORD_TYPE, tag_filter
Faber      |   File "/home/indy/aries_cloudagent/storage/base.py", line 217, in fetch_single
Faber      |     raise StorageNotFoundError("Record not found")
Faber      | aries_cloudagent.storage.error.StorageNotFoundError: Record not found
Faber      | 
Faber      | During handling of the above exception, another exception occurred:
Faber      | 
Faber      | Traceback (most recent call last):
Faber      |   File "/home/indy/aries_cloudagent/messaging/connections/handlers/connection_request_handler.py", line 28, in handle
Faber      |     context.message, context.message_delivery
Faber      |   File "/home/indy/aries_cloudagent/messaging/connections/manager.py", line 305, in receive_request
Faber      |     "No invitation found for pairwise connection"
Faber      | aries_cloudagent.messaging.connections.manager.ConnectionManagerError: No invitation found for pairwise connection

On Alice agent the following data is provided:
{
  "connection_id": "f88e976e-83a9-49fa-af9a-9c2c9116ca11",
  "my_did": "HB7TcMFbS4iq3AEEykqe3a",
  "their_label": "Faber Agent",
  "initiator": "external",
  "invitation_key": "CBMhDiuBDPnbEs7LPFwpbvbSasS894cgj2KYCUVSLqUT",
  "request_id": "3903c194-4f17-46f8-b961-239f95843cab",
  "state": "request",
  "routing_state": "none",
  "created_at": "2019-07-08 20:07:09.648451Z",
  "updated_at": "2019-07-08 20:07:09.705012Z"
}

Then when trying to send a message (3) from Faber agent, the following message is presented:

Alice      | 2019-07-08 20:11:29,752 aries_cloudagent.dispatcher ERROR Message parsing failed: Message does not contain '@type' parameter, sending problem report

Running the demo on a raspberry pi - agent not starting

A user reported running the ACA-Py demo on a Raspberry Pi. The user has the demo running on a regular system, but it fails to start on the Pi, with the following details:

First I started a von-network on my PC (raspberry pi was in the same network) following the guide here: https://github.com/bcgov/von-network#running-the-network-locally

Then on the Pi, in the aries-cloudagent directory, I execute:

python3.6 -m venv env
source env/bin/activate
pip3 install wheel
pip3 install -e .
pip3 install prompt_toolkit pygments python3-indy
cd demo
LEDGER_URL=http://192.168.31.186:9000 python3.6 -m runners.faber -p 8020

On the Pi I run:

LEDGER_URL=http://192.168.31.186:9000 python3.6 -m runners.faber -p 8020

Resulting Log:

#1 Provision an agent and wallet, get back configuration details
Faber | Registering Faber Agent with seed d_000000000000000000000000459545
Faber | Got DID: UyUZWxhdbonVygFjy1ZLWy
Faber |
Faber | Shutting down
Faber | Exited with return code 0
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/local/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/pi/aries-cloudagent-python/demo/runners/faber.py", line 248, in <module>
    asyncio.get_event_loop().run_until_complete(main(args.port, args.timing))
  File "/usr/local/lib/python3.6/asyncio/base_events.py", line 468, in run_until_complete
    return future.result()
  File "/home/pi/aries-cloudagent-python/demo/runners/faber.py", line 117, in main
    await agent.start_process()
  File "/home/pi/aries-cloudagent-python/demo/runners/support/agent.py", line 286, in start_process
    await self.detect_process()
  File "/home/pi/aries-cloudagent-python/demo/runners/support/agent.py", line 373, in detect_process
    raise Exception(f"Timed out waiting for agent process to start")
Exception: Timed out waiting for agent process to start

Review and define tasks to implement the did:peer method in aries-cloudagent-python

Please review the most recent updates to the did:peer method spec - https://openssi.github.io/peer-did-method-spec/

As you review it, please identify tasks we can add to this product now. For example:

  • naming DIDs with the "did:peer:" prefix. Likely that would continue to be interoperable with any other agent.
  • Generate the DID itself using the algorithm defined in the spec. Again - likely to be interoperable with any other agent.
  • Add the new protocols to add the Update and Delete operations for did:peer DIDs, including making it optional to notify the other party of a delete, or to just silently delete the connection. Likely to be eventually interoperable with other agents - once they support the protocols.

Likely there are additional impacts that we need to account for in the agent.

Feature: Add aca-py command line option --genesis-file

The current aca-py has two methods for injecting the genesis transactions.

  1. --genesis-transactions
  2. --genesis-url

The first is not a great user experience. The genesis transactions are a newline-separated list of JSON data structures, not even a JSON list. This requires escaping to input the transactions correctly from a command line shell. A minimal four-node set of genesis transactions is still almost 3K characters, so this is not a reasonable option.

The second option, --genesis-url, requires some web server that provides the genesis transactions. This is a reasonable approach from a user-experience standpoint, but it has a hard dependency on a web server.

A practical alternative that has both decent user experience and no external dependencies is to allow a local file path input to provide the genesis transactions.
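A sketch of how such an option could be wired (illustrative, not the project's actual argument parser):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--genesis-file",
    type=argparse.FileType("r"),
    help="Path to a local file containing the genesis transactions",
)
args = parser.parse_args()
if args.genesis_file:
    # The file holds the newline-separated genesis transactions directly,
    # avoiding both shell escaping and a web-server dependency.
    genesis_transactions = args.genesis_file.read()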

Updating and documenting running demos with plain python

Current documentation references running a script that doesn't exist and doesn't account for getting the Python requirements installed. Trying to run it the way we thought would work didn't - things were in folders that are not accessible. Without breaking the Docker stuff (or adjusting it as needed), get the "running python" approach working and documented.

Current document: https://github.com/hyperledger/aries-cloudagent-python/blob/master/demo/README.md#running-locally

Installing prerequisites: sudo pip3 install -r demo/requirements.txt
Running locally, but not yet working (run from the root folder): GENESIS_URL=http://localhost:9000/genesis python -m demo.faber 8020

Rename or recreate the repo

Please at least rename this repo to "aries-cloudagent-python" in preparation for transferring it to Hyperledger. Alternatively create a new repo by that name. Either way, please make sure that the standard BC Gov files are put into the repo - licence, contributions, etc. All repos should always have that, and Shea mentioned there might be a tool for doing that.

@nrempel - should we just create a new repo, or use this one? Remember that we want the commit history on the files.

Thanks!

Evaluate new Sovrin TAA feature and how to handle that in ACA-Py

The new Transaction Author Agreement (TAA) will soon be added to the Sovrin ledgers (MainNet, BuilderNet and StagingNet), and ACA-Py will need to support it for Aries instances accessing and writing to those networks.

Please review this new feature and evaluate how we can add support for the TAA in ACA-Py. This is a good starting article, and there are links at the bottom of the article to further technical details.

https://sovrin.org/preparing-for-the-sovrin-transaction-author-agreement/

Create Model abstraction

Roll up the model storage pattern into a new abstraction:

  • sends webhooks
  • ephemeral option (only send webhook)

Add web-based Alice/Faber demo to ACA-Py

In the Blockchain for Business edX course, we used the work done by the BYU team to create a web-based, node.js Alice/Faber workshop that has been very well received. It gives a good, visual way for those new to the Indy/Verifiable Credentials world to see how credentials can work. The problem with the workshop is that it is based on really old code and old concepts. At best the workshop is misleading (particularly about public DIDs), and at worst, those using the workshop start to dig into, and in some cases try to extend, the code. It would be much better to add a new workshop based on ACA-Py.

The following are a suggestion of the features of the new demo:

  • audience: primarily for business or as a first time run through for developers
    • secondary audience: developers trying to dig into the controller code to build their own
  • ensure that the demo be run in Play with Docker, with docker-compose to execute
  • implement separate controller and agents for each of the participants
  • improve the user interface, but it's OK to show some ugliness (e.g. JSON data) if it helps developers running the demo understand the flow
  • implement Alice as a web service that looks useful to a human
  • consider implementing Faber and Acme as backend services - no user interface (at least initially)
    • Cutting and pasting invitations may be used since QR codes are not as feasible
  • use node-js for the controllers to demonstrate that a controller can be written in any language
    • perhaps even do one of the controllers in another tech, such as dotnet
  • If possible, use the email verification service, repo
    • This is just an idea - not sure it is feasible.
    • We could use the service if that's appropriate - e.g. a separate page for entering an email address, and a separate page (reached by clicking from the email) that shows an invitation.
  • Alternatively, consider having a non-UI Government issuing services that issues "Registered Organization" credentials to Faber and Acme and a "Citizen" credential to Alice
  • Add to the story an automatic proof request from Alice requesting proof of registration for Faber and Acme.

Proposed flow:

  • Use email verification service to get a verifiable credential for the users email address.
    • Alternative: Gov Agent issues Alice, Faber and Acme appropriate baseline credentials
  • Connect to Faber and Faber requests email address/gov proof
    • Nice to have: Alice requests Gov proof (or could be exercise for the user to implement)
  • Faber generates a transcript that includes data received in the proof
  • Connect to Acme - requests ID and transcripts proof

Connection invitation handling improvements

Some updates to improve the granularity of the 'auto-accept' feature for invitations and reduce the number of configuration options.

  • Remove the --accept-invites and --accept-requests command line parameters

  • Add an auto_accept property to connection records. When the flag is set on a given connection record and a corresponding connection request is later received, automatically send a connection response.

  • Add an auto_accept parameter to the connections_create_invitation route. This simply sets the flag on the new connection record.

  • Add an auto_accept parameter to the connections_receive_invitation route. If the parameter is set, automatically send a new connection request based on the invitation (see the usage sketch after this list).

  • Update demos (including IIWBook and email verification) to pass these parameters instead of setting the command line flags.

  • Update documentation if necessary.
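For illustration, a controller might then drive the flow like this (the admin routes exist today; the auto_accept query parameter is the proposal above, not yet implemented):

    # Sketch of a controller exercising the proposed auto_accept parameter.
    import requests

    FABER_ADMIN = "http://localhost:8021"  # assumed admin endpoints
    ALICE_ADMIN = "http://localhost:8031"

    # Faber creates an invitation whose connection record will auto-accept
    # the eventual connection request.
    resp = requests.post(
        f"{FABER_ADMIN}/connections/create-invitation",
        params={"auto_accept": "true"},
    )
    invitation = resp.json()["invitation"]

    # Alice receives the invitation and, because auto_accept is set,
    # immediately sends a connection request back.
    resp = requests.post(
        f"{ALICE_ADMIN}/connections/receive-invitation",
        params={"auto_accept": "true"},
        json=invitation,
    )
    print(resp.json()["connection_id"])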

@swcurran @nrempel Does that sound reasonable?

incompatible shell

I'm trying to run through the docs. When I run run_tests, I get this error:

./scripts/run_tests: [[: not found

Looking at the script, it runs under sh rather than bash, but [[ ... ]] is a bash-ism that plain sh doesn't support. I would advise making the syntax consistent - for example, replacing [[ "$x" == "y" ]] with the POSIX form [ "$x" = "y" ], or changing the shebang to #!/bin/bash.

My OS is Ubuntu 18.04 (amd64), with bash. Is this something that should be fixed?

Automate creation of releases

We should create a way to automate new releases.

Ideally, the process would be triggered by the creation of a GitHub release; it would then build and push a new release to PyPI.

It would also be nice if we could automate updating the changelog.

AriesOpenAPIDemo not working with von-network

The OpenAPI demo is failing for me. I'm up to date on aries-cloudagent-python and von-network, and just updated Docker (Docker Desktop 2.1.0.0 on macOS 10.14.6 Mojave).

I found exactly where it's failing, and got it working by making essentially one change (see below), but I don't know how to solve the problem properly, so I'm not confident creating a PR.

Steps taken:

  1. Clone the aries-cloudagent-python repo and the von-network repo.
  2. cd von-network, ./manage build, ./manage start
  3. cd aries-cloudagent-python/demo, ./run_demo faber

The faber agent waits for a long time, then fails:

[Screenshot omitted: the Faber agent times out and fails on startup]

I tracked the problem down to lines 27-28 in demo/runners/support/agent.py. These are the lines:

    DEFAULT_INTERNAL_HOST = os.getenv("DOCKERHOST") or "host.docker.internal"
    DEFAULT_EXTERNAL_HOST = DEFAULT_INTERNAL_HOST

When this code runs, DOCKERHOST is 192.168.65.3. When DEFAULT_INTERNAL_HOST has this value, fetching the Swagger JSON in faber.py fails. Everything works perfectly if I change these lines to:

    DEFAULT_EXTERNAL_HOST = os.getenv("DOCKERHOST") or "host.docker.internal"
    DEFAULT_INTERNAL_HOST = "localhost"

So DEFAULT_EXTERNAL_HOST should be 192.168.65.3 for me, but DEFAULT_INTERNAL_HOST must be localhost or 127.0.0.1 or 0.0.0.0.

Breaking bug: hard-coded wallet seed in aca-py can't use a different public DID

When using aca-py (working around fatal error #120), I try to run aca-py with a different --seed (wallet seed) than the one used by the faber.py agent.

I first go to the von-network web interface and register a new DID with the following seed:

0123456789ABCDEF0123456789ABCDEF

Identity successfully registered:

Seed: 0123456789ABCDEF0123456789ABCDEF
DID: 3avoBCqDMFHFaKUHug9s8W
Verkey: 2QiWG18JjfjUFQMk8xdmhyphRzmbveaYbGM3R8iPbiBx

I then provide this seed to aca-py, running it with the following command-line options:

$ aca-py --inbound-transport http 0.0.0.0 8020 --outbound-transport http --log-level debug --endpoint http://localhost:8020 --label FaberAgent  --seed 0123456789ABCDEF0123456789ABCDEF --pool-name localindypool --admin 0.0.0.0 8021 --admin-insecure-mode --accept-invites --accept-requests --auto-ping-connection --auto-respond-messages --auto-respond-credential-offer --auto-respond-presentation-request --auto-verify-presentation --wallet-key faber_agent_186191 --wallet-name faber_wallet_186191 --wallet-type indy --genesis-url http://localhost:9000/genesis

The following error occurs:

2019-08-03 19:50:52,931 asyncio ERROR Task exception was never retrieved
future: <Task finished coro=<start_app() done, defined at /Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/__init__.py:27> exception=StartupError("New seed provided which doesn't match the registered public did VYb4UdJiKPVAG76YDiHLPb")>
Traceback (most recent call last):
  File "/Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/__init__.py", line 30, in start_app
    await conductor.start()
  File "/Data/Code/public/hyperledger/aries/cloudagentpy/aries_cloudagent/conductor.py", line 163, in start
    + f" public did {public_did_info.did}"
aries_cloudagent.error.StartupError: New seed provided which doesn't match the registered public did VYb4UdJiKPVAG76YDiHLPb

The source of the error is the following code

class Conductor:
    """
    Conductor class.

    Class responsible for initializing concrete implementations
    of our required interfaces and routing inbound and outbound message data.
    """

    # ...

    async def start(self) -> None:
        """Start the agent."""

        context = self.context
        # Initialize wallet
        wallet: BaseWallet = await context.inject(BaseWallet)
        wallet_seed = context.settings.get("wallet.seed")
        public_did_info = await wallet.get_public_did()
        public_did = None
        if public_did_info:
            public_did = public_did_info.did
            # If we already have a registered public did and it doesn't match
            # the one derived from `wallet_seed` then we error out.
            # TODO: Add a command to change public did explicitly
            if wallet_seed and seed_to_did(wallet_seed) != public_did_info.did:
                raise StartupError(
                    "New seed provided which doesn't match the registered"
                    + f" public did {public_did_info.did}"
                )
        elif wallet_seed:
            public_did_info = await wallet.create_public_did(seed=wallet_seed)

Notable is the comment # TODO: Add a command to change public did explicitly.

This appears to be an oversight that prevents aca-py from being used for anything but a fixed public did for any given wallet.
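For context on why the check necessarily fires with a new seed: an Indy DID is derived deterministically from the seed. The seed is used as the ed25519 signing-key seed, and the DID is the base58 encoding of the first 16 bytes of the verkey. A rough re-derivation (using the base58 and PyNaCl packages, assumed available):

    # Illustrative re-derivation of seed_to_did.
    import base58
    from nacl.signing import SigningKey

    def seed_to_did(seed: str) -> str:
        # The 32-character seed is used directly as the ed25519 key seed.
        verkey = bytes(SigningKey(seed.encode()).verify_key)
        # A did:sov identifier is base58(first 16 bytes of the verkey).
        return base58.b58encode(verkey[:16]).decode()

    print(seed_to_did("0123456789ABCDEF0123456789ABCDEF"))
    # expected to match the DID registered above: 3avoBCqDMFHFaKUHug9s8W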

If I change the wallet name and wallet key, then it works:

$ aca-py --inbound-transport http 0.0.0.0 8020 --outbound-transport http --log-level debug --endpoint http://localhost:8020 --label FaberAgent  --seed 0123456789ABCDEF0123456789ABCDEF --pool-name localindypool --admin 0.0.0.0 8021 --admin-insecure-mode --accept-invites --accept-requests --auto-ping-connection --auto-respond-messages --auto-respond-credential-offer --auto-respond-presentation-request --auto-verify-presentation --wallet-key super_agent-1 --wallet-name super_agent_1 --wallet-type indy --genesis-url http://localhost:9000/genesis

I suggest adding tests that exercise sane configurations of aca-py beyond the pre-baked demo ones; otherwise these sorts of bugs will go undetected.

I also suggest adding an issue to fix the TODO above.

Running Alice Faber Demo locally on macOS

I am following the instructions here
https://github.com/hyperledger/aries-cloudagent-python/tree/master/demo#Running-Locally

The place where I get stuck is here:

"
Alternately, you can run the ledger using von-network mechanism, or some other instance of the ledger. In those cases, you must provide the agents access to the ledger genesis file, and you must ensure that the agents have write access on that ledger.
"
Because I was able to get a von-network ledger in docker running fine I want to run the ledger that way. But its not clear how I provide agents access to the ledger genesis file in this case.??

I am assuming it's not cloudagentpy/demo/local-genesis.txt, which the agents already have access to.

When I cd to my clone of the von-network repo and run
$ ./manage start

The first messages the console prints out are:

WARNING: The GENESIS_URL variable is not set. Defaulting to a blank string.
WARNING: The ANONYMOUS variable is not set. Defaulting to a blank string.
WARNING: The LEDGER_SEED variable is not set. Defaulting to a blank string.
WARNING: The LEDGER_CACHE_PATH variable is not set. Defaulting to a blank string.
WARNING: The WEB_ANALYTICS_SCRIPT variable is not set. Defaulting to a blank string.
WARNING: The INFO_SITE_TEXT variable is not set. Defaulting to a blank string.
WARNING: The INFO_SITE_URL variable is not set. Defaulting to a blank string.

These warnings do not prevent the von-network ledger from running correctly, and the docker versions of alice and faber are apparently able to use the same genesis file.

But when running alice and faber locally, it's not clear how to have both von-network and the agents use the same genesis file - presumably something like passing --genesis-url http://localhost:9000/genesis (as in the aca-py command above), but this isn't documented.

Indeed, what gets stored where is not documented at all. It would be helpful to have some description of what is persisted on disk, and where.

Make webhook interface more robust

Webhooks should have a consistent format. Right now we simply serialize and send an internal state object.

We should design a consistent pattern, and also include related metadata such as thread information.
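As a strawman, the envelope might look something like this (all field names are illustrative, not a settled design):

    # Strawman webhook envelope: a stable outer shape carrying the topic,
    # thread metadata, and the serialized record, instead of the raw
    # internal state object.
    webhook_payload = {
        "topic": "connections",            # which record type fired the event
        "state": "active",                 # the record's new state
        "thread_id": "example-thread-id",  # related thread information
        "timestamp": "2019-08-05T12:00:00Z",
        "payload": {                       # the serialized record itself
            "connection_id": "example-connection-id",
            "their_label": "Faber Agent",
        },
    }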

Remove references to "host.docker.internal" in storage/wallet tests

In the storage and wallet tests, the docker host reference is hard-coded to "host.docker.internal", which doesn't work on Linux. Please change the reference to an environment variable (likely DOCKERHOST) and then trace back through how the tests are run to make sure the variable is set properly.
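The change presumably mirrors the pattern already used in demo/runners/support/agent.py, along these lines (the postgres URL is an assumed example of where the host name is used in the tests):

    # Resolve the docker host from the environment, falling back to the
    # current hard-coded value so macOS/Windows behavior is unchanged.
    import os

    DOCKER_HOST = os.getenv("DOCKERHOST") or "host.docker.internal"
    POSTGRES_URL = f"{DOCKER_HOST}:5432"  # example use in a test config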

Let me know how to run the tests so I can try them before the change, and again afterwards to verify the fix on Linux.
