lightningdevkit / lightning-liquidity
Currently, an LSPS2 message handler is only set up if an LSPS2 config is provided. This config is (currently) only used by an LSP, so it doesn't make sense to require the wallet user to set one up just to request an invoice.
Hi there,
First, sorry for opening PRs (#15, #16) before this issue, and thanks for this crate, which gives us a channel to talk to each other. I will list the details here before the bi-weekly meeting starts, to make sure I don't miss anything (I live in the +8 timezone, and the meeting happens at midnight for me). There is no need to hurry to respond if you are busy.
Real Use Case
An application uses LiquidityManager, which has LSPS0MessageHandler in it. Now that lsps0.listprotocols is implemented, it would be good to exercise it against the application (release build) to validate the application before delivery.
So anything under #[cfg(test)] or in a unit test will not solve this, because code behind #[cfg(test)] or in a unit test is not actually in the release-build binary. I am an integrator and do not have the duty of developing this repo; contributing test cases or unit tests does not make sense from my end and does not solve my problem of validating the release binary.
How Responses Are Received in a Blockchain Network
This is the part of this project that really confuses me. In general, responses in a blockchain network are sent out in one of two ways, and it is hard to tell which one the current project uses. Different blockchains may use different names, but the basic mechanisms are these.
Blockchain Node <-----> Queryer (possibly a Node, client, or cli, etc whatever)
For the direct response, the function handles the content of the request and returns the content of the response directly.
Blockchain Node <-----> Blockchain Node (Queryer, also a blockchain node to receive Event)
|
\-----------> Blockchain Node
\-----------> Blockchain Node
\-----------> Blockchain Node
....
In this way, the response or event is sent to every node. There are three ways a queryer can recognize the exact response to the request it sent.
The content is a status or a general thing (no additional id)
Take lsps0.listprotocols as an example. If there were a field identifying the node, as sketched below, we could use it; the current LSPS0 spec does not have one:
{
"node_id": "0xAAAAA",
"protocols": [0]
}
With a request id or receipt id
There is an id in the response, and commonly everything else is in a result field like this (the nesting is not strictly necessary). Because of the id, the queryer can listen to events and know which one is the response to the request it sent:
{
"id": "let the query know this is for him",
"result": {
"protocols": [0]
}
}
And there are two kinds of id here, depending on who generates the id.
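To make the id-based mechanism above concrete, here is a minimal sketch (all names hypothetical, not part of this crate) of how a queryer could keep track of outstanding request ids so that broadcast-style responses can be matched back to the request it sent:

```rust
use std::collections::HashMap;

// Hypothetical sketch: a queryer remembers the ids of requests it sent, so
// an incoming response carrying an id can be correlated with its request.
pub struct PendingRequests {
    // request id -> method name of the original request
    pending: HashMap<String, String>,
}

impl PendingRequests {
    pub fn new() -> Self {
        Self { pending: HashMap::new() }
    }

    // Remember a request we sent out.
    pub fn register(&mut self, id: &str, method: &str) {
        self.pending.insert(id.to_string(), method.to_string());
    }

    // On an incoming response, check whether its id matches one of ours.
    // Returns the method name if it does, consuming the pending entry.
    pub fn match_response(&mut self, id: &str) -> Option<String> {
        self.pending.remove(id)
    }
}
```

A response whose id is not in the map is simply someone else's (or a duplicate) and can be ignored.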
Current Status and Issues
- Sending a request currently only returns a Result<(), _>.
- There is no way to use the LiquidityManager to make a request to another node or to the self node.
Terminology
The application with LSPS will handle any incoming LSPS message and also send LSPS messages out, so I try to avoid the terms Client and Server: they mix two kinds of roles. There is also a difference between an application that listens to events and one that does not. So the term Node here means a participant in the blockchain network that listens for and broadcasts events.
CreateOrderRequest currently holds a version field and also an api_version field as part of the Order struct. It's unclear where the version field is coming from. It probably should be removed.
We should probably generate and track the PaymentHash and PaymentSecret that the wallet user uses in the invoice. This enables the library to help with fee verification described in #61 .
We will need to set min_value_msat, invoice_expiry_delta_secs, and min_final_cltv_expiry_delta when creating the inbound payment for the user. I'm not sure whether there are sensible defaults we should use; we could expose all three in the configuration for the wallet user.
Currently, when we receive an incoming payment we surface an OpenChannel request to the user.
However, we don't really handle the case where the user doesn't come back online.
We should at least make sure that we eventually time out stale requests, or give corresponding guidance to the user on how to handle OpenChannel when the peer is offline.
There is this concept of valid_until in the opening fee params that specifies a datetime after which this particular JIT channel request becomes invalid.
We currently are not expiring requests; we only check validity at the time the buy request is made. At a minimum we need:
In #56 we concluded the forwarded amount needs to be lower-bounded by htlc_minimum_msat/ChannelDetails::next_outbound_htlc_minimum, as currently the fee calculation could result in some 0-amount HTLCs under certain circumstances. We should then also check this in the corresponding test.
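A minimal sketch of such a lower bound (function name and shape are illustrative, not the crate's actual code): after subtracting the opening fee, the amount to forward must not fall below the channel's HTLC minimum, otherwise a 0-amount or sub-minimum HTLC could be constructed.

```rust
// Hedged sketch: compute the amount to forward after deducting the opening
// fee, refusing to forward if the result would violate the HTLC minimum.
pub fn checked_amount_to_forward(
    intercepted_amount_msat: u64,
    opening_fee_msat: u64,
    htlc_minimum_msat: u64,
) -> Option<u64> {
    // checked_sub guards against the fee exceeding the intercepted amount.
    let amount = intercepted_amount_msat.checked_sub(opening_fee_msat)?;
    if amount < htlc_minimum_msat {
        // Refuse to forward rather than emit a sub-minimum (or 0-amount) HTLC.
        None
    } else {
        Some(amount)
    }
}
```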
It is nice that this commit came in: the LiquidityManager now depends on high-level traits rather than on a lot of low-level things. However, we ran into a problem after this.
The LiquidityManager now needs APeerManager as a trait dependency. PeerManager already implements APeerManager, but PeerManager also needs a CustomMessageHandler. When using the LiquidityManager as the CustomMessageHandler, we run into a cyclic trait dependency, which is not allowed in Rust:
PeerManager <- LiquidityManager <- CustomMessageHandler <- PeerManager
Do you have a good suggestion for how to use this crate? Many thanks for considering our use case.
The client specifies the optional payment_size_msat at the start of the flow, but we don't learn about the min_payment_size_msat and max_payment_size_msat the LSP supports until we receive the GetInfoResponse. We currently do not check whether the provided payment size is within these bounds.
I'm not sure about the best way to handle it, though. I guess instead of surfacing the GetInfoResponse event to the end-user we would surface some kind of error event?
Thoughts?
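Whatever surfacing mechanism we settle on, the check itself is straightforward. A hedged sketch (function name is hypothetical) of the missing client-side validation against the bounds advertised in the GetInfoResponse:

```rust
// Illustrative sketch: validate the payment_size_msat given at the start of
// the flow against the min/max the LSP advertises in its GetInfoResponse.
pub fn check_payment_size(
    payment_size_msat: Option<u64>,
    min_payment_size_msat: u64,
    max_payment_size_msat: u64,
) -> Result<(), String> {
    if let Some(size) = payment_size_msat {
        if size < min_payment_size_msat {
            return Err(format!(
                "payment size {} is below the LSP minimum {}",
                size, min_payment_size_msat
            ));
        }
        if size > max_payment_size_msat {
            return Err(format!(
                "payment size {} is above the LSP maximum {}",
                size, max_payment_size_msat
            ));
        }
    }
    // No payment size given: nothing to validate at this stage.
    Ok(())
}
```

The Err value here could feed whatever error event we decide to surface instead of (or alongside) the GetInfoResponse event.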
Once we receive the ChannelReady event from LDK, we iterate over all of the intercepted HTLCs we are holding, remove the fee, and then attempt to forward each one individually.
It's possible for forwards to fail but we will not receive any kind of signal that this happened. As an LSP we would want to know how much fee is still owed for the channel we just opened. We could then continue to take fee from future HTLCs until we have collected what we were owed.
We might also want to close the channel should the opening payment fail for whatever reason.
@tnull was going to look into how this could work with LDK, as it doesn't appear to be possible to extract this information from LDK as it stands. We might be able to use the HTLCHandlingFailed event in combination with next-hop and payment_hash information. This still wouldn't help us identify which parts of an MPP failed; we would need a way to tie the InterceptId to the failed HTLC event.
There are a few instances where some fields are optional and/or may not be present, and we currently seem to lean on serde ignoring unrecognized fields. We should try to make these fields explicitly optional if possible. Some examples would be OptionsSupported or OrderPayment::onchain_payment; however, I expect the latter has to be made optional anyway.
E.g., the spec says: "The LSP SHOULD provide a valid_until time that is at least 10 minutes from the current time."
We should check that the user input adheres to this and similar constraints, and error out otherwise.
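For the quoted constraint, such a check could look like this sketch (function name is illustrative, not existing crate API):

```rust
use std::time::{Duration, SystemTime};

// Hedged sketch: enforce the spec's "valid_until SHOULD be at least 10
// minutes from the current time" constraint before accepting user input.
pub fn check_valid_until(valid_until: SystemTime) -> Result<(), &'static str> {
    let min_validity = SystemTime::now() + Duration::from_secs(10 * 60);
    if valid_until < min_validity {
        Err("valid_until must be at least 10 minutes in the future")
    } else {
        Ok(())
    }
}
```

Similar small validators could cover the other MUST/SHOULD constraints as they are identified.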
Currently, the LSPS0/LSPS1/LSPS2 message handlers implement both 'client-side' and 'server-side' behavior. The corresponding event types therefore also hold all events. This is particularly painful for 'client-side' users that should not be bothered by most of these events and the corresponding API complexity.
We should seriously consider splitting this up now that our LiquidityManager won't be the central interface for everything but will only allow access to the underlying LSPSX message handlers. Since we modularized the interface in #58, we could now also consider splitting the LSPS1 and LSPS2 parts into client- and server-side objects that can all be held by the LiquidityManager.
We should move RawOpeningFeeParameters::into_opening_fee_params to a util next to (and DRYing up) is_valid_opening_fee.
LSPS1's OrderPayment gives the bolt11_invoice field as a String. We probably should de/serialize this directly from/to Bolt11Invoice to make users' lives a bit easier.
The same goes for the onchain address types, the outpoint, and some of the datetimes, which are still Strings.
In #58 we added most of the docs missing from LSPS1.
However, we also left some TODOs, in particular in lsps1/event.rs. Moreover, for now we hid all pub fns of LSPS1MessageHandler, for them to be exposed and documented as needed at a later date.
We should make sure to add the missing docs before removing the cfg guard.
In the case of MPP, if we intercept an HTLC but still do not have enough collected to cover the payment and fee we need to keep waiting for more parts. Currently the flow just silently collects the received HTLC but it would be nice to at least log that we received another part.
We decided to punt on logging for the moment because how it works will depend on how the restructuring resolves.
After #58 we'll have LiquidityClientConfig and LiquidityProviderConfig to allow enabling/configuring client-/server-side functionality for LiquidityManager. However, it might be a bit nicer to a) enable client-side functionality by default and b) allow enabling server-side functionality, possibly via a builder pattern?
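As a rough illustration of the builder idea (all names and the returned shape are hypothetical, not the crate's API): client-side functionality is on by default, and server-side functionality is opted into explicitly.

```rust
// Hypothetical builder sketch: client side enabled by default, provider
// (server) side enabled via an explicit opt-in method.
#[derive(Default)]
pub struct LiquidityManagerBuilder {
    provider_enabled: bool,
}

impl LiquidityManagerBuilder {
    pub fn new() -> Self {
        Self::default()
    }

    // Opt into server-side (LSP) functionality; the client side stays enabled.
    pub fn with_provider(mut self) -> Self {
        self.provider_enabled = true;
        self
    }

    // Stand-in for constructing the manager; returns the effective
    // (client_enabled, provider_enabled) flags for illustration.
    pub fn build(self) -> (bool, bool) {
        (true, self.provider_enabled)
    }
}
```

This keeps the common wallet-user path to a single `LiquidityManagerBuilder::new().build()` call while the LSP path stays discoverable.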
The spec mentions that the client must validate that the promise is not longer than 512 bytes, to prevent the LSP from burdening it with a lot of data. We currently do not do any validation around the promise on the client side.
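The missing check itself is small; a sketch (function name is illustrative):

```rust
// Minimal sketch of the client-side check: per the spec, reject promises
// longer than 512 bytes so a misbehaving LSP cannot burden the client
// with arbitrary amounts of data.
const MAX_PROMISE_LEN: usize = 512;

pub fn validate_promise(promise: &str) -> Result<(), &'static str> {
    if promise.len() > MAX_PROMISE_LEN {
        Err("promise exceeds the 512-byte maximum")
    } else {
        Ok(())
    }
}
```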
As discussed out-of-band, we'll move to a crate::lspsX module format after all.
We should hence restructure the project accordingly once #20 (and possibly follow-ups) lands.
As discussed out of band it would be very useful if we could further modularize and expose the codebase at each of the individual levels.
For example, we want to allow third parties to reuse our message types/serialization, or just a specific LSPS implementation, without the need to use the full LiquidityManager and even without relying on LDK-specific *Manager types. To this end we'll need to make sure that not too many LDK types (in particular ChannelManager/PeerManager) leak into the 'inner' part of the LSPS implementations.
A wallet user should verify that the counterparty_skimmed_fee_msat is the agreed-upon amount before claiming a payment from the LSP. This feels like something the library should be able to help them do: something similar to htlc_intercepted, where the end-user can pass the details of a PaymentClaimable event to the library and we respond with whether or not we think they should claim it.
We can't do this today because we don't generate the invoice (or at a minimum the PaymentHash) that they should use. Maybe a simpler first step is to use the channel manager and add a PaymentHash/PaymentSecret they should use in the invoice to InvoiceGenerationReady.
Depends on #60
LSPS1 currently uses id or channel_id variable names in different places.
For consistency with LSPS2 and LDK in general, we should make sure to name it user_channel_id everywhere.
The spec was slightly changed, we need to address these:
As discussed out-of-band and in #1, we'll rename the crate lightning-liquidity to a) give it a name that doesn't indicate it just hosts the client side of things and b) prepare for eventually upstreaming it to rust-lightning by already adopting a name in accordance with the naming scheme used there.
The spec mentions the ability for an LSP to add additional information into the promise. It also talks about the intentional vagueness in how this should work. Perhaps we want to design this in a way similar to LDK's KeysManager where we provide a reasonable default implementation but allow for flexibility should the LSP need it.
This feels really low priority at the moment but just creating issues for things in the spec we are currently missing.
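One way to design such an extension point, loosely analogous to how LDK's KeysManager provides a default behind a trait (everything below is a hypothetical sketch, not existing API):

```rust
// Hypothetical trait sketch for a pluggable promise scheme: a default
// implementation could cover most LSPs while the trait leaves room for an
// LSP to embed additional information in the promise.
pub trait PromiseProvider {
    fn generate_promise(&self, fee_params: &[u8]) -> Vec<u8>;
    fn verify_promise(&self, fee_params: &[u8], promise: &[u8]) -> bool;
}

// A trivially insecure default used only to illustrate the trait's shape;
// a real implementation would use an HMAC over the fee parameters.
pub struct EchoPromiseProvider;

impl PromiseProvider for EchoPromiseProvider {
    fn generate_promise(&self, fee_params: &[u8]) -> Vec<u8> {
        fee_params.to_vec()
    }
    fn verify_promise(&self, fee_params: &[u8], promise: &[u8]) -> bool {
        promise == fee_params
    }
}
```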
If the scid-to-peer map has no entry for an scid, we currently just do nothing and return Ok(()). This means the intercepted HTLC will never be failed (or forwarded) until LDK detects it's close to expiry. The library should fail it for the user here.
If the peer is found but there's no peer_state, then we return an Err to the user but do not automatically fail the HTLC for them.
Finally, if we do have peer state but no outbound JIT channel is found, we will end up returning Ok(()) and the HTLC will remain stuck yet again.
We need to properly fail and error in all of these cases.
There is no fundamental reason why this crate should always require std. We should at least try to add no_std support.
As BitcoinAndLightningLayerSpecs/lsp#57 pointed out, an LSP might not always want to publish LSPS support in the node announcement. We should make this configurable via a simple flag.
In #86 we denied warnings in CI generally, but allowed them explicitly for LSPS1, as the implementation is not finished and currently hidden behind a cfg flag anyway.
We should however make sure to disallow warnings again and fix any remaining ones before we drop the cfg flag.
In #53 we made the crate generally no-std compatible, with one small exception: we now don't have an immediate way to verify that the OpeningFeeParams we receive as an LSP are still valid. Currently, we just don't check expiry in no-std builds; however, we should find a way to do this in the future, e.g., by letting the user give us an implementation of a trait TimeSource or similar.
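Such a user-provided clock could look roughly like this (trait and names are illustrative; in a real no-std build the std-backed default would itself be behind a feature gate):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Hedged sketch: a user-provided time source so expiry of OpeningFeeParams
// can be checked even where std's system clock is unavailable.
pub trait TimeSource {
    fn duration_since_epoch(&self) -> Duration;
}

// In std builds, the system clock is the obvious default implementation.
pub struct SystemClock;

impl TimeSource for SystemClock {
    fn duration_since_epoch(&self) -> Duration {
        SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("system time before Unix epoch")
    }
}

// Expiry check expressed against the trait rather than the system clock.
pub fn is_expired<T: TimeSource>(clock: &T, valid_until_secs: u64) -> bool {
    clock.duration_since_epoch().as_secs() > valid_until_secs
}
```

A no-std user could then supply a clock backed by whatever time facility their platform has.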
Currently, we would log and fail if a buy request fails, e.g., due to an unsupported client_trust_lsp setting or otherwise.
We should surface this cleverly to the user, e.g., through one or more events indicating that, and why, a buy request failed.
After integrating lightning-liquidity, I believe the user will want a simple LSPS client to check that the integration works. The clients are public, but they cannot really be used by any user because they are sealed.
For example, we have a public struct LSPS2ClientHandler, but the fields inside the struct are private and the new function is private (only public within the crate). That is, the EventQueue is private but it is required by LSPS2ClientHandler.
For now I created the following submodules in the crate:
- transport for LSPS0
- channel_request for LSPS1
- jit_channel for LSPS3 (possibly LSPS2 soon)
I however wonder if we should follow the namespacing introduced in the LSPSs and structure the crate along the lines of crate::lsps0::.. etc.
A question arising from this would then be how to deal with datatypes shared between LSPSs...
The LSP's response to a GetOrderRequest is the same object as the response to CreateOrder. I'm a bit dubious as to why we require a separate GetOrderResponse object.
Currently this crate surfaces a user_channel_id to the user in the OpenChannel event, which they must use as the user_channel_id when they create the channel. We are currently passing the SCID as the user_channel_id, because there's no way for the crate to map the channel to the SCID used during the ChannelReady event.
Not only is it a bit strange for our crate to dictate what an LDK user's user_channel_id must be, but it also makes it difficult for them to track an entire flow from buy request to payment forwarded.
I think a better API would be one where the end-user can provide a user_channel_id to invoice_parameters_generated, and our crate will resurface it during OpenChannel and keep a mapping from user_channel_id => SCID, so that when the ChannelReady event comes it can figure out the SCID that was used.
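The bookkeeping that proposal implies is essentially a small map (sketch only; the struct and method names are hypothetical):

```rust
use std::collections::HashMap;

// Illustrative sketch of the proposed direction: the user supplies a
// user_channel_id, and the crate keeps the user_channel_id -> SCID mapping
// so the SCID can be recovered when the ChannelReady event fires.
#[derive(Default)]
pub struct JitChannelMap {
    scid_by_user_channel_id: HashMap<u128, u64>,
}

impl JitChannelMap {
    // Called when the user provides their user_channel_id, e.g. alongside
    // the generated invoice parameters.
    pub fn register(&mut self, user_channel_id: u128, scid: u64) {
        self.scid_by_user_channel_id.insert(user_channel_id, scid);
    }

    // Called on ChannelReady: look up which intercept SCID this channel used.
    pub fn scid_for(&self, user_channel_id: u128) -> Option<u64> {
        self.scid_by_user_channel_id.get(&user_channel_id).copied()
    }
}
```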
It seems we'll go ahead with the NO VERSIONING proposal (BitcoinAndLightningLayerSpecs/lsp#63), which means we need to drop versioning support in our LSPS2 implementation (as soon as it's actually removed from the spec).
JITChannelsConfig should be renamed to match InboundJITChannelConfig, i.e., OutboundJITChannelConfig or similar, replacing JITChannel with LSPS2.
During the project restructuring, it probably should also be moved to the lsps2 module.
Currently, we only have some test coverage of message serialization/deserialization.
We need to immensely increase this coverage and should try to check that we adhere to every MUST condition mentioned in the spec, and that non-adherence results in an error.
We want to explore using proptest to give us better test coverage of the many parameters the LSP (client) could configure.
Good first candidates to benefit from it would be our testing of calculate_amount_to_forward and compute_opening_fee.
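For context on why property-based testing pays off here, a hedged sketch of an opening-fee computation in the spirit of compute_opening_fee (the crate's actual formula may differ): a proportional parts-per-million fee rounded up, floored at min_fee_msat, with overflow checks, since proptest would exercise extreme parameter values.

```rust
// Hedged sketch, not the crate's actual code: ceil(size * ppm / 1_000_000),
// floored at min_fee_msat. Overflow returns None rather than panicking.
pub fn compute_opening_fee_sketch(
    payment_size_msat: u64,
    min_fee_msat: u64,
    proportional_ppm: u64,
) -> Option<u64> {
    let proportional_fee = payment_size_msat
        .checked_mul(proportional_ppm)?
        .checked_add(999_999)? // round up on the following division
        / 1_000_000;
    Some(proportional_fee.max(min_fee_msat))
}
```

Properties worth generating inputs for: the result is never below min_fee_msat, and overflow never panics.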
The enqueue function of the public trait MessageQueue takes an immutable &self receiver. In general, though, we do mutate data in the queue when we put things into it.
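This tension is typically resolved with interior mutability; a sketch (trait shape and names are illustrative, not necessarily the crate's) of an implementation mutating behind an immutable receiver:

```rust
use std::sync::Mutex;

// Illustrative sketch: enqueue takes &self, so the implementation reaches
// for interior mutability (a Mutex here) to mutate its internal storage.
pub trait MessageQueue {
    fn enqueue(&self, msg: String);
}

pub struct VecMessageQueue {
    inner: Mutex<Vec<String>>,
}

impl VecMessageQueue {
    pub fn new() -> Self {
        Self { inner: Mutex::new(Vec::new()) }
    }

    pub fn len(&self) -> usize {
        self.inner.lock().unwrap().len()
    }
}

impl MessageQueue for VecMessageQueue {
    fn enqueue(&self, msg: String) {
        // Mutation behind an immutable receiver via the Mutex guard.
        self.inner.lock().unwrap().push(msg);
    }
}
```

Keeping `&self` in the trait lets the queue be shared (e.g. behind an Arc) without forcing callers to hold exclusive references; the cost is that every implementation needs its own synchronization.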
As of LSPS1, the OrderState should only have three states: CREATED/COMPLETED/FAILED. It seems Requested is superfluous.
Currently we don't enforce any ordering and just return whatever the LSP provides. The spec says these need to be ordered by fee rates. Perhaps this means implementing Ord/PartialOrd and then using some built-in sorting algorithm?
From the spec:
The LSP, when ordering the opening_fee_params_menu array, MUST order by the following rules:
The 0th item MAY have any parameters.
Each succeeding item MUST, compared to the previous item, obey any one of the following:
- Have a larger min_fee_msat, and equal proportional.
- Have a larger proportional, and equal min_fee_msat.
- Have a larger min_fee_msat, AND larger proportional.
Rationale: This simplifies how the client expresses the expected opening fee to the user, as it assures the client that the order of the fee rates is strictly increasing.
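The quoted ordering rules translate into a simple pairwise check; a sketch (struct and function names are illustrative) of validating a received opening_fee_params_menu on the client side:

```rust
// Minimal stand-in for the relevant fields of an opening-fee-params entry.
pub struct OpeningFeeParams {
    pub min_fee_msat: u64,
    pub proportional: u32,
}

// Sketch of the spec's ordering rule: each succeeding item must have a
// larger min_fee_msat with equal proportional, a larger proportional with
// equal min_fee_msat, or both larger. Equivalently: one field strictly
// increases while the other does not decrease.
pub fn is_valid_menu_order(menu: &[OpeningFeeParams]) -> bool {
    menu.windows(2).all(|pair| {
        let (a, b) = (&pair[0], &pair[1]);
        (b.min_fee_msat > a.min_fee_msat && b.proportional >= a.proportional)
            || (b.proportional > a.proportional && b.min_fee_msat >= a.min_fee_msat)
    })
}
```

Rather than sorting the LSP's menu ourselves, rejecting a mis-ordered menu may be the safer reading of the spec, since the ordering is a MUST on the LSP side.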