
synctext avatar synctext commented on May 12, 2024 2

Database support for our axiomatic principles:

  • past actions are known, recorded tamper-proof, shared, and widely available.
  • the present is independent, unwritten, and uncertain

For 2023 I hope our lab is experienced enough to build a new (self-organising) distributed database paradigm @kozlovsky. Something on top of Pony, with integral reputation/trust tracking, something like Trustchain for each content insert, everybody running a full node, and uncompromisingly permissionless. Like the perfect blend of Bitcoin with an ORM. The reputation function of content is used to determine whether information is actively suppressed or actively spread, Trustworthy_Gossip(suppress, delete, store, collect, spread); see trustworthy gossip.
An inspirational example appeared online this week; it is an interesting example of what people are trying. It is good to identify the weakness: this will never scale beyond a demo, because it cannot handle spam, fraud, and fake identities. Fake identities are a huge problem; people do not always seem to grasp the scale of these attacks. More fake accounts sign up for Facebook each year than real humans do!


qstokkink avatar qstokkink commented on May 12, 2024 2

To avoid making the same mistakes twice, two short post-mortems of our previous work:

  • Dispersy store_update_forward principle.
    Status: retired.
    Reason: by integrating the "store, update, forward" principle into all packets, Dispersy is 20 times slower at handling packets than IPv8, which does not integrate this principle into its network layer. Secondarily, it became nigh-impossible to escape the principle and build optimized synchronization primitives (e.g., TTL-based packets for MultiChain).
    Lesson: store packets sparingly and purposefully as a result of overlay logic, i.e., do not integrate generic database synchronization into the network layer.
  • IPv8's store, update, forward, vote, purge, delete paradigm.
    Status: never used.
    Reason: so far, existing communities have never fit very well to this principle, requiring boilerplate code. More appropriate (i.e. smaller and faster) ways to synchronize information are usually available (e.g., Trustchain forwarding, GigaChannel synchronization and VSIDS-based channel votes).
    Lesson: a general approach for gossiping (thus far) has never been as useful for Tribler in comparison with approaches tailored to a specific use case.


qstokkink avatar qstokkink commented on May 12, 2024 1

What are your thoughts on making reputation tradable?

Given my current belief system, I would frame this as follows:

  1. You account a transfer of currency for bandwidth tokens.
  2. This is interpreted as useful work by others (how much currency equates to how much bandwidth should be determined somehow -- this is outside of my current scope).
  3. As a result, your reputation increases.

So, I would say you do not trade reputation directly: instead, you trade currency in such a way that other honest participants increase the reputation score they hold of you. It all depends on the conversion mechanism from heterogeneous units of accounting to homogeneous units of work[1]. Whether it is fair to say this corresponds to meeting the buyer's price, I don't know -- it is probably subjective: paying your own Sybil is most likely not that impressive to others.

In conclusion, in a roundabout way:

Does this make sense?

Yes.

Lastly:

Are you also considering the allocation policy? Or is the scope limited to determining reputation of others?

Given my current 6 page limit, this will be limited to the latter: determining reputation of others.

[1] For example, you may define 10 Mb download equals -10 work, 10 Mb upload equals +10 work, 1 Bitcoin equals +1000 work. This will probably also fluctuate continuously due to complicating temporal factors, like current market prices or the local legal ramifications of uploading.
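For example, a minimal sketch of such a conversion (the rates below just follow the illustrative numbers in this footnote and would fluctuate in practice):

```python
# Hypothetical conversion from heterogeneous accounting units to one
# homogeneous work score, using the illustrative rates from footnote [1].

WORK_RATES = {
    "mb_downloaded": -1.0,   # 10 Mb download -> -10 work
    "mb_uploaded": +1.0,     # 10 Mb upload   -> +10 work
    "btc_paid": +1000.0,     # 1 Bitcoin      -> +1000 work
}

def work_score(events):
    """Sum heterogeneous accounting events into a single work value."""
    return sum(WORK_RATES[unit] * amount for unit, amount in events)

# Example: 100 Mb uploaded, 40 Mb downloaded, 0.05 Bitcoin paid.
print(work_score([("mb_uploaded", 100), ("mb_downloaded", 40), ("btc_paid", 0.05)]))
# -> 100 - 40 + 50 = 110.0
```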


qstokkink avatar qstokkink commented on May 12, 2024 1

Update: the #31 (comment) paper is now published. https://doi.org/10.1145/3428662.3428790


synctext avatar synctext commented on May 12, 2024

Churn and NAT traversal are critical concerns. To address them, all peers keep open connections with several neighbors. We need an algorithm to traverse such live edges; a rough sketch follows below the figure.
[figure: Bartercast edges online/offline]
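One possible shape for such a traversal, sketched with placeholder data structures (none of these names are existing Tribler/IPv8 APIs): a random walk that only ever steps across connections that are currently held open, so churn and NAT traversal are sidestepped by construction.

```python
import random

# Sketch of a random walk over "live edges": the open connections a peer
# currently maintains to its barter partners. `live_edges` is a placeholder
# for whatever bookkeeping the keep-alive mechanism actually provides.

def live_edge_walk(live_edges, start, steps, rng=random):
    """Return the peers visited by a random walk across open connections."""
    path = [start]
    current = start
    for _ in range(steps):
        neighbours = live_edges.get(current, [])
        if not neighbours:          # dead end: restart from the origin peer
            current = start
        else:
            current = rng.choice(neighbours)
        path.append(current)
    return path

# Toy example with three peers that keep connections open to each other.
edges = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(live_edge_walk(edges, "A", steps=5))
```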


synctext avatar synctext commented on May 12, 2024

This will be moved earlier in the roadmap. Multi-chain will evolve with numerous small steps into this envisioned perfect system.

  • append-only datastructure
  • payout to seeders
  • isolated Gumby experiment with various peers
  • visualization of the multi-chain graph
  • crawler for dataset collection
  • stress experiment generating a million multi-chain records.
  • deploy inside Tribler
  • payout to hidden seeders and two intermediary proxies (1 private GB : 5 GB of network load)
  • ratio tracking inside GUI
  • keep-alive to top-N barter partners
  • random walk using edge traversal
  • peers start collecting barter records
  • hidden services biased towards reliable bandwidth providers
  • ratio enforcement on proxy requests and hidden seeding

If we go beyond one million users it is possible that the database becomes a bottleneck. We will first analyse, modify, and tune the algorithms. If this proves insufficient, we will evaluate libraries such as: an overkill approach for billions of edges, or a Python graph database (inactive).


synctext avatar synctext commented on May 12, 2024

branching concept
The two main issues with the current multi-chain design are the exclusive nature of signatures and the need for a block size. We want to get rid of 1) bloat and 2) blocking.

First is the simple addition of compression. Consider the following use case: two peers exchange 1 GByte of data. Multichain gets very bloated if every 25 MByte requires additional signatures and another sequence number in a new record. We can compress entries by allowing the two participants to sign the same record, with the same sequence number, but with a higher amount (a minimal sketch follows below).
Block size matters less when overwriting is allowed, as previous records are replaced. This sadly removes elegance and simplicity, as signing information with the same sequence number no longer constitutes lying.
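A minimal sketch of this compression idea, assuming a record can be keyed by the pair of participants plus the sequence number (the record format and names below are illustrative, not the deployed format): re-signing the same key with a higher cumulative amount replaces the earlier record instead of appending a new one.

```python
# Sketch: instead of appending a new record per 25 MByte chunk, both parties
# re-sign the record identified by (participants, sequence number) with a
# higher cumulative amount; the newer total replaces the older one.

records = {}  # (peer_a, peer_b, seq) -> {"up": MB, "down": MB}

def upsert_record(peer_a, peer_b, seq, up, down):
    key = (peer_a, peer_b, seq)
    current = records.get(key)
    # Only accept monotonically growing totals for the same key.
    if current is None or (up >= current["up"] and down >= current["down"]):
        records[key] = {"up": up, "down": down}
    return records[key]

upsert_record("A", "B", 21, up=25, down=0)    # first 25 MByte signed
upsert_record("A", "B", 21, up=300, down=0)   # later: cumulative total, same record
print(records[("A", "B", 21)])                # {'up': 300, 'down': 0}
```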

Non-blocking.
Sequence numbers are no longer unique in this design modification. Each peer keeps a sequence number, but for concurrent events it is re-used. Instead of calling it a double spending attack, we call it "branch racing". Only re-connected branches are valid. We still want to preserve the single-chain per peer design principle.
[photo: 20160127_110952]

Shown above are three peers A, B, and C with their Multi-chain records. Their multichains are depicted starting at sequence numbers 21, 3, and 8 respectively. For peers A and B, two records are replaced by a final record of 300 / 0 units exchanged; the final record between A and C is 64 / 8 units. The key point is that the final sequence number for the B-C record is 5, merging it back into a single main chain.

Requirement for attack resilience: merging back does not require anybody's cooperation. So use a weld node: the point where the total up/down is updated and a Merkle hash of both the main branch and the side branch is taken. It is signed only by the owner.


pimotte avatar pimotte commented on May 12, 2024

In my opinion, we should avoid branching at all costs.

The problem we are solving with branching is two-fold: First the blocking issues on timeouts, second the infinite growth of the datastructure.

The second problem can be easily resolved by having some sort of secondary archive multichain. In this multichain, you are allowed to archive a transaction older than X period of time. This transaction will be your new genesis block and others are not allowed to ask you for earlier transactions (so you can delete them).

The first problem can be resolved by scaling the moment of the signature request with the available bandwidth. Incoming signature requests will never cause a problem for you, only for the other party, if you currently have an unanswered signature request outstanding. Hence, a client should only ask someone else for a signature if the probability of receiving another request before receiving a response is sufficiently low. (This can be estimated using simple probability/queuing theory.)
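One way to make the "sufficiently low probability" test concrete, under the simplifying assumption that incoming signature requests arrive as a Poisson process (a sketch with made-up names, not the deployed policy):

```python
import math

# If incoming signature requests arrive as a Poisson process with rate
# `requests_per_second`, the probability that at least one arrives before our
# own outstanding request is answered (after `round_trip_seconds`) is
# 1 - exp(-rate * round_trip). Only send a request when that probability is
# below a chosen threshold.

def should_request_signature(requests_per_second, round_trip_seconds, threshold=0.05):
    p_collision = 1.0 - math.exp(-requests_per_second * round_trip_seconds)
    return p_collision < threshold

print(should_request_signature(0.01, 2.0))  # quiet node: True, go ahead and request
print(should_request_signature(5.0, 2.0))   # busy node: False, keep responding instead
```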


pimveldhuisen avatar pimveldhuisen commented on May 12, 2024

On archiving:
Does this not make it possible to lie about your history before time X? Or do we intend to base reputation only on the period from now to X? This might be viable if X is sufficiently large, say a few years, but a long history might be useful for Sybil resilience.

On request timing:
A very elegant idea, but I think it is not guaranteed to work. What if there never comes a time when the probability of receiving another request before receiving a response is sufficiently low?
If a client interacts with N peers at the same time, and these peers are replaced by new peers at a rate of n per minute, the client will need to sign at least n blocks per minute, even if it never signs while a transfer is still ongoing. If the signing round trip takes t minutes, n is limited by 1/t. Let's say a round trip takes two seconds; then n is 30 new peers per minute. There is a sort of load balancing in making busy nodes do the relatively cheap responding and the quiet nodes the relatively expensive requesting, but this limit of n peers per minute is still there on average. Question: is this a viable limit?


pimveldhuisen avatar pimveldhuisen commented on May 12, 2024

Alternative concept: half-signed only, or optimistic requesting

Another way of going around the blocking problem is to base the chain on only half signed blocks.

  1. You note down your interaction in a block and sign it.
  2. You can then base your next block on the hash of only the block with your signature. This means there is no blocking.
  3. You request your peer to sign the same interaction in his chain. For him too, there is no blocking.
  4. The peer sends his block back to you, giving you irrefutable evidence of the interaction.

The peer can refuse to create the corresponding block on his chain, but this is no different from him refusing to sign your block. You can add fake blocks to your own chain, but without evidence they are not valid.
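A rough sketch of this half-signed structure (the field names and signing function below are made up for illustration): each new block references only the hash of the previous block carrying your own signature, so chain growth never blocks on the counterparty.

```python
import hashlib
import json

# Sketch of "optimistic requesting": a peer extends its own chain using only
# the hash of its previously self-signed half-block; the counterparty's
# signature is collected afterwards as evidence and never blocks the chain.

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def toy_sign(block):
    # Stand-in for a real signature over the block contents.
    return "sig(" + block_hash(block)[:8] + ")"

def new_half_block(previous, owner, partner, up, down):
    block = {
        "owner": owner,
        "partner": partner,
        "up": up,
        "down": down,
        "prev_hash": block_hash(previous) if previous else None,
    }
    block["owner_signature"] = toy_sign(block)  # step 1: sign your half immediately
    block["partner_signature"] = None           # step 4: filled in when the partner answers
    return block

genesis = new_half_block(None, "A", "B", up=10, down=0)
second = new_half_block(genesis, "A", "C", up=5, down=5)  # no waiting on B's signature
print(second["prev_hash"])
```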


synctext avatar synctext commented on May 12, 2024

ToDo: merge major/minor seq idea, half-signed+revert.


Captain-Coder avatar Captain-Coder commented on May 12, 2024

In a session with @synctext, we discussed the half-signed only and branching options. He was especially concerned about keeping the vertical compression (updating transactions) and about the difficulty of reusing sequence numbers. This could be solved by introducing a minor sequence number.

[photo: img_20160204_154521]

(Alternatively you could accept a gap in the sequence numbering, as long as it is not decreasing. It's the hashes that provide the real linkage anyway. Same result, you never use a sequence number twice.)
But what do you do if one of the partners suddenly goes offline (or refuses to sign your next block)? Let's say C refuses to sign the last block in this example. @synctext noted that it would be possible to have A update its last block to refer to the last block of the B-C interaction, resulting in this graph:

[photo: img_20160204_155238]

This saves you from having to forgo all the credit from interaction B-C. Note that A will only update the block if it is still the head of A's chain, for obvious reasons. If A is not able to cooperate, then you have to accept that one of the branches cannot continue; probably the one with the smallest credit.

I'm thinking that nodes will have to be fair and open about all this parallel stuff and allow "unmergeable" transactions and any replaced/overridden blocks to be harvested by a crawler. Because what if C secretly decides to use 3.1 or 4.0? It would look bad if B denied any knowledge of this block.

Also, this would still combine with PimV.'s half-signed-by-default proposal.


synctext avatar synctext commented on May 12, 2024

design dream 1: if you do not follow the protocol, you never get any of the benefits.
design dream 2: protocol violations by design result in an irrefutable "proof-of-misbehavior".
design dream 3: double spending attacks are easily detected afterwards (prevention==costly).

Branching: signing two certificates with the same sequence number (major+minor) constitutes a double spending attack and leads to instant banning. An elegant and easy mechanism. This is the key reason for sequence numbers: they become a coin that can only be spent once, or represent one truth. Append-only ensures that any record freezes past behavior. A single record represents an exact position in the chain, which is impossible without sequence numbers.
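A minimal sketch of that "instant banning" check (record format is illustrative): a crawler that observes two different signed records from the same owner with the same (major, minor) sequence number holds irrefutable proof-of-misbehavior.

```python
# Sketch of double-spend detection: within one owner's chain, a (major, minor)
# sequence number may only ever be bound to a single record hash. Two distinct
# records sharing that pair are proof-of-misbehavior.

def find_double_spends(records):
    """records: iterable of dicts with 'owner', 'major', 'minor' and 'hash'."""
    seen = {}           # (owner, major, minor) -> hash of the first record seen
    offenders = set()
    for record in records:
        key = (record["owner"], record["major"], record["minor"])
        if key in seen and seen[key] != record["hash"]:
            offenders.add(record["owner"])   # same position, two different truths
        seen.setdefault(key, record["hash"])
    return offenders

crawled = [
    {"owner": "C", "major": 4, "minor": 0, "hash": "aa11"},
    {"owner": "C", "major": 4, "minor": 0, "hash": "bb22"},  # conflicting record
    {"owner": "B", "major": 5, "minor": 0, "hash": "cc33"},
]
print(find_double_spends(crawled))  # {'C'}
```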


pimotte avatar pimotte commented on May 12, 2024

The whole problem with having any branching option is that "design dream 2" mentioned above is lost. The fact that a client has multiple branches means that they cannot be forced to rebase or merge them back into their main branch. This means that there exists a "side-branch hiding attack", in which a client cannot be forced to divulge that a side branch exists. To make up for this, there would need to be some mechanism that forces users to merge or rebase branches. However, this makes proofs of misbehavior much more convoluted.

My proposal for the compression is to be fairly lax with sending signing requests, scaling blocks with both the number of interactions with that person and the bandwidth of the originator.

If, during the experimental phase, this still turns out to yield too much growth, or too many unsigned/dropped transactions because of the block size, I would be in favour of exploring a secondary data structure where we consolidate some of the information. But I would like to see whether compression is an actual problem before moving towards those kinds of solutions.


pimveldhuisen avatar pimveldhuisen commented on May 12, 2024

For the alpha version of multichain:

  • Do not use branching: the costs seem to outweigh the benefits
  • Non-blocking multichain: base your chain on the hash of only your fields. Collect the other half as evidence
  • Sign a block for the remaining amount on the closure of the circuit


pimveldhuisen avatar pimveldhuisen commented on May 12, 2024

Another major flaw that needs fixing before the alpha:

  • Currently, when a peer receives a block it will sign it, but it does not subtract the included amounts from the pending totals. Combined with the sign-anything policy, this means every MB will be signed twice (a sketch of the fix follows below).
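A sketch of one possible fix (variable names are illustrative, not the Tribler code): when counter-signing a received block, deduct the signed amounts from the pending totals, so the same megabytes cannot be claimed a second time.

```python
# Sketch of the fix for the double-counting flaw: once a received block is
# counter-signed, its amounts are removed from the pending totals, so the same
# megabytes cannot later be claimed again in a block we initiate ourselves.

pending = {"up": 0, "down": 0}   # not-yet-signed totals towards one counterparty

def record_transfer(up, down):
    pending["up"] += up
    pending["down"] += down

def countersign_block(block):
    # Deduct what the counterparty already accounted for before signing.
    pending["up"] = max(0, pending["up"] - block["down"])    # their down is our up
    pending["down"] = max(0, pending["down"] - block["up"])  # their up is our down
    return {"signed": True, **block}

record_transfer(up=100, down=0)                 # we uploaded 100 MB
countersign_block({"up": 0, "down": 100})       # counterparty accounts those 100 MB
print(pending)                                  # {'up': 0, 'down': 0}: nothing to sign twice
```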


synctext avatar synctext commented on May 12, 2024

future approach:

  • split mechanism and policy
  • tunnel community gives stats to a TrustModule
  • Multichain is a mechanism without a scheduler
  • TrustModule decides to request a signature


synctext avatar synctext commented on May 12, 2024

As of today we have a design freeze of multi-chain. It is far from feature complete, merely a dry run of a concrete design. Accurate accounting of the totals is still out-of-scope. We will create a stable implementation of this design and include it in Tribler.


Captain-Coder avatar Captain-Coder commented on May 12, 2024

I was having a good brainstorming session about the "parity for contributions" problem where the number of relays is not fixed. Here is what I came up with:

Assume a seeder S, who has a proxy in front of him and a rendezvous proxy (Ps and Ps'). The downloader (D) only has a rendezvous proxy (Pd') to hide behind. In the naive payout approach for this setup, the seeder asks for his true up and down. Each proxy then tacks on its own total, and thus the total download cost gets billed to the downloader. There are five downsides to this scheme:

  1. The downloader gets billed for the anonymity of the seeder. Since S controls the number of proxies to use on his side (Ps and Ps' in this case), D might be confronted with an unexpectedly large multichain bill. D might have assumed that S only uses Ps' and does not have Ps (or even more proxies) in between as well.
  2. The payment risk increases down the chain of proxies. If D decides not to pay the bill because it is abnormally large, Pd' has to absorb all the risk. This can seriously affect the balance of Pd'. Given that this problem exists, any proxy along the chain can decide not to sign for fear of having to absorb the risk. By not signing, a single other node in the network will not work with you anymore, but given the abundance of other nodes, this might be a worthwhile tradeoff against running a large risk. This fear along the proxy chain is the opposite of the trust we want.
  3. S is exposed as a seeder to Ps (but only after the fact), since S will ask for up&down very close to what Ps has relayed. This is contrary to the assumptions of the tunnel community, namely that a proxy cannot discriminate between a circuit extension and a circuit initiation. On top of that, any node along the chain can make a fair guess as to the length of the circuit leading up to it.
  4. Keep-alive. All the peers (S, Ps, Ps', Pd' and D) have to remain responsive to requests in order for the multichain request to propagate all the way to D to be billed. If a peer disconnects (Ps'), the whole chain is broken. Per design, Ps only knows S and Ps', so there is no way to route around the defunct Ps' to Pd'.
  5. Seeder fraud incentive. If S sends a multichain request for double (or triple) his true upload amount, Ps can only assume that the previous node is a proxy who tacked on its up&down. Thus it is undetectable whether the seeder is getting paid double or not.

Some initial thoughts on remedies might go as follows:

  • To solve problem 3, S might ask for a random odd multiple of its true upload to disguise that it is a seeder. This directly conflicts with problem 5, and again D might be confronted with an unexpectedly large bill.
  • Wait for the whole chain, to solve problem 2. For example, Ps waits on Ps' before signing the request of S. However, this exacerbates problem 4. And in case of non-payment by D, all the proxies along the chain might not sign and thus never cooperate again. The only solution for this is to have a proxy absorb the risk, leading us back to problem 2 again.

To me it is obvious that propagating the true cost of downloading along the chain is neither feasible nor elegant.
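A tiny toy model of the naive scheme (all numbers and names hypothetical) makes downside 1 concrete: every relay tacks its own relayed total onto the request, so the bill presented to D grows with the seeder's hidden choice of how many proxies to use.

```python
# Toy model of the naive payout scheme: the request travels
# S -> Ps -> Ps' -> Pd' -> D, and every proxy adds its own relayed bytes,
# so the final bill depends on how many proxies S decided to hide behind.

def naive_bill(seeder_upload_mb, relay_overheads_mb):
    bill = seeder_upload_mb
    for overhead in relay_overheads_mb:   # each hop tacks on its own total
        bill += overhead
    return bill

# D expected to pay for roughly 1000 MB, but three relays sit on the path.
print(naive_bill(1000, relay_overheads_mb=[1000, 1000, 1000]))  # 4000 MB billed to D
```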

An alternative solution outline that I have thought about is this:
(Note: I haven't fully thought out the implications to identity management for the tunnel community)

S and D can sign directly to each other through the circuit. Next, the relays can sign the relay traffic among themselves (hops Ps-Ps' and Ps'-Pd'). However, this shorts Ps and Pd', since they do not get S-Ps and Pd'-D respectively. These two legs are the inverse of each other, so perhaps a scheme can be devised that "closes the loop" such that Ps and Pd' sign each other's outstanding up and down. This also disconnects the proxies from the seeder and downloader in multichain. S and D can include themselves in the proxy loop, since they can claim to be just a proxy. If S and D do not include themselves in the proxy loop, they indirectly expose themselves to Ps and Pd' respectively as seeder and downloader, but at least there is no public record of this. A downside of this scheme is that the proxies are neutrally balanced, and since there is no public recognition for relaying in this system, there is not much incentive to relay traffic.

To fix this, multichain could be extended with a new counter named "relay" and a corresponding "relay_total" (or a secondary multichain instance). The point of this counter would be to give recognition to the relays. However, this does not combine perfectly with the proxy-loop signing scheme, since S and D then also get relay credits. So exposing S and D to the first and last proxy in the chain might be needed, such that those proxies can eliminate them from the proxy signing loop.

This last bit is a rather fundamental problem: whatever scheme is devised, S and D can always claim to be, or present themselves as, just a proxy; that is the point of anonymity. However, to prevent them from being credited (or paid, or counted) as a relay, their true role has to be exposed. The conflict between these two facts needs to be resolved. @synctext proposes to do this by using fixed-length proxy chains.


synctext avatar synctext commented on May 12, 2024

This week the alpha version of multichain got merged. Solid progress: this is 2 months after the idea of an alpha took root. We assume no redesign is needed. The features for the next Beta 1.0 release are:

  • Limited crawling. Peers start collecting multi-chain records from peers that helped them. Simple linear downloading. Warning: limited crawling rate to avoid attempting to crawl Exit Node Servers.
  • Keep-alives. Add resilience to churn and keep open connections in the multi-chain community to 10 random prior peers which helped us (e.g. upload).
  • Strict checks. Add the @Captain-Coder branch and new database storage structure with strict checking and audits before storage.

Beta 2.0:

  • Live Edges. Get introduced by peers to the people that helped them. Random walk across live edges of the interaction graph. Use the keep-alive feature to traverse the time-ordered interaction graph and collect multi-chain records. Warning: ensure rate control to keep the database from overflowing.


synctext avatar synctext commented on May 12, 2024

ToDo on the foundations of trust:


synctext avatar synctext commented on May 12, 2024

Reputation systems are linked to both game theory and smart contracts. Our upcoming 'smarter smart contracts' work requires enforcement mechanisms. This is brought together in self-enforcing protocols:


synctext avatar synctext commented on May 12, 2024

#67 from 2nd April 2013 seems to be the earliest public record of TrustChain / MultiChain / BarterCast3.


synctext avatar synctext commented on May 12, 2024

From the initial 15 March 2013 issue text above (emphasis added):

friendship as an edge property to strengthen the trust overlay against sybils

Our goal: a universal mechanism to create trust.

With our Trustchain IETF standard #2087 we moved beyond our 2007 background of primitive ledgers and accounting of contributions. Bitcoin and the wave of ICOs created global attention for our type of research. Our ambition level is now to invent a generic mechanism to create trust. Companies like eBay, Uber, and Airbnb use 1-to-5-star ratings of each transaction to calculate reputations and create trust. Our goal is to create a public utility out of these proven star-rating mechanisms.

related work
"In recent years, research programs on trust have been extremely numerous and diverse in terms of both theoretical and methodological approaches, and empirical applications. Moreover, research on trust has spread widely across disciplines."Barrera, The Social Mechanisms of Trust

Defining trust. "a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action [an action that is beneficial or at least not detrimental], both before he can monitor such action, [...] and in a context in which it affects his own action" and "trust in the intentions of others not to cheat us and in their knowledge and skill to perform adequately over and above their intentions" from 1988 Gambetta book. TRUST - making and breaking cooperative relationships

Cardinal work: Evolution of indirect reciprocity
"Natural selection is conventionally assumed to favour the strong and selfish who maximize their own resources at the expense of others. But many biological systems, and especially human societies, are organized around altruistic, cooperative interactions. How can natural selection promote unselfish behaviour? Various mechanisms have been proposed, and a rich analysis of indirect reciprocity has recently emerged: I help you and somebody else helps me. The evolution of cooperation by indirect reciprocity leads to reputation building, morality judgement and complex social interactions with ever-increasing cognitive demands."

"The theory of indirect reciprocation explains the evolution of cooperation among unrelated individuals, engaging in one-shot interaction. Using reputation, a player acquires information on who are worth cooperating and who are not. In a previous paper, we formalized the reputation dynamics, a rule to assign a binary reputation (good or bad) to each player when his action, his current reputation, and the opponent's reputation are given. " The leading eight: Social norms that can maintain cooperation by indirect reciprocity

"These questions about the evolution of cooperation do not bear directly on the issue of trust, though they may give pause to anyone who supposes that trust is required for effective cooperation (defined simply in terms of working with and assisting others). To maintain such a position it would be necessary to argue that, for example, honey bees trust each other. Even though the study of cooperation in animals seems irrelevant to an understanding of trust in humans, careful analysis of the conditions in which cooperative behaviour is expressed suggests that many animals are exquisitely sensitive to the behaviour of others. This observation suggests an explanation for the evolution of the mental state that we recognize as trust in ourselves. Discussions about evolution also draw attention to the explicit distinctions which biologists are forced to make between what animals do and the immediate and long-term consequences of their behaviour. " The Biological Evolution of Cooperation and Trust

Transaction Cost Economics, Nobel Prize. 2009

Trust involves accepting the risk of opportunistic behavior by your transaction partner: the belief that the contracting partner will not defect for short-term gains, even without exact knowledge of the future long-term gain.
Integrating Variable Risk Preferences, Trust, and Transaction Cost Economics

The Logic and Limits of Trust

Experimental results from 1995. People trust each other much more than rational thinking suggests. 'predisposition towards trust' "We designed an experiment to study trust and reciprocity in an investment setting. This design controls for alternative explanations of behavior including repeat game reputation effects, contractual precommitments, and punishment threats. Observed decisions suggest that reciprocity exists as a basic element of human behavior and that this is accounted for in the trust extended to an anonymous counterpart. A second treatment, social history, identifies conditions which strengthen the relationship between trust and reciprocity." Trust, Reciprocity, and Social History

'How can people promote the trust necessary for efficient exchange when individuals have short run temptations to cheat?' "A good reputation can be an effective bond for honest behavior in a community of traders if members of the community know how others have behaved in the past - even if any particular pair of traders meets only infrequently. In a large community, it would be impossibly costly for traders to be perfectly informed about each other's behavior, but there exist institutions that can restore the effectiveness of a reputation system using much less extensive information. The system of judges used to enforce commercial law before the rise of the state was such an institution, and it successfully encouraged merchants (1) to behave honestly, (2) to impose sanctions on violators, (3) to become adequately informed about how others had behaved, (4) to provide evidence against violators of the code, and (5) to pay any judgments assessed against them, even though each of these behaviors might be personally costly. " THE ROLE OF INSTITUTIONS IN THE REVIVAL OF TRADE: THE LAW MERCHANT, PRIVATE JUDGES, AND THE CHAMPAGNE FAIRS

The rich-get-richer effect is deeply embedded in a reputation-based economic system. The highly trusted nodes are the least risky to transact with. Agents with weak trust never have an opportunity to prove themselves. This again has a broad impact on an economy and on society at large. See, for instance, the very simplistic social meritocracy simulations in "Talent vs Luck: the role of randomness in success and failure"

Freeriding is connected to the Nobel Prize in economics. "Ostrom’s work suggested, contrary to conventional thinking, that communities are able to self-organize in ways that punish those who free ride on the common resource. Examining dozens of real-world arrangements, she found instances of communal ownership that worked—that is, that did not lead to the tragic outcomes envisioned by Hardin—as well as ones that did not. Were there systematic differences? Yes, and interestingly, the ones that worked did have a kind of property rights system, just not private ownership. Based on her work, Ostrom proposed several guidelines, which the Nobel committee highlighted, for managing common-pool resources. " Nobel Prize in economics Ostrom

Computer Science view with a trust model from 2003. Nice definition and view of trust and risk. "Trust is also situation-specific such that trust in one environment does not directly transfer to another environment and as a result a notion of context is necessary. Despite this situational nature, there is some agreement on a dispositional aspect of trust as a measure of one’s propensity to believe in the trustworthiness of others. The literature also highlights the dynamic properties of trust in that trust is self-preserving and self-amplifying, increasing through periodic successful interactions and degrading through disuse or misuse. Trust is inherently linked to risk; there is no reason to trust if there is no risk involved. This relationship implies that higher risk means cooperation is less likely to occur, unless the benefits of interaction are worth the risk. Reasoning about trust therefore allows entities to accept risk when they are interacting with others."
"The aim of the trust model [1] is to provide formal techniques for studying properties of trust based systems. Our formal model focuses on the set T , the set of trust values, whose elements represent degrees of trust."
"The trust needed for an interaction depends on the risk involved. This gives appropriate security
in pervasive environments, without calling for excessive trust in straightforward cases." Using Trust for Secure Collaboration in Uncertain Environments

Early 2001 model work from MIT, Lui. One of the first conceptual models for trust. "To model the process of opinion sharing among agents, the concept of an encounter is required. An encounter is an event between two different agents (ai, aj) such that the query agent ai asks the responding agent aj for aj’s rating of an object" Ratings in Distributed Systems: A Bayesian Approach
Other 2001 work: social reputations, the default reputation of an agent is the reputation of the social group it belongs to. REGRET: Reputation in gregarious societies

Real world systems. "eBay’s feedback system documents successful transactions, but often fails to inform users of unsuccessful ones. A data set containing information from 3776 auctions I collected from eBay’s platform, gives reason to assume that the feedback system exhibits a positive bias due to the nature of mutual feedback that is made public instantly. Sellers and buyers are able to hold feedback hostages by refusing to leave feedback until the opposite party has provided a report." Effects of Reputation Mechanisms on Fraud Prevention in eBay Auctions

"EBay claims that users face only a 1 in 10,000 risk of fraud. Evidence from the FBI shows the rate is
more like 1 in 100, or one hundred times greater than the company alleges. Despite the usefulness of
eBay’s feedback mechanism buyers still face substantial risk of being defrauded. This paper details
the level and dimension of online fraud, examines the history of fraud on eBay, followed by a
discussion of feedback and its attendant limitations." ONLINE AUCTION FRAUD AND EBAY

Scientists have been building trust systems for decades, yet never succeeded in creating a breakthrough. A research system from 2003 could calculate trust of emails. "The so-called "Web of Trust" is one of the ultimate goals of the Semantic Web. Research on the topic of trust in this domain has focused largely on digital signatures, certificates, and authentication. At the same time, there is a wealth of research into trust and social networks in the physical world. In this paper, we describe an approach for integrating the two to build a web of trust in a more social respect. This paper describes the applicability of social network analysis to the semantic web, particularly discussing the multi-dimensional networks that evolve from ontological trust specifications. As a demonstration of algorithms used to infer trust relationships, we present several tools that allow users to take advantage of trust metrics that use the network."
"TrustBot is an IRC bot that has implemented algorithms to make trust recommendations to users based on the trust network it builds. It allows users to transparently interact with the graph by making a simple series of queries." Trust Networks on the Semantic Web

Nobel prize of 2001. Trust and reputation information suffers from information asymmetry: information is often incomplete and one-sided. "Trust is a catalyst in many buyer-seller transactions, and it can provide buyers with high expectations of satisfying exchange relationships (Hawes et al. 1989). Koller (1988) argues that trust is a function of the degree of risk inherent in a situation. Trust is especially critical when two situational factors are present in a transaction: uncertainty (risk) and incomplete product information (information asymmetry) (Swan and Nolan 1985). Many researchers have argued that an understanding of trust is essential for understanding interpersonal behavior in economic exchanges (e.g., Doney and Cannon 1997, Eisenstadt 1986, Hirsch 1978, Shapiro 1"
"Most buyer-seller relationships are characterized by information asymmetry since the seller usually
possesses more information than the buyer does about the quality of the product or the service
(Mishra et al. 1998). The fact that buyers do not have complete information about sellers' actions
creates the well-known problem of information asymmetry (Akerlof 1970), which may give rise to opportunistic behavior." Evidence of the Effect of Trust Building Technology in Electronic Markets: Price Premiums and Buyer Behavior

role of gossip to spread reputation-related information

very good and extensive overview of prior related work (only in 2009 version). "Decentralized Reputation Systems have recently emerged as a prominent method of establishing trust among self-interested agents in online environments". "Several papers provide detailed background concerning reputation systems [20,23,26]. Several such systems are used in practice, so it is possible to observe their actual real-world impact. A prominent reputation system (and one of the earliest) is that used by eBay; the efficiency of eBay-like reputation systems has been studied in [15]. This is an extremely simplistic reputation system, which has several drawbacks [30]. For example, a user may misbehave in only a fraction of his transactions, and still have an increasing total score. Nevertheless, it has been shown [30] that the system works surprisingly well—reputation profiles are predictive of future behavior.
Other research has analyzed manipulations of general reputation mechanisms. Friedman and Resnick [18] have discussed the effects of cheap pseudonyms. When agents can enter the system using pseudonyms, and the cost of recreating an identity is cheap, agents that have a stained reputation may easily shed it. The authors have considered several solutions to this problem: disallowing anonymity, entry fees (which make pseudonyms more expensive), and using a central authority for irreplaceable (“once-in-a-lifetime”) pseudonyms. However, each approach has major drawbacks. Sybil attacks are discussed in [10], where an agent creates many false identities in order to boost the reputation of it primary identity" Gossip-based aggregation of trust in decentralized reputation systems

"Communication about social topics is abundant in human societies, and many functions have been attributed to such gossiping. One of these proposed functions is the management of reputations. Reputation by itself has been shown to have a strong influence on cooperation dynamics in games of indirect reciprocity, and this notion helps to explain the observed high level of cooperation in humans. Here we designed a game to test a widespread assumption that gossip functions as a vector for the transmission of social information. This empirical study (with 14 groups of nine students each) focuses on the composition of gossip, information transfer by gossip, and the behavior based on gossip information. We show that gossip has a strong influence on the resulting behavior even when participants have access to the original information (i.e., direct observation) as well as gossip about the same information. Thus, it is evident that gossip has a strong manipulative potential. Furthermore, gossip about cooperative individuals is more positive than gossip about uncooperative individuals, gossip comments transmit social information successfully, and cooperation levels are higher when people encounter positive compared with negative gossip." Gossip as an alternative for direct observation in games of indirect reciprocity

Role of language, gossip and cooperation

DUNBAR
Gossip is key to social networks and building trust. "Conversation is a uniquely human phenomenon. Analyses of freely forming conversations indicate that approximately two thirds of conversation time is devoted to social topics, most of which can be given the generic label gossip. This article first explores the origins of gossip as a mechanism for bonding social groups, tracing these origins back to social grooming among primates. It then asks why social gossip in this sense should form so important a component of human interaction and presents evidence to suggest that, aside from servicing social networks, a key function may be related explicitly to controlling free riders. Finally, the author reviews briefly the role of social cognition in facilitating conversations of this kind." Gossip in Evolutionary Perspective

The virtues of gossip: reputational information sharing as prosocial behavior

exclusion is effective

Cooperation under the threat of expulsion in a public goods experiment. Even the mere ability to exclude already encourages honesty and prevents freeriding.
"In a public goods experiment with the opportunity to vote to expel members of a group, we found that contributions rose to nearly 100% of endowments with significantly higher efficiency compared with a no-expulsion baseline. Expulsions were strictly of the lowest contributors, and there was an exceptionally strong fall-off in contributions in the last period, when the expulsion threat was unavailable. Our findings support the intuition that the threat of expulsion or ostracism is one device that helps groups to provide public goods."


synctext avatar synctext commented on May 12, 2024

We will pioneer a new approach for creating trust among a set of mistrusting parties. Our universal mechanism builds upon the trust research from the Tribler lab of the past 20 years.
Success would enable the Internet to evolve from a network for bits to a network of trust.

Our audacious ambition is to establish a new scientific field, which we call Trust Design Theory.
Trust Design Theory provides a coherent framework for designing trustworthy online communities of arbitrary size, composed of mutually mistrusting people, which have a process for democratic decision-making and the ability to punish those who freeride on the community or are dishonest, in order to: manage a marketplace, govern a resource in a sustainable manner, schedule activities, or achieve a common purpose in general.


vandenheuvel avatar vandenheuvel commented on May 12, 2024

Problem description from a game theoretic perspective for mathematicians: pdf and link to the repository.


synctext avatar synctext commented on May 12, 2024

Trust data is always under attack

The scientific AI or agent community has always been focused on formal trust models and calculating trust. It turns out that it is a harder problem to spread and protect information on who is trustworthy and who is not. The academic milestones consist of the few scientists who helped solve that more difficult problem without requiring a trusted central party:

Agorics papers - 2001 very early discussion of incentives, trust and markets


synctext avatar synctext commented on May 12, 2024

Our strong reputation and trust framework requires documented attacker assumptions. Especially:
"Strong Existential Unforgeability under Chosen Message Attack"

After years of work on Buddycast, BarterCast3, Multichain and now Trustchain, our work needs to mature and be meticulously documented. Deployment is not enough. Do we protect against this, given our flexible Trustchain protocol, automated signed responses, and large freedom in the semantics of the message content?

Negative-result theorem to investigate: can an attacker, by regulating their bandwidth, fully choose the composition of signed Trustchain messages, to the extent that this becomes a security vulnerability?


synctext avatar synctext commented on May 12, 2024

We also need to document our "design principles" (ToDo make axiomatic):

  • Reputations establish the prevalence of cooperation based on reciprocity
  • Reputation is rewarded with reciprocity
  • Reputation is awarded for contribution of value
  • Reputation enforces net-positive contributions
  • Reputation starts from zero
  • Reputation is not transferable
  • Reputation is an estimate for non-Sybil probability
  • Reputations have limited forgiveness against misunderstandings and errors
  • Reputations do not represent the amount of value contributed (that is contribution accounting)
  • Reputations do not need to be accurate to prevent freeriding
  • Reputations do not need to remember everything
  • Reputations do not need to elect a global leader within the population (leaderless)
  • Reputations do not need to reach global consensus to cooperate (consensus-free)

(can also be seen as an economic IOU (I Owe You) system, instead of reputations)


synctext avatar synctext commented on May 12, 2024

After the summer of 2020 my plan is to form a small dedicated team around this issue. Have weekly team meetings, etc. @alexander-stannat has joined us for a few months and can help with the theoretical foundations. Draft milestones:

  • detailed design documented in .md style or technical report
  • enforcement of net-positive contributions and deployed in Tribler.
  • reputation functions are source code plugins which can be altered by the user (e.g. FBase inspiration)


qstokkink avatar qstokkink commented on May 12, 2024

Here is my consolidation of the list of #31 (comment) and how I believe each rule or property can be shown to be respectively required and derived. I give these proofs/rationalizations here very informally (the mathematical equivalents will be used in the paper). None of these proofs should be particularly shocking and most of them are also probably already available in related works.

Definition

Reputation is the scoring of historical records belonging to an identity (it is not the accounting mechanism itself).

Rules

1. Reputation must start from zero.
Proof ~ otherwise you have an easy Sybil attack

2. Reputation must be non-transferable.
Proof ~ otherwise the fundamental definition is broken (scoring of records belonging to YOUR identity) and you get full-on anarchy

2. Reputation must transfer over to others
Proof ~ see #31 (comment)

3. Reputation only takes records supported by honest users into account, i.e. the contribution of Sybil identities is negligible.
Proof ~ otherwise it becomes a measure of who can best attack the network

4. Reputation can only be calculated for a set of records with commutative value.
Proof ~ otherwise you end up with a multidimensional reputation - we can show that this can NEVER be accurate to ANY granularity

5. Reputations have limited forgiveness against misunderstandings and errors.
Proof ~ forgiveness means reputation from nothing - i.e. Sybil attack potential again

6. A strictly positive increase in reputation must result in a (non-strict) increase in reciprocity/cooperation.
Proof ~ Alexander’s thesis

7. Honest behavior must result in a (not strictly) positive change in reputation and no difference in the sum of global reputations.
Proof ~ zero-sum game, also Alexander’s thesis

8. Malicious behavior must result in a decrease of reputation and a decrease in the global sum of reputations.
Proof ~ sort of obvious, otherwise you get away with "bad" behavior

Properties

1. Reputations do not need to be accurate.
Proof ~ they only need to be accurate to the granularity required by the neighborhood ranking; the reputation score itself is only used to select a potential partner for future interaction

2. Reputations can be calculated without all information (i.e. reputation mechanisms can be subjective: leaderless and consensus free).
Proof ~ cite related work: random walks converge to the global value, e.g. personalized PageRank


For each of the rules I plan to experimentally show what happens to a reputation mechanism that violates it.
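A small sketch of property 2 (the graph format and names below are hypothetical): a subjective, consensus-free reputation can be estimated with random walks that keep restarting at the evaluating peer, in the spirit of personalized PageRank.

```python
import random
from collections import Counter

# Sketch of a subjective (leaderless, consensus-free) reputation estimate:
# random walks over the locally known work graph, always restarting at the
# evaluating peer, approximate personalized PageRank without any global view.

def subjective_reputation(work_graph, me, walk_steps=10_000, restart=0.15, rng=random):
    """work_graph: peer -> list of peers it has observed doing work for it."""
    visits = Counter()
    current = me
    for _ in range(walk_steps):
        if rng.random() < restart or not work_graph.get(current):
            current = me                    # teleport back to ourselves
        else:
            current = rng.choice(work_graph[current])
        visits[current] += 1
    total = sum(visits.values())
    return {peer: count / total for peer, count in visits.items()}

graph = {"me": ["alice", "bob"], "alice": ["bob"], "bob": ["alice"]}
scores = subjective_reputation(graph, "me")
print(sorted(scores, key=scores.get, reverse=True))  # ranking as seen from "me"
```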


qstokkink avatar qstokkink commented on May 12, 2024
  1. Reputation must be non-transferable.

I'm going to backpedal on this rule after constructing the math. The complete opposite seems to be true:

2. Any change in accounting with a non-zero contribution must result in a change in reputation for all participants.

Essentially: reputation MUST transfer.


Long story:

There are three mechanisms at play:

  • [Accounting] The accounting mechanism.
  • [Work] The work contribution mechanism.
  • [Reputation] The reputation mechanism.

The accounting mechanism "only" serves to capture events; it is a distributed log. These events are tied to identities and are indeed non-transferable. Neither the value of the events nor the semantics of the events need to be known; this log simply needs to be consistent.

The contribution (a.k.a. work) mechanism parses the events and gives them a score. Uploading 10 and downloading 10 may for instance lead to a score of 0 work. This mechanism must assign value/work/score based on semantics (e.g. 10 download is equivalent to -10 work and 10 upload is equivalent to 10 work). Side note: this consolidation of work is where Artificial Intelligence would be useful if the events in the log are either (1) not bound to fixed semantics or (2) fluctuate in value (e.g. a market with different coins).

Lastly, your reputation is the normalization of all your work in the system, given the work of others (not necessarily based on the entire network's accounting, as shown by e.g. personalized PageRank). In effect, reputation only exists by virtue of your neighbors. Performing "10 work" in isolation means nothing, whereas performing "10 work" while all of your neighbors perform "2 work" does. Essentially, any non-zero work by any participant MUST impact other reputations, otherwise it is an accounting mechanism and not a reputation mechanism. The proof of this was actually given in @alexander-stannat's thesis.
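A compact sketch of these three layers (all names and rates below are hypothetical, not the deployed interfaces): an append-only accounting log, a work function that scores events, and a reputation that only exists relative to the neighborhood's work.

```python
# Sketch of the three mechanisms: the accounting layer records raw events,
# the work layer scores them with fixed semantics, and reputation normalizes
# one peer's work against the work of its neighborhood.

accounting_log = []                                # (peer, event, amount): append-only
WORK_RATES = {"upload_mb": 1, "download_mb": -1}   # assumed, fixed semantics

def account(peer, event, amount):
    accounting_log.append((peer, event, amount))

def work(peer):
    return sum(WORK_RATES[event] * amount
               for who, event, amount in accounting_log if who == peer)

def reputation(peer, neighborhood):
    # "10 work" only means something relative to what the neighborhood did.
    total = sum(max(work(other), 0) for other in neighborhood) or 1
    return max(work(peer), 0) / total

account("alice", "upload_mb", 10)
account("bob", "upload_mb", 2)
print(reputation("alice", ["alice", "bob"]))   # 10 / 12 ≈ 0.83
```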


qstokkink avatar qstokkink commented on May 12, 2024

Here is the current state of the mathematics (expertly MS Painted together). I'm currently not quite happy yet with the interconnection between the theorems: they do not work to some grand conclusion, they are just there.


devos50 avatar devos50 commented on May 12, 2024

Essentially: reputation MUST transfer.

Interesting observation. Intuitively, this is then a consequence of the transitivity property of trust where a change in reputation affects all nodes you can reach in the work graph.

So, reputation must be transferable. What are your thoughts on making reputation tradable? I already had short discussions about this with others, but given your current paper scope, you're probably the right person to elaborate on this. I'm thinking about a mechanism where one can 'buy' edges in the work graph from higher-trusted peers (there are most likely diminishing returns in getting more reputation, so it could make sense to sell a surplus of reputation). For example, if we consider bandwidth transfers, the seller would sign for downloaded bytes from the buyer. The seller is then rewarded with some cryptocurrency (using incremental payments). Now, such a system would offer an alternative way for new identities to bootstrap into the system, by buying edges with trusted peers.

The price of an interaction differs and should be set by the buyer. Conceptually, it is like a seller "vouching" for the trustworthiness of a buyer, since malicious behaviour of the buyer will probably also affect the reputation of the seller.

Does this make sense? (I'm probably mixing some terminology here and there)

There are three mechanisms at play:

Are you also considering the allocation policy? Or is the scope limited to determining reputation of others?


synctext avatar synctext commented on May 12, 2024

I'm currently not quite happy yet with the interconnection between the theorems: they do not work to some grand conclusion

So, the hard work of proving some strong property that results from this neat model now begins. For instance: if the number of fraudsters/Sybils is limited to some bound, it is guaranteed that cooperation will emerge. Or: even with only a tiny honest minority, we prove that the ecosystem eventually converges, honest peers find each other, collaborate, and can sustainably withstand continuous Sybil or fraud attacks of any kind.


qstokkink avatar qstokkink commented on May 12, 2024

Small update: I'm now referring to the accounting mechanism as the accountability mechanism instead. That should lead to less confusion.


devos50 avatar devos50 commented on May 12, 2024

An "accounting mechanism" differs from an "accountability mechanism". Accountability, in my opinion, is the extent to which someone can be held responsible/accountable for their actions and behaviour after-the-fact. Accounting, however, is the recording of actions in a community as they are performed. Given these (informal) definitions, the term "accounting mechanism" seems to align better with reputation/trust.


qstokkink avatar qstokkink commented on May 12, 2024

I mostly agree, but I believe accounting has the subtle undertone of being a log of things holding value that are countable and transferable, which is not necessarily true for all events in the system. This is why I'm trying to find another term.

The proper terminology would probably be an eventually consistent log technology, but this is not really marketing-compatible 😃


synctext avatar synctext commented on May 12, 2024

next step could be a list of deployed reputation systems:


synctext avatar synctext commented on May 12, 2024

Key related work: A Taxonomy to Express Open Challenges in Trust and Reputation Systems
Lists all the challenges with in-depth analysis: trust value representation, information source, Sybil attacks, whitewashing, slander, etc. Insightful comments on future work from 2012, still unsolved in 2020:

Proposals from the academic community are not always deployable and are usually designed from scratch. Only in a very few cases do authors build on proposals from others. Hence, there is a need for a set of sound, accepted principles for building trust and reputation systems. The design space and limitations of mediated trust and reputation mechanisms should be explored and a set of design parameters that work best in different settings should be understood. Formal models of those systems in both monopolistic and competitive settings should be developed.



synctext avatar synctext commented on May 12, 2024

New axiomatic principles:

  • past actions are known, recorded tamper-proof, shared, and widely available.
  • the present is independent, unwritten, and uncertain

inspiration: "Pragmatically: the past is shared; the present is independent", from Jepsen intro to distributed systems
Agent-based model of 'open collaboration' with 3 user types: cooperators, reciprocators, and freeriders. Message: "it" works even when cooperators are a small minority.


synctext avatar synctext commented on May 12, 2024

Related work (non-axiomatic):


synctext avatar synctext commented on May 12, 2024

True! Application-specific gossip forwarding functions: generic approaches never get used or are inefficient.
Maybe the same holds for reputations: an application-specific reputation function, because generic ones just don't work.


synctext avatar synctext commented on May 12, 2024

The 3 Es of Sybil resistance are: 1) Entry cost, 2) Existence cost, 3) Exit cost.
Consensus-based approach. May 2015 related work SF Bitcoin Devs: Asynchronous Byzantine Consensus Suitable for Decentralized Networks. Founder of proof-of-useful-work VC-funded startup, filled with former IBM people {dfinity}.

