
When people you trust tell you who they trust, you might be able to rely on that information more than the recommendations of strangers (and vice versa). Trust aggregation is the attempt to connect other people's evaluations to your own evaluations in this way.

It's something I would want to include in my dream website.

How it works

It works fairly simply. Something like this:

  1. Everyone can evaluate anyone they know, saying how much they trust each of them (which can be translated into a percent).
  2. Using all of these evaluations between people (which will form a network), computations can be done to estimate how much someone might be able to trust a person they had never met (or heard of) before.
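For concreteness, step #1 could be stored as a simple map of direct evaluations. Here's a minimal sketch in Python; the names and numbers are made up purely for illustration:

    # Each person keeps direct trust scores (percents expressed as fractions
    # of 1) only for the people they have actually evaluated.
    evaluations = {
        "me":    {"alice": 0.9, "bob": 0.4},
        "alice": {"carol": 0.8},
        "bob":   {"carol": 0.2},
    }
    # Step #2 then walks this network to estimate a score for "me" -> "carol",
    # even though "me" never evaluated "carol" directly.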

So how does that step #2 work? What are the computations? What are the limitations?

The computation requires an unbroken chain of evaluations between the user and the person they want an estimate for. The basic computation multiplies the percent trust scores together along the chain to get a result. Of course, there will often be many reports about a single person that need to be combined, and not with a simple average, since some reports will themselves be trusted less than others. Multiple paths through the network also have to be handled. And since these numbers should be viewed as estimates, and they deal with probability, the computed results should realistically come with an upper and lower estimate.
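Here is a minimal sketch of that basic computation, assuming trust scores are fractions between 0 and 1. The function names, and the use of min/max as the lower and upper estimate, are just illustrative choices, not a definitive design:

    def chain_trust(scores):
        """Multiply the trust scores along one unbroken chain of evaluations."""
        result = 1.0
        for score in scores:
            result *= score
        return result

    def combine_reports(reports):
        """Combine several (trust_in_reporter, reported_score) pairs.

        Not a simple average: each report is weighted by how much that
        reporter is trusted.  Returns (low, estimate, high), treating the
        answer as a range rather than a single exact number.
        """
        total_weight = sum(trust for trust, _ in reports)
        if total_weight == 0:
            return (0.0, 0.0, 1.0)  # no trusted information at all
        estimate = sum(trust * score for trust, score in reports) / total_weight
        reported = [score for _, score in reports]
        return (min(reported), estimate, max(reported))

    # I trust Alice 0.9, Alice trusts Bob 0.8, Bob trusts Carol 0.7:
    print(chain_trust([0.9, 0.8, 0.7]))               # approximately 0.504
    # Two reports about the same stranger, weighted by trust in each reporter:
    print(combine_reports([(0.9, 0.8), (0.3, 0.2)]))  # approximately (0.2, 0.65, 0.8)

A real implementation would probably combine paths and compute the bounds in a more principled way, but the weighting idea is the important part.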

I have created a simple program to do this, and it works as I expected. A more advanced version would record more types of evaluations: not just "trust", but honesty, competence, safety, and whatever else. An important metric for the "aggregation" part of "trust aggregation" would be how much you trust someone to evaluate others correctly. If they are good at knowing who to trust, then they have done some of that work for you. It's a division of labor: many hands make light work.
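As a rough illustration of that division of labor, the weight given to someone's reports could come from a separate score for how well they evaluate others, rather than from their general trust score. The field and function names here are assumptions for illustration, not taken from any existing implementation:

    from dataclasses import dataclass

    @dataclass
    class Evaluation:
        honesty: float = 0.5      # do they tell the truth?
        competence: float = 0.5   # are they good at what they do?
        safety: float = 0.5       # are they safe to interact with?
        evaluator: float = 0.5    # how much do I trust their evaluations of others?

    def weight_report(my_view_of_reporter, reporters_view_of_target):
        """Scale a reporter's evaluation of a target by how much I trust the
        reporter *as an evaluator*, not just in general."""
        w = my_view_of_reporter.evaluator
        return Evaluation(
            honesty=w * reporters_view_of_target.honesty,
            competence=w * reporters_view_of_target.competence,
            safety=w * reporters_view_of_target.safety,
            evaluator=w * reporters_view_of_target.evaluator,
        )

    my_view_of_alice = Evaluation(honesty=0.9, competence=0.8, safety=0.9, evaluator=0.7)
    alices_view_of_bob = Evaluation(honesty=0.6, competence=0.9, safety=0.8, evaluator=0.5)
    print(weight_report(my_view_of_alice, alices_view_of_bob))  # each field scaled by 0.7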

What it's useful for

With automatically computed evaluations, you could set up automatic filters, and the systemic pressure this puts on everyone could improve individual behavior too. For example:

  • Cutting down on viral falsehoods, fake news, etc. (perhaps even allowing viral truths to have a better chance?)
  • Cutting down on unwanted interactions like harassment (perhaps even cherry-picking for the best interactions?)
  • Preventing "lemon markets"
  • Networking

What happens if we use it?

Trust Aggregators

So long as this whole idea is useful, it seems likely that specialized large-scale aggregators will emerge. That is, there will be people (or groups) who try to be your #1 source for estimates. They will try to earn as much of your trust as possible, so that you trust their evaluations as much as possible. That will put pressure on everyone who wants good reviews to make sure they stay in good standing.

Different aggregators will disagree with each other.  Think of any contentious political issue.  Is someone bad if they are in favor of abortion?  Is someone bad if they voted for Trump?  Some people think so, some people think the opposite is true.  What would happen?  Would trust aggregation make these conflicts worse, or better?

Addressing specific common concerns

  • "isn't this like China Social Credit or Sesame Credit or whatever?" [pretty much the opposite, this is more like what we all already do when we decide who to trust, just faster and more comprehensive]
    Social Credit etc.                                  | My Trust Aggregation Proposal
    basically "one judge" (government)                  | everyone can make evaluations
    the judge is immune to others' judgement            | everyone can be evaluated by everyone else
    this judge will impose vast official consequences   | no official consequences built in
  • "what if people lie?" [they already lie, this system gives them consequences for it][the proverbial invention of fire, dangerous and useful][every advancement of communication and info processing. people can lie with spoken words, people can lie with statistics. doesn't mean we're better off not using spoken words or statistics]
  • "what if people are paid to lie?"
  • "isn't it easy for someone to coordinate a massive attack on the system?" (with bots or something) [no, the opposite, this is exactly the how you design a system to make that far more difficult than it currently is on other social media platforms. see citations (like this page about blockchains and wikipedia has a section on it) on the properties such a system needs to resist such attacks, and compare to whatever websites you currently use]
  • combinatorial explosion. Adding one more user (along with their set of evaluations) can multiply the total number of computations the system needs to perform. In practice there are ways to cut this down without much loss of usefulness or accuracy, but would it be enough? For example, the network never needs to be explored past the point where the incoming reports are no longer trusted. And once a perspective of the network has been calculated for a given reporter, I'm pretty sure those results can be reused whenever that reporter is "asked" by the network-crawling calculation of any other user, at least as far as that one reporter's contribution is concerned: their perspective of the network collapses into conclusions, their "reports", which can be packed up and placed directly into the other user's final calculation, modified only by the trust placed in that reporter (see the sketch after this list). But since the system may still need to compute most of the network from some perspective at least once, it might still be too much of a combinatorial explosion. To an extent this makes sense, because it's the kind of system that becomes vastly more valuable the more users are added to it, though it isn't hard to imagine diminishing returns. These are practical and mathematical questions to work out (someone may already have done so): finding how much can be done.
  • "group-think/bubbles/echo-chambers, cults?" Will you just have an echo chamber?

I don't think so. The trust scores you'd see wouldn't necessarily come out as you'd expect. Sometimes a smart person you trust says something you wouldn't have believed if it had come from a stranger. A trusted friend might even disagree with you. They may be right or wrong, but they are definitely different from you, to some degree.

But those examples are from our present world. Would echo chambers emerge if trust aggregation were widely used for long enough? Or would the whole world become more (and more?) unified? Or would there be no change, positive or negative?

I suspect it would tend towards an equilibrium of higher and higher trustworthiness. Only weird cults would cut themselves off, as they do. The tech would also make fact checking easier, so bubbles of lies would be easier to burst.



  • "what about that episode of black mirror?" I haven't seen the episode or looked into it much, but from what I've seen I think it proposes a "quantification utility monster" type issue [see below].
  • "cancel culture?" already exists. but being able to very easily block out online harassment (and set up privacy) would make it suck less perhaps.
  • "awfully essentialist to evaluate a whole person instead of their actions" [it might be possible to frame it about actions instead?]
  • needs to be decent ways to "redeem yourself" and all that
  • "sounds awkward, anxiety inducing, possibly narcissism-inducing"
  • "technology can't solve social problems" [perhaps not every aspect, but helpful in some aspects. vague and would need a specific concern to look at. see other listed concerns]
  • similar to other democratic systems, there might be concerns about exploiting the ignorance and lack of expertise of the majority. "lowest common denominator", "race to the bottom". [i'm not sure "lowest common denominator" applies to a trust aggregation system, i just saw it applied to a simple one-way rating system, like regular reddit upvotes or amazon/imdb "star" reviews][always strange to see people seemingly express a rejection of democracy. my impression is that they want a short-cut to solve certain problems (like giving everyone no choice but to trust the people that you trust, without doing the hard work of convincing everyone that those people are more trustworthy), which is ironic coming from people who decry technology as incapable of solving complex social problems.]["freedom of choice VS securing the right outcomes" is always a contentious topic at the societal scale where "the right outcomes" are not agreed upon, and even scientifically are not easy to prove]
  • "these kinds of quantified measures create an extrinsic motivating system which will act something like a utility monster" [see stories about the pressures of creating youtube videos. some people would also point to regular jobs, using quantified measures of time, money, and other types of "capital"]
  • still have to trust the company or whoever is storing the data and computing the results to be doing it correctly, as advertised
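As mentioned in the combinatorial-explosion item above, here is a rough sketch of the two cost-cutting ideas: stop exploring once the trust along a chain is too weak to matter, and cache each reporter's collapsed perspective so it can be reused in other users' calculations. Everything here (the names, the cutoff value, keeping only the strongest path) is an assumption for illustration, not a worked-out implementation:

    TRUST_CUTOFF = 0.05      # below this, further reports aren't worth fetching
    MAX_DEPTH = 4            # don't follow chains forever

    perspective_cache = {}   # reporter -> {target: estimated score}

    def perspective_of(reporter, evaluations, depth=0):
        """Estimate the given reporter's view of everyone they can reach.

        evaluations maps each person to {person they evaluated: trust score}.
        For simplicity only the strongest chain to each target is kept,
        instead of combining every path as described earlier.
        """
        if reporter in perspective_cache:
            # Reuse the collapsed "reports".  (A view cached near the depth
            # limit will be shallower; that's an accepted approximation here.)
            return perspective_cache[reporter]
        view = dict(evaluations.get(reporter, {}))
        if depth < MAX_DEPTH:
            for middle, trust in list(view.items()):
                if trust < TRUST_CUTOFF:
                    continue                 # pruning: chain already too weak
                for target, score in perspective_of(middle, evaluations, depth + 1).items():
                    candidate = trust * score
                    if candidate > view.get(target, 0.0):
                        view[target] = candidate
        perspective_cache[reporter] = view
        return view

    evaluations = {
        "me":    {"alice": 0.9},
        "alice": {"bob": 0.8},
        "bob":   {"carol": 0.7},
    }
    print(perspective_of("me", evaluations))
    # {'alice': 0.9, 'bob': 0.72, 'carol': 0.504} (approximately)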

See also
