Contributor / community trust - Keyoxide?

Not sure if this has come up already, but given that things are beginning to move when it comes to PRs, infrastructure experiments, etc., would it be worth considering voluntary use of something like Keyoxide to tie together online identities?

It seems to allow multiple profiles, and also hashed/obfuscated proofs, so I assume I could have an ‘online’ profile and a separate IRL one, and link to e.g. my GH account from both, but without people easily being able to jump from one to the other? (People with a more security-oriented brain please confirm I’ve understood this right!)
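
For what it’s worth, my rough mental model of how these proofs work is something like the sketch below (Python, with a made-up fingerprint and proof URL; definitely not Keyoxide’s actual verification code): the profile points at a resource that only the account owner could have published, and a verifier checks that the resource mentions the key.

```python
# A minimal sketch of the general pattern behind decentralised identity proofs.
# FINGERPRINT and PROOF_URL are hypothetical; this is not Keyoxide's actual code.
import urllib.request

FINGERPRINT = "ABCD1234ABCD1234ABCD1234ABCD1234ABCD1234"             # hypothetical key fingerprint
PROOF_URL = "https://gist.github.com/example-user/0123456789abcdef"  # hypothetical proof location

def proof_checks_out(url: str, fingerprint: str) -> bool:
    """Fetch the claimed resource and check it mentions the key fingerprint."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    return fingerprint in body

# If this returns True, whoever controls the key also controls the account
# that published the proof - which is all such a proof can establish.
# print(proof_checks_out(PROOF_URL, FINGERPRINT))
```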

This sort of solution feels like the sweet spot between some awful “real names” policy, which I’m sure we’d all hate, and doing nothing. At the very least, being able to establish that people are the same e.g. here and on Matrix, and that they have some clear prior history of existing e.g. on GH, might help mitigate supply-chain attacks like the one we saw against xz-utils recently?

4 Likes

I use it and it is good.

3 Likes

aspe:keyoxide.org:T4WT3W6TSYLEXMF7ASR4HDQC5Y

:slight_smile:

1 Like

What problem would it solve for Aux?

I was thinking of these sorts of things:

  • “is this user I’m talking to on Matrix the same person I’m talking to on Discourse?”
  • “is the user submitting this PR the same person I was talking to on Discourse?”
  • “is this user a real person, even if pseudonymous, as opposed to a nation-state actor trying to backdoor the project, like ‘Jia Tan’?”

Obviously the core people here, and those with a particular security focus, might recommend stronger processes, etc., particularly further down the road. I just felt that voluntary Keyoxide adoption might be a good fit at this stage, providing a bit more certainty while still allowing people to be pseudonymous.

1 Like

This issue is one of social trust. I haven’t seen a technology that actually solves this problem. Most ideas seem more along the lines of adding technical complexity - or Security Theater.

I’m even gonna be heretical enough to claim that the most feasible defence against these forms of social engineering is building strong human connections :wink:

To give an example of what I think a strong human connection could look like:

  • Everybody is free to participate in Aux: you just send in a PR, no matter whether you operate under a pseudonym, anonymously, or by your real name.
  • SIG or committee members know each other well, have possibly met IRL, know each other’s IRL identities, and maybe have formed friendships or some other form of “strong” social connection, even if they operate under a nick/pseudonym online.
  • Some SIG/committee members have verified each other with government IDs, established the connection between those IDs and their online identities, and expressed trust through documented procedures and maybe PGP key signing or other web-of-trust modalities.
    The Security Committee is considering this kind of arrangement.
6 Likes

Oh, agreed, it’s ultimately a human judgement, not something technology can completely take over. Doesn’t mean you can’t let it do a little bit of the legwork for you!

1 Like

Right now I think Aux should stay pretty lean/simple. I think GitHub profiles and this Discourse are good enough for verification.

But that’s a really cool tool I might use in other projects. Once Aux is bigger, it might be good to have it for security/auth reasons.

5 Likes

It’s similar to code signing on GitHub or whatever platform: you basically associate your accounts across platforms so people have some kind of idea that “yeah, this person is who they claim to be.”

2 Likes

Unless I overlooked something in the documentation, it doesn’t look like that to me.
If I say “I am X” and then put a signature onto that claim, the signature does not improve the legitimacy of the claim: I’m asking you to trust it based solely on information I gave you.
It doesn’t matter how elaborately worded (or cryptographically enhanced) the claim is; all you have is my word.
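
To make that concrete, here is a toy sketch (Python, using the `cryptography` package; the claim text is made up) showing that a self-signed claim verifies perfectly well no matter who wrote it:

```python
# Toy demonstration: anyone can generate a fresh key and sign any claim.
# The signature verifies, but it says nothing about who the signer really is.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

claim = b"I am X, a well-known and trusted maintainer"

impostor_key = Ed25519PrivateKey.generate()   # brand-new key, made seconds ago
signature = impostor_key.sign(claim)

# This does NOT raise InvalidSignature: the signature is cryptographically fine.
impostor_key.public_key().verify(signature, claim)

# All that has been proven: "whoever holds this key signed this sentence".
# Nothing connects the key to the person named in the claim.
print("signature verifies; identity is still just a claim")
```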

Digital trust that can be verified to be more than a bare claim typically relies on one of two things:

  • A trusted third party with some authority vouching for the claim. This often builds some form of hierarchical trust; it’s the principle behind TLS certificates.
  • One or more individuals putting their reputation behind my claim, e.g. the web-of-trust pattern used with PGP key signing (see the sketch after this list).
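
As a toy sketch of that second bullet (all key names invented; this is not real PGP tooling): a key only becomes trustworthy to me if a chain of signatures from keys I already trust reaches it.

```python
# Toy web-of-trust check: is a key vouched for via a chain of signatures?
from collections import deque

# who_signed[k] = set of keys that k has signed, i.e. vouched for
who_signed = {
    "my-key":       {"alice-key"},
    "alice-key":    {"bob-key"},
    "bob-key":      {"newcomer-key"},
    "stranger-key": set(),
}

def vouched_for(start: str, target: str, graph: dict[str, set[str]]) -> bool:
    """Is there a chain of signatures leading from a key I trust to the target?"""
    seen, queue = {start}, deque([start])
    while queue:
        key = queue.popleft()
        if key == target:
            return True
        for nxt in graph.get(key, set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(vouched_for("my-key", "newcomer-key", who_signed))  # True: reputation chain exists
print(vouched_for("my-key", "stranger-key", who_signed))  # False: nobody vouches for them
```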

The TL;DR here:
Adding crypto to information does not make it more trustworthy or more legitimate per se.

+1
Staying lean is IMO more helpful in the long run than adding complexity…

2 Likes

This thread is getting a little tired now, and at the risk of repeating myself: the idea is that if someone has earned your trust on one platform, something like Keyoxide can prove that a user on a different platform is the same person.

You don’t need cryptography for this - you can (as I think @isabel does) put cross-links to and from all your profiles and a website, etc. Keyoxide just makes this easier to set up and to check.

Once there is any sort of cross-linking, it becomes easier to check someone’s history of public forum posts, forge contributions, and so on, which provides some evidence for judging whether they’re ‘real’, as opposed to a sock-puppet, a nation-state actor, etc.
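
As a rough illustration of that non-cryptographic version (Python, made-up URLs): two profiles count as cross-linked only if each one publicly points at the other, so an impersonator would have to be able to edit both.

```python
# Rough illustration with hypothetical URLs: treat two profiles as linked only
# if each page publicly mentions the other one.
import urllib.request

PROFILE_A = "https://github.com/example-user"           # hypothetical
PROFILE_B = "https://forum.example.org/u/example-user"  # hypothetical

def mentions(page_url: str, other_url: str) -> bool:
    """Fetch one profile page and check whether it links to the other."""
    with urllib.request.urlopen(page_url, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    return other_url in body

def mutually_linked(a: str, b: str) -> bool:
    return mentions(a, b) and mentions(b, a)

# print(mutually_linked(PROFILE_A, PROFILE_B))
```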

2 Likes

FWIW, I’ve already been maintaining a Keyoxide profile, and I’ve added this Discourse to it (had to update the expiry on my PGP key, so figured may as well). Not 100% sure how we could use Keyoxide profiles at the moment, but it’s an option for those who want it.

4 Likes

If nothing else, it’s great for those of us with terrible middle-aged memories who can’t keep track of who’s who across different platforms :)

1 Like