This isn’t precisely an answer to Sam’s questions; it’s grounded in theory rather than experience. Thanks for sharing that list of ‘off-the-shelf’ decentralization patterns. I hadn’t seen it before and it seems really useful. I’ve been reading Spritely’s documentation and trying to understand ocaps off and on for the last several weeks. I think Spritely has the biggest potential for impact in the areas of safety, consent, and moderation. I believe Matrix has correctly identified moderation as the most important problem for any social system. Ocaps seem particularly well-suited to these problems, and I’ve been thinking about ways to let users control the interactions they have with others via ocaps.
There are several existing strategies for moderating interactions. The most basic is single-user controls. Typically a user opts in to another’s posts by following or subscribing, and requiring approval of follow requests lets them control who can see their posts. Blocking and muting prevent interactions, but again only at the single-user level, which doesn’t scale well. Boosts can cause a feed to grow beyond the list of subscribed-to users, and posts can always be copied and shared beyond one’s approved followers. The social radius slider pattern is a coarser control, but it’s meant to be temporary. Blocking an entire instance is another coarse control that may catch good users along with the bad, but it may be the only option for a user facing an onslaught of abuse. I see a need for intermediate-level controls: fine enough to avoid punishing good actors for the actions of the bad, but requiring minimal intervention from the users they serve.
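To make the scaling problem concrete, here’s a minimal sketch of these single-user controls (plain Python, not Spritely’s API; every name here is my own invention). Notice that every decision is a per-actor check, which is exactly why this approach buckles under coordinated abuse:

```python
from dataclasses import dataclass, field

@dataclass
class UserControls:
    approved_followers: set[str] = field(default_factory=set)
    pending_requests: set[str] = field(default_factory=set)
    blocked: set[str] = field(default_factory=set)  # actor can't interact at all
    muted: set[str] = field(default_factory=set)    # actor's posts are hidden

    def request_follow(self, actor: str) -> None:
        if actor not in self.blocked:
            self.pending_requests.add(actor)

    def approve(self, actor: str) -> None:
        self.pending_requests.discard(actor)
        self.approved_followers.add(actor)

    def should_show(self, sender: str) -> bool:
        # Per-actor checks like this are what fails to scale:
        # every bad actor has to be handled one at a time.
        return sender not in self.blocked and sender not in self.muted
```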
There are social systems that people graft on top of software protocols; sharing and subscribing to blocklists and the #fediblock tag are examples. Building these or capabilities like them into the protocol itself, for seamless integration and automatic action, seems like it would help with the granularity and scalability problems mentioned above. I believe Matrix has done something like this, but I’m not familiar with it or its capabilities.
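As a rough illustration of what protocol-level blocklist subscriptions might look like (purely hypothetical; this is not based on Matrix or any existing protocol), a user’s effective blocklist could be the union of their own blocks and any shared lists they subscribe to, applied automatically as those lists change:

```python
class BlocklistSubscriber:
    def __init__(self):
        self.own_blocks: set[str] = set()
        self.subscribed_lists: dict[str, set[str]] = {}  # list id -> actors

    def subscribe(self, list_id: str, actors: set[str]) -> None:
        self.subscribed_lists[list_id] = set(actors)

    def on_list_update(self, list_id: str, actors: set[str]) -> None:
        # Automatic action: no per-actor intervention by the subscriber.
        if list_id in self.subscribed_lists:
            self.subscribed_lists[list_id] = set(actors)

    def is_blocked(self, actor: str) -> bool:
        return actor in self.own_blocks or any(
            actor in blocked for blocked in self.subscribed_lists.values()
        )
```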
Curating one’s list of followers is somewhat analogous to user sign-up. Sign-ups can be completely open, or require manual review and approval. Requiring an invitation from an existing user falls between these extremes. Lobsters uses an interesting variant of this where invited users are stored with a reference to the user who invited them, forming an invitation tree. If a user is found to be inviting bad actors, their ability to invite others can be cut off, and everyone who was invited by them or by someone they invited (etc.) can be removed or limited all at once.
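To make the invitation-tree idea concrete, here’s a hedged sketch (the names and structure are mine, not Lobsters’ actual implementation). Revoking one user implicitly cuts off their entire subtree, because activity checks walk the chain of inviters:

```python
class InvitationTree:
    def __init__(self, root: str):
        self.inviter: dict[str, str | None] = {root: None}  # user -> who invited them
        self.revoked: set[str] = set()

    def invite(self, inviter: str, invitee: str) -> None:
        assert self.is_active(inviter), "revoked users can't invite"
        self.inviter[invitee] = inviter

    def is_active(self, user: str) -> bool:
        # A user is cut off if they, or anyone up their invitation chain, was revoked.
        if user not in self.inviter:
            return False
        node: str | None = user
        while node is not None:
            if node in self.revoked:
                return False
            node = self.inviter[node]
        return True

    def revoke(self, user: str) -> None:
        self.revoked.add(user)  # everyone they invited (transitively) goes too
```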
There are surely other possibilities that I’m not aware of. Given these strategies, I recently had the idea of letting a user choose among methods of approving other users. At one extreme you have no restrictions; at the other, individual approval. In between you might have a tree system, where approved users can invite other users and so on, and any subtree can be restricted at any time. Another possibility is something like a web of trust: if a sufficient number of approved users mark another as trusted, that user is allowed to send posts to, or view posts from, the approving user. If a user has low standards of trust, their mark of trust carries less weight. Implementing this in a way that’s intuitive, usable, and useful seems like quite a challenge.
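The web-of-trust variant could look something like the sketch below. The weighting formula is an arbitrary illustration of the idea that indiscriminate endorsers should count for less; everything here is an assumption, not a worked-out design:

```python
def endorsement_weight(num_endorsements_given: int) -> float:
    # A user with low standards of trust (many endorsements) counts for less.
    return 1.0 / (1 + num_endorsements_given)

def is_trusted(candidate: str,
               endorsements: dict[str, set[str]],  # endorser -> actors they endorse
               approved: set[str],
               threshold: float = 1.0) -> bool:
    # Sum the weighted endorsements of 'candidate' from already-approved users.
    score = sum(
        endorsement_weight(len(endorsed))
        for endorser, endorsed in endorsements.items()
        if endorser in approved and candidate in endorsed
    )
    return score >= threshold
```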
I think cataloging these (and other) moderation and user-control strategies would be useful, but that’s maybe not what you had in mind. At a lower level, what would be useful to me is a catalog of patterns for controlling access to an ocap: for example, enforcing exclusive but transferable access as in *From Capabilities To Financial Instruments*, audited access, an access tree as described above, and other ocap patterns.
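As one example of an entry such a catalog might contain, here’s a minimal sketch of the classic revocable-forwarder (“caretaker”) pattern: the holder receives only a proxy, and revoking it severs access without touching the underlying object. Plain Python stands in here for what an ocap-secure system like Spritely’s Goblins would express natively:

```python
class Revoked(Exception):
    pass

def make_caretaker(target):
    state = {"target": target}

    class Proxy:
        def __getattr__(self, name):
            if state["target"] is None:
                raise Revoked("capability has been revoked")
            return getattr(state["target"], name)

    def revoke():
        state["target"] = None  # the proxy is now permanently dead

    return Proxy(), revoke

# Usage: hand out `proxy` to the other party, keep `revoke` for yourself.
# proxy, revoke = make_caretaker(some_object)
```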