Accounting: aka COUNT ALL THE THINGS!!!
It’s becoming more fashionable in “secure systems” circles to exhaustively log activity for potential forensic analysis later. That’s good, but I would go further: any resource, whether it’s bandwidth, processing or storage, should be monitored for utilization, with records kept of who used what when, as the default design choice - only skip resource accounting when a “very good reason” compels you not to.
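As a concrete (if simplified) illustration, here’s a Python sketch of the kind of “who used what, when” record that default-on accounting might emit for every metered operation. The field names and the record_usage helper are my own assumptions, not a prescribed schema:

```python
# A minimal sketch of a default-on usage record: who used what, when.
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class UsageRecord:
    actor_id: str        # verified account, established pseudonym, or "anonymous"
    actor_class: str     # "identified" | "pseudonymous" | "anonymous"
    resource: str        # e.g. "bandwidth", "cpu", "storage"
    amount: float        # units consumed (bytes, cpu-seconds, ...)
    timestamp: float     # epoch seconds
    record_id: str = ""

    def to_json(self) -> str:
        return json.dumps(asdict(self))

def record_usage(actor_id: str, actor_class: str, resource: str, amount: float) -> UsageRecord:
    """Build an accounting record and hand it to whatever ledger backs the system."""
    rec = UsageRecord(actor_id, actor_class, resource, amount,
                      timestamp=time.time(), record_id=str(uuid.uuid4()))
    # In practice this would append to a durable, append-only log.
    print(rec.to_json())
    return rec

record_usage("user-42", "pseudonymous", "bandwidth", 1_048_576)  # 1 MiB served
```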
Why? Why “waste” resources / capacity on “excessive” accounting? To securely support anonymous and pseudonymous usage on the open internet.
First, let’s not get too concerned about the bits; they’re almost incomprehensibly inexpensive at the single-user level. But as systems scale to thousands and millions of users, the costs do become significant. The real cost concerns come when adversarial users, particularly automated adversarial users, start consuming large amounts of your resources in exchange for no, or negative, value to your system. Brute-force denial-of-service attacks are the most obvious, but with anonymous, AI-powered bots a system can be overwhelmed with garbage that’s difficult to automatically filter (not exactly on-target, but conceptually near: Generative Adversarial Networks).
So, how does accounting help?
First, assuming your user base isn’t 100% anonymous, you can apportion capacity for your positively identified and established pseudonymous users. Anonymous users might be granted only 10% of any given resource: enough for genuine anonymous human users to get access, but not enough for AI bots and other bad actors to completely overwhelm the system. Various methods (content moderation is a big one) can identify bad actors and “keep the noise down to a dull roar” for legitimate users.
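Here’s a minimal sketch of that apportionment idea, assuming three trust classes and the 10% anonymous slice mentioned above. The share numbers and the ResourcePool class are illustrative, not a specific implementation:

```python
# Apportion a shared resource by trust class, so anonymous traffic can never
# consume more than its slice. Shares are assumed numbers for illustration.
CAPACITY_SHARE = {
    "identified": 0.60,    # positively identified users
    "pseudonymous": 0.30,  # established pseudonyms
    "anonymous": 0.10,     # the ~10% slice mentioned above
}

class ResourcePool:
    def __init__(self, total_units: float):
        self.total = total_units
        self.used = {cls: 0.0 for cls in CAPACITY_SHARE}

    def try_allocate(self, actor_class: str, units: float) -> bool:
        """Grant the request only if this class is still under its apportioned share."""
        limit = CAPACITY_SHARE[actor_class] * self.total
        if self.used[actor_class] + units > limit:
            return False  # class quota exhausted; anonymous bots hit this wall first
        self.used[actor_class] += units
        return True

pool = ResourcePool(total_units=1000.0)       # e.g. 1000 requests/minute of capacity
print(pool.try_allocate("anonymous", 50.0))   # True: under the 100-unit anonymous slice
print(pool.try_allocate("anonymous", 200.0))  # False: would exceed the 10% apportionment
```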
Thorough resource utilization monitoring also enables automatically adding capacity when it’s needed, spinning down capacity when it’s not, and load forecasting to serve users’ needs just in time as they utilize the system.
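A toy sketch of the scale-up / scale-down decision that those utilization records make possible; the thresholds and the smoothing window are assumptions for illustration, not recommended values:

```python
# Decide a new instance count from a short window of per-instance utilization (0.0 - 1.0).
from statistics import mean

def desired_instances(current_instances: int, recent_utilization: list[float],
                      scale_up_at: float = 0.80, scale_down_at: float = 0.30) -> int:
    load = mean(recent_utilization)
    if load > scale_up_at:
        return current_instances + 1        # add capacity before users feel the squeeze
    if load < scale_down_at and current_instances > 1:
        return current_instances - 1        # spin down what isn't needed
    return current_instances

print(desired_instances(3, [0.85, 0.90, 0.88]))  # -> 4
print(desired_instances(3, [0.15, 0.20, 0.10]))  # -> 2
```

The same utilization history can feed the forecasting side, e.g. pre-warming an extra instance before a recurring weekday-morning spike that the records reveal.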
Logging of resource usage can take various forms, from a simple FIFO ledger up through blockchains. One interesting (to me, at least) concept is a kind of free-market barter exchange of services among automated systems. In a federation of loosely cooperating small systems, a demand spike could be handled by load sharing. If your “front end app” is signed and recognized by a large group of small systems, during peak load demands many independent instances could step up and provide front-end interface service to users. Same for the “back end”: data replication, backup, etc. Why would a large number of system operators choose to serve your application? Maybe because they are being paid to do so? Whether in hard currency or an exchange of services / goodwill, the operators who serve your application at your times of need can accrue credit, which servers that you control might pay back to them at some point in the future.
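To make the barter-credit idea concrete, here’s a rough sketch of an append-only credit ledger among federated peers. The peer names, units, and the CreditLedger class are hypothetical:

```python
# Track service rendered between federated peers as credit owed, to be paid back
# later in kind (or in currency, with the caveats noted below).
from collections import defaultdict

class CreditLedger:
    def __init__(self):
        self.entries = []                      # append-only record of service rendered
        self.balances = defaultdict(float)     # net credit owed to each peer

    def record_service(self, provider: str, beneficiary: str, units: float, note: str = ""):
        """Provider served `units` of work on behalf of beneficiary; beneficiary owes credit."""
        self.entries.append((provider, beneficiary, units, note))
        self.balances[provider] += units
        self.balances[beneficiary] -= units

ledger = CreditLedger()
# peer-a's instances served our front end during a demand spike...
ledger.record_service("peer-a", "my-system", 120.0, "front-end requests during peak")
# ...and later our servers provide backup storage, working the balance back toward zero.
ledger.record_service("my-system", "peer-a", 80.0, "backup storage provided")
print(dict(ledger.balances))   # {'peer-a': 40.0, 'my-system': -40.0}
```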
I see a regulatory / tax liability trap waiting for systems that trade large amounts of hard currency equivalents back and forth at high frequency, but hopefully it can be argued that the barter is more symbolic and nothing of taxable value is being exchanged.
So, if you’re tracking who is using what when, and this data is helping you filter griefers from your systems and automate load sharing, where does this filtering and load sharing get done in practice? That will be Step Four: access-controlled reverse proxies.