> How is that achievable?
The core ill is aggregated data, because that's what enables mass surveillance, data mining, etc.
The collection actions themselves are almost immaterial. Without persistence they must be re-performed for each request, which naturally creates a throughput bottleneck and makes "for everyone" untenable.
If we agree that aggregated data at rest is the problem, then addressing it would look like this:
1. Classify all data holders at scale into a regulated group
2. Apply initial regulations requiring holders:
- To respond to queries for copies of personal data held
- To update data on request, or be liable in court for failing to do so
- To validate that counterparties apply basic security due diligence before transferring data to them (or the transferor also faces liability)
- To maintain a *full* chain of custody of data (from originator through every intermediate party to holder) so that leaks / misuse can be traced
- To file a yearly report with the federal government, made public, on the types and amounts of data held and the counterparties it was transferred to
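The chain-of-custody requirement above could be sketched as a minimal record structure. This is a hypothetical illustration, assuming an append-only log of transfers per record; the type names, fields, and parties here are invented for the sketch, not drawn from any existing statute or system:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical types -- names and fields are illustrative assumptions.

@dataclass(frozen=True)
class CustodyEntry:
    holder: str   # party holding the data after this transfer
    source: str   # party it was received from (the originator for first-party collection)
    date: str     # ISO date of the transfer

@dataclass
class PersonalDataRecord:
    subject: str                                      # person the data describes
    chain: List[CustodyEntry] = field(default_factory=list)  # append-only transfer log

    def record_transfer(self, source: str, holder: str, date: str) -> None:
        self.chain.append(CustodyEntry(holder=holder, source=source, date=date))

    def provenance(self) -> List[str]:
        """Every party that has held the data, originator first."""
        if not self.chain:
            return []
        return [self.chain[0].source] + [e.holder for e in self.chain]

# A leak at the final holder can then be traced back through every intermediary:
rec = PersonalDataRecord(subject="alice")
rec.record_transfer(source="RetailerX", holder="BrokerY", date="2023-01-10")
rec.record_transfer(source="BrokerY", holder="InsurerZ", date="2023-06-02")
print(rec.provenance())  # ['RetailerX', 'BrokerY', 'InsurerZ']
```

The point of the structure is that tracing a leak becomes a lookup rather than an investigation: if InsurerZ leaks, the log already names every party the data passed through on its way there.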
The initial impediment to regulatory action is Google, Meta, Equifax, etc. saying "This problem is too complex and you don't understand it." It's not. But the first step is classifying and documenting the problem.