That said, the only way to ensure all aspects of my digital self remain in my control is to never release information to anyone, because as soon as you give information away you have lost control of it. That isn't true. There are enforcement mechanisms within smart contracts such as collateral, reputation, etc. If there are leaks, then the leaker faces consequences. They could lose both their security deposit and the most coveted thing they can own: their reputation.
Just as there are non-disclosure agreements, there can be a distributed smart contract form. In fact I outlined how to do it somewhere on the Internet.
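To make the collateral idea concrete, here is a minimal sketch of the deposit-backed NDA logic being described. `DepositNDA`, the `proven_leaker` input, and the payout rule are all illustrative assumptions, not any real smart-contract platform's API; a real version would also need an oracle or arbitration step to establish that a leak actually occurred.

```python
from dataclasses import dataclass, field

@dataclass
class DepositNDA:
    """Toy model of a deposit-backed NDA: every signer escrows
    collateral, and a proven leaker forfeits theirs to the others."""
    deposit: float
    escrow: dict = field(default_factory=dict)  # party -> locked funds

    def sign(self, party):
        # Each signer locks the agreed collateral when joining.
        self.escrow[party] = self.deposit

    def settle(self, proven_leaker=None):
        # Everyone gets their deposit back unless a leak was proven
        # against them; a proven leaker's stake is split among the rest.
        payouts = dict.fromkeys(self.escrow, self.deposit)
        if proven_leaker in payouts:
            forfeited = payouts[proven_leaker]
            payouts[proven_leaker] = 0.0
            others = [p for p in payouts if p != proven_leaker]
            for p in others:
                payouts[p] += forfeited / len(others)
        return payouts

nda = DepositNDA(deposit=100.0)
nda.sign("Alice")
nda.sign("Bob")
print(nda.settle("Bob"))  # leak proven against Bob: he forfeits to Alice
print(nda.settle(None))   # no leak: both deposits are returned
```

The whole enforcement question then reduces to how `proven_leaker` gets decided, which is exactly the hard part the rest of this thread argues about.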
I beg to differ with you on this point luckybit, it is true. Sure, you can establish incentives to minimize risk, but the fact remains that the only SURE way to keep information from spreading is to limit access, which starts at the source. Reputation means nothing to psychopaths and the disreputable, and game theory is based on the self-interest of rationally motivated people, not the irrational and sadistic. Consequences don't stop leaks; they just deter them.
Assuming Snowden is telling the truth, how much control did the government have over the info he was trusted to keep secret? Don't you think the access controls, consequences, etc. used by the government were about as strong as they come? Did they work? Obviously not.
The limitation of any access control methodology is the people that use it. There is always a way to provide a greater incentive to overcome fear of retribution, ostracism, a spoiled rep, or other consequences. Game theory fails for those following an irrational belief system, or those whose values may transcend their own safety or personal desires. People who don't care about their life, who may be terminal, or who value a loved one more than their own life are tough to design disincentives for.
Life is risky. There is no way to remove all risk, nor should that be our goal, so perhaps sufficient incentives and disincentives could be used to reduce risk to acceptable levels. What is acceptable? That is highly subjective. As was pointed out in the Corbett Report video, it also changes over time, as it did for social security numbers, which were never supposed to be used for ID purposes, but look at their use today.
Many do confuse privacy and anonymity; that's quite true. They are very distinct and different. At our current place in history, where many people and organizations exist that wish to do harm to others, both privacy and anonymity are useful tools. I don't see that changing in my lifetime.
Ideally the association between the name of each delegate and the IP address of the delegate servers they operate should be very difficult to determine. Knowledge of such associations provides additional attack vectors and weakens security. Does the VPS host company need to know the real world identity of the operator? Wouldn't it be better for the delegates and for the security of BitShares if the delegates were anonymous VPS users that pay their bill on time?
That is the simple fact. No technological tool can guarantee that information goes only to the people you give explicit permission to have it. It leaks.
The operative word above, "only", is applicable at the information's source; anywhere else you rely on the subjective actions and judgements of others, and thus you lose the absolute guarantee.
Cryptography, smart contracts, reputation, and consequences. Of course you will have some people who don't care, or who will choose to sacrifice some property they own in order to release your information to the public. The point is that they would have to put something at risk which to you is of equal value to the information they possess.
This can all be mapped out through game theory. As long as there is equal risk taken by Alice and Bob, it would be mutual destruction if either one breaches the contract. On the other hand, if Alice or Bob determines they don't care about what they would lose by being untrustworthy, then you would have a point and all the game theory breaks down.
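The "equal risk" claim can be sketched as a toy payoff calculation. The numbers here are illustrative assumptions: each party holds a secret worth V to the other and posts a deposit D, chosen equal to V per the equal-risk condition.

```python
V = 100  # what each party's secret is worth (assumed)
D = 100  # deposit each party posts, set equal to V

def payoff(i_breach, they_breach):
    # I gain V by leaking their secret, forfeit my deposit D if my
    # breach is proven, and lose V if they leak my secret.
    gain = V if i_breach else 0
    forfeit = D if i_breach else 0
    damage = V if they_breach else 0
    return gain - forfeit - damage

# With D >= V, breaching never improves my payoff, whatever they do:
assert payoff(True, False) <= payoff(False, False)
assert payoff(True, True) <= payoff(False, True)
# And mutual breach really is mutual destruction vs. mutual compliance:
assert payoff(True, True) < payoff(False, False)
```

The model also shows where it breaks, as noted above: if a party assigns no value to D (they don't care what they lose), the forfeit term drops out and breaching becomes costless.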
So ultimately you want the person who possesses your information to have more to lose by leaking it than by keeping it. A security deposit is one way to do it, so that they lose a lot of money if it can be proved that they leaked it, but this might not be effective in a lot of cases. Reputation is also important to people, and if a person has a reputation for leaking secrets they'll never again be trusted to keep secrets.
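That "more to lose by leaking than by keeping" condition can be written down directly. This is a hypothetical back-of-the-envelope model, not anything from an actual contract design; `p_caught` in particular is an assumption added to capture the point that consequences only deter when leaks can be proven.

```python
def leaking_is_deterred(leak_gain, deposit, reputation_value, p_caught=1.0):
    """A self-interested party is deterred when the expected loss from
    a proven leak (deposit plus the value of future trusted dealings,
    discounted by the probability of being caught) exceeds the gain
    from leaking. All inputs are hypothetical numbers."""
    expected_loss = p_caught * (deposit + reputation_value)
    return expected_loss > leak_gain

# A valuable reputation can deter even when the deposit alone is small:
assert leaking_is_deterred(leak_gain=500, deposit=100, reputation_value=1000)
# But a low chance of being caught can undo both disincentives:
assert not leaking_is_deterred(leak_gain=500, deposit=100,
                               reputation_value=200, p_caught=0.5)
```

This also restates the earlier objection in the thread: for someone whose `reputation_value` is genuinely zero, or who is irrational about the inputs, no deposit you can realistically demand makes the inequality hold.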
It's not perfect, but I would think you'd have about as much ability to trust people in a decentralized system as in a centralized one, by similar mechanisms, as long as the average person has more to lose by breaking the contract than by following it. Isn't that why we trust delegates? The delegates have more to lose by being dishonest than by being honest, and as a result most delegates will choose to be honest for as long as possible.
So the question is, would you rather trust individuals you hand picked or trust random strangers working for big corporations or in government? I think at least when you pick them you can have more control than you have right now. You don't get to pick them right now.
Sure, if I pick ALL of my agents I would have the best possible (though not absolute) control over information leakage. Ultimately the information must be put to use, be it my address, date of birth, income, or whatever. Until automated vehicles can deliver my packages, a human I didn't explicitly grant access to will need to know my address. That's the weak link in access control: the point where the info is "in the clear".
Whether identity registration would be best centralized vs. decentralized is a matter of how the information will be used and how access to it is controlled. I can see benefits to both approaches.