Author Topic: People who know what a DHT is...tell me if you think this is possible


Offline VoR0220

Code:
Are you still waiting for your POW transaction to confirm,
or are you already discussing sub-10 secs block confirmation times?

Yes. We are discussing that. I'd prefer we do it securely.
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline xeroc

Code:
Are you still waiting for your POW transaction to confirm,
or are you already discussing sub-10 secs block confirmation times?

Offline VoR0220

Quote
No, I think the best case scenario will be around 3 or 4 seconds. It all depends on how quickly the database commitment process can be done. In other words, how much can it be done in parallel, and even if it is mostly serial, are modern processors fast enough to consistently generate that root hash in less than a second (meaning even for very heavy database modifications). If so, and assuming each block contains enough signatures from a (super) majority of the witnesses, either through including a signature from each witness in every block or using a threshold signature scheme to reduce block chain bloat (although when we are at the scale of 100K TPS on average, an extra 100 signatures in every block is negligible), then I think it can be brought down to 3 block intervals (i.e. 3 seconds).

well....that's a damn shame. It seems I may have obsessed over this proposal for nothing. Then again, there may be further applications for this idea...if we wanted to resurrect DNS in particular.

Quote
Also, I think there is plenty of incentive for the wallet hosts to run their full nodes (they get payment from their customers directly and/or from customer referrals). And there is of course enough incentive for the active witnesses to run full nodes since they get paid by the blockchain. But I do worry a little about all the other full nodes we would like to have participating in the network. Not the least of which are all those standby witnesses we want to make sure are ready to go at a moment's notice.


How are they incentivized by being paid by the blockchain? Why couldn't they just do this through a web wallet and refer people to the network? Is there some added benefit to running a full node in this scenario?
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline BunkerChainLabs-DataSecurityNode

+-+-+-+-+-+-+-+-+-+-+
www.Peerplays.com | Decentralized Gaming Built with Graphene - Now with BookiePro and Sweeps!
+-+-+-+-+-+-+-+-+-+-+

Offline arhag

Quote
You don't think it is at all possible to bring the confirmation down to 2 seconds tops in the 2.0 setup?

No, I think the best case scenario will be around 3 or 4 seconds. It all depends on how quickly the database commitment process can be done. In other words, how much can it be done in parallel, and even if it is mostly serial, are modern processors fast enough to consistently generate that root hash in less than a second (meaning even for very heavy database modifications). If so, and assuming each block contains enough signatures from a (super) majority of the witnesses, either through including a signature from each witness in every block or using a threshold signature scheme to reduce block chain bloat (although when we are at the scale of 100K TPS on average, an extra 100 signatures in every block is negligible), then I think it can be brought down to 3 block intervals (i.e. 3 seconds).

Quote
My concern with the web wallet approach is the single point of failure problem and the lack of incentive to run a full node.

Well, my concern with single points of failure is a single point of trust failure, which my solution helps solve with regard to wallet hosts. I am not worried about the centralized wallet host having enough redundancy to keep their service operating. And in the worst case scenario, users can have different wallets from different providers accessing the same account.

Also, I think there is plenty of incentive for the wallet hosts to run their full nodes (they get payment from their customers directly and/or from customer referrals). And there is of course enough incentive for the active witnesses to run full nodes since they get paid by the blockchain. But I do worry a little about all the other full nodes we would like to have participating in the network. Not the least of which are all those standby witnesses we want to make sure are ready to go at a moment's notice.

Offline VoR0220

You don't think it is at all possible to bring the confirmation down to 2 seconds tops in the 2.0 setup? I agree with your analysis about the witnesses needing to be known, which does create problems. I was just thinking about ways to alleviate that, and I think your answers are excellent solutions. My concern with the web wallet approach is the single point of failure problem and the lack of incentive to run a full node. I believe there should be some kind of process involving full nodes that aren't witnesses/delegates/workers, as it contributes to the security of the network. In any case, I do agree we need to implement the security deposit system. I've seen similar proposals to deal with the "nothing at stake" problem and I 100% agree that a deposit is a perfect way to eliminate/alleviate that problem.
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline arhag

tl;dr: We need a way for lightweight clients to keep track of changes to the set of active witnesses by relying on the chain of trust of a large enough quorum of previous active witnesses and syncing regularly. I describe a process that can do this, but it has failure modes (hopefully rare) of two types: one where the user knows it failed, in which case they can rely on their social network of trust to get a trusted checkpoint to get back in sync; and, even worse, a second mode where the user doesn't know it failed and is therefore at risk of a double spend attack, but only through a collusion of their wallet host and the majority of the old active witnesses that the lightweight client was last aware of prior to the syncing process failing. Then we need additional changes to the blockchain protocol to enable witnesses to commit the entire database after each block to a single root hash, which will be included in the block headers some fixed number of blocks later (this delay prevents the commitment process from increasing block intervals), as well as to allow any full node to spot check these root hashes and get financially rewarded for pointing out bad root hashes recently signed off by witnesses. With all of these blockchain protocol modifications in place, we can then allow any full node (such as the lightweight wallet host server) to independently provide small proofs about the state of the blockchain/database as of 1 minute or more in the past to the lightweight clients, which can verify them assuming their lightweight syncing process hasn't been caught in a failure mode. This means that normally, if the lightweight client waits approximately 1 minute (this is assuming 101 witnesses and 1 second block intervals, but could in theory be reduced further through some other changes to the blockchain protocol), they are very likely to be protected against double spend attacks or other attacks that require the victim to not know the true state of the blockchain/database.

The most important thing to keep in mind is that without a full blockchain to validate, a user cannot know in a trustless way how the votes authentically change from maintenance interval to maintenance interval, and therefore cannot know who the new set of N witnesses needs to be over time. If you don't know who the N witnesses should be, you have no root of trust for whatever scheme you come up with.

So the first thing to do is to provide a low-trust compromise solution that is applicable to lightweight nodes. By low-trust I mean that you cannot know for sure who the set of N witnesses should be, but you can have pretty high confidence that your set hasn't diverged from the true set. If we can rely on a quorum of the original witnesses to sign off on the set of the new witnesses (especially in a way that they can be provably fired, assuming they are still witnesses, if they lie), then you can follow this chain of set updates to the present. The problem is that if you haven't synced for a long time, it is possible that enough of the old set of witnesses have lost their position and therefore have nothing to lose by colluding to trick you. This is true even in the case of a full node, but with a full node there are more metrics one can use to figure out that they are likely on a fake chain (the same few witnesses consistently missing blocks for a long time without being voted out, and a lack of transaction activity from well-known big players on the blockchain). Nevertheless, it is good enough, especially if you sync the lightweight client somewhat frequently, because the likelihood of enough of the witnesses being removed since the last time you synced is low.

To make the above possible in a way that witnesses can be punished for lying, and with the client only needing to download block headers, it is necessary to somehow commit to the current set of witnesses in the block header. I have discussed an earlier proposal along these lines here. The main idea is that the protocol requires the block header to hold a hash of some data describing who the current active witnesses are, along with their corresponding signing keys (it can also include any important blockchain parameter changes that are relevant for the lightweight node's syncing procedure). The block header only stores the single hash (in addition to the Merkle root of the block's transactions), but if someone trusts that block header they can verify the authenticity of the data provided to them.

While a user's client is resyncing, if it receives a seemingly valid chain of block headers from its last trusted block header up to a later block header that is at the start of the oldest maintenance period in which the set of witnesses is equivalent to the one in the present maintenance period (assuming the set of witnesses has in fact changed since the last trusted block), the client must verify that this block can be (and in fact has been) signed off by enough of the current witnesses (as claimed for the current maintenance period and authenticated by that block header) who were also witnesses in the maintenance period of the last trusted block. Perhaps a reasonable choice of "enough" is that 80% of the active witnesses of the maintenance period of the last trusted block are still active witnesses of the current maintenance period and have all added signed blocks on top of the block in question. This way, if you trusted the set of witnesses at the time of the last trusted block (and the vast majority of them are still active witnesses in reality in the present, although your client has no way of knowing that itself), you can be pretty sure that the block in question is valid, because otherwise there is a high likelihood that you could prove many current witnesses double signed blocks using only the block headers you downloaded (the client can also keep just the relevant subset of information from the block headers that is needed for this proof, and prune even this data when it gets old, and therefore most likely useless, in order to save space). The remaining block headers in the maintenance period (and any later maintenance periods with the same set of witnesses) can then be synced normally to keep up with the present, and that new most recent block header becomes the trusted block header for the next time syncing is necessary.
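
To make that syncing check a little more concrete, here is a rough Python sketch of the quorum test a lightweight client could run when it reaches a maintenance boundary. The 80% threshold is just the example figure above, the witness identities are plain strings, and real signature verification is stubbed out entirely, so treat it as an illustration rather than a spec:

Code:
# A minimal sketch of the witness-set update check described above.
# Witness IDs are plain strings and signature checks are omitted.

import hashlib
from typing import Iterable, Set


def witness_set_digest(witnesses: Iterable[str]) -> str:
    """Hash of the (sorted) active witness set, as it would be committed in a block header."""
    data = "\n".join(sorted(witnesses)).encode()
    return hashlib.sha256(data).hexdigest()


def accept_new_witness_set(
    trusted_old_set: Set[str],     # witnesses the client last trusted
    claimed_new_set: Set[str],     # set claimed for the new maintenance period
    header_commitment: str,        # witness-set hash found in the block header
    confirming_signers: Set[str],  # witnesses who signed blocks on top of that header
    quorum_fraction: float = 0.8,  # the example "enough" threshold from above
) -> bool:
    # 1. The block header must actually commit to the claimed new set.
    if header_commitment != witness_set_digest(claimed_new_set):
        return False

    # 2. Count old (trusted) witnesses that are still active in the new set
    #    AND have signed off on blocks building on the header in question.
    carried_over = trusted_old_set & claimed_new_set & confirming_signers

    # 3. Require a large quorum of the old set; otherwise fall back to a
    #    trusted checkpoint instead of accepting the update.
    return len(carried_over) >= quorum_fraction * len(trusted_old_set)


if __name__ == "__main__":
    old = {f"wit{i}" for i in range(101)}
    new = (old - {"wit0", "wit1"}) | {"wit101", "wit102"}  # small turnover
    commitment = witness_set_digest(new)
    signers = new - {"wit50"}                              # almost everyone signed
    print(accept_new_witness_set(old, new, commitment, signers))  # True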

Of course, the above procedure leaves open the possibility that if there is high witness turnover (especially if the user hasn't synced in a while) the block header syncing will fail. Also, even if the witnesses haven't changed in reality for months, the lightweight client cannot know that, so if the last resync was months ago it is probably prudent to just assume block header syncing isn't trustworthy. In that case the user is just going to need to get a trusted checkpoint of a recent block from some source and add that to their lightweight client so they can resume syncing.

Now with all of that said, let's assume the problems of updating the set of witnesses to the present set, and thus automatically keeping up with the latest block headers with minimal trust, have been solved. The next step as far as lightweight client validation goes is to be able to prove facts about the state of the current database (not just the blockchain). This requires further changes to the protocol so that the database state can regularly be committed as a root hash into the block header (in such a way that efficient log(N) proofs become possible). All witnesses do this and check the root hash of the blocks other witnesses produce to determine whether the block is valid. Keep in mind the root hash in a block will be of the database state as of K blocks ago, where K is some small positive integer, in order not to add any delays to block generation. By building on such a valid block, each witness signs off on the hash of the database state submitted by other witnesses earlier. After 51 blocks have been added on top of a given block (in the case of 101 witnesses), the database state committed in that block (which was the state as of K blocks prior to that block) has been validated by a majority of witnesses and can be trusted as the valid root hash by the lightweight client. So if K < 8 and we have 1 second block intervals, a lightweight client can have a (most likely valid) root hash of the database state at a given time less than a minute later. Then any full node (such as the lightweight client host) can provide a log(N) proof (in size and in the computational complexity to verify the proof, where N is the number of objects in the database) of the existence (and in some cases even the non-existence) of a particular object in the database.
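
To illustrate what such a log(N) proof could look like, here is a toy Python sketch that assumes the database commitment is a plain binary Merkle tree over the serialized objects; the exact commitment structure is an open design choice here, so this is only one possible shape:

Code:
# A minimal sketch of a Merkle existence proof a full node could hand to a
# lightweight client once a database root hash has been confirmed.

import hashlib
from typing import List, Tuple


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves: List[bytes]) -> bytes:
    """Root of a binary Merkle tree (an odd trailing node is carried up unchanged)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            nxt.append(h(level[i] + level[i + 1]) if i + 1 < len(level) else level[i])
        level = nxt
    return level[0]


def merkle_proof(leaves: List[bytes], index: int) -> List[Tuple[bytes, bool]]:
    """Sibling hashes from leaf to root; the bool says whether the sibling is on the right."""
    proof, level, idx = [], [h(leaf) for leaf in leaves], index
    while len(level) > 1:
        sib = idx ^ 1
        if sib < len(level):
            proof.append((level[sib], sib > idx))
        nxt = []
        for i in range(0, len(level), 2):
            nxt.append(h(level[i] + level[i + 1]) if i + 1 < len(level) else level[i])
        level, idx = nxt, idx // 2
    return proof


def verify_proof(leaf: bytes, proof: List[Tuple[bytes, bool]], root: bytes) -> bool:
    acc = h(leaf)
    for sibling, sibling_is_right in proof:
        acc = h(acc + sibling) if sibling_is_right else h(sibling + acc)
    return acc == root


if __name__ == "__main__":
    objects = [f"object-{i}".encode() for i in range(1000)]  # toy "database"
    root = merkle_root(objects)              # what the witnesses would commit
    proof = merkle_proof(objects, 421)       # supplied by any full node
    print(verify_proof(objects[421], proof, root))  # lightweight client check: True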

The process of building this root hash may be a bit heavy, but it can be done asynchronously. First, the witness node that is signing blocks wouldn't actually do it. They would have multiple parallel nodes that are also keeping in sync with the blockchain and database updates, which occasionally freeze on a specific block number, do the root hash calculation in parallel, send the result to the witness node (who trusts it because the computers are communicating securely and are run by the same owner), and then quickly resume syncing the database state back to the present, only to stop again some time in the future. The blockchain syncing (without verification, since the shared blockchain has already been validated) should always be faster on modern CPUs than what the protocol is designed to support, otherwise it would be impossible for anyone to catch up with the present. So even if the root hash computation takes quite a few blocks (I don't think it will take that much), the node can always catch up. However, this means that each node can only do the root hash computation for 1 out of every L blocks. But by running multiple such nodes in parallel, the overall collective system can produce root hashes for every single block, just with an L block delay. So the protocol sets the expected delay to K > L so that all witness nodes can handle this (just like how the protocol sets the block interval to a large enough time so that all witnesses can coordinate without missing too many blocks).
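
As a toy illustration of that staggered scheduling, here is a Python sketch with made-up values for L and K, showing how L helper nodes, each freezing on every L-th block, collectively cover every block within the protocol's K block budget:

Code:
# Illustrative only: L and K are hypothetical example values.

L = 4          # helper nodes run by one witness operator
K = 6          # protocol delay before the root hash appears in a header
BLOCKS = 12    # just enough blocks to show the pattern


def assigned_helper(block_num: int) -> int:
    """Helper node responsible for the database root hash as of block_num."""
    return block_num % L


if __name__ == "__main__":
    assert K > L, "protocol must leave enough slack for the slowest helper"
    for b in range(1, BLOCKS + 1):
        helper = assigned_helper(b)
        # The helper freezes its in-memory database at block b, hashes it,
        # and hands the result to the signing node before block b + K.
        print(f"block {b:2d}: root hash computed by helper {helper}, "
              f"must be ready for the header of block {b + K}")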

Furthermore, the protocol could be designed so that it isn't necessary for all regular (non-witness) nodes to have all this extra computation capacity to keep up with the chain. We could make it so that the blocks are technically valid (no chain reorganization) even with an invalid root hash. However, since anyone could independently prove whether the root hash is valid or not, it also makes it possible for anyone to prove that all the witnesses that signed blocks adding to that inappropriate block (and the signer of the inappropriate block itself) didn't do their job properly and therefore should be fired.

I can imagine a fraud claim transaction that allows a user to place a large amount of money (D) in a security deposit to go with their claim that a particular block has an invalid root hash. Only up to one such transaction could be included in a single block, and only if there were no other fraud claim transactions included in the K blocks prior. The Mth block (M > K) after the block with that transaction is then either: 1) produced by the witness normally expected as if that transaction didn't happen (in which case the fraud claim included M blocks prior is illegitimate and the security deposit is automatically sent to the reserve pool); or 2) produced by the next witness as if all the witnesses that signed the blocks from the block with the bad root hash (inclusive) to the Mth block (exclusive) were banned and a new early maintenance period reorganization was called. In the second case, the blockchain continues with those bad witnesses actually banned (that could in theory be all of the original 101 witnesses, who would be replaced by the next 101 standby witnesses that then become active), meaning they also lose their security deposits, and the user who submitted the fraud claim transaction gets their security deposit back along with some extra fixed reward (smaller than the security deposit of a single witness) taken from the security deposits of the banned witnesses. Full nodes will know which of the two paths to take as the legitimate path forward for the blockchain because they will have had enough time to do the root hash calculation by then and determine whether the fraud claim was legitimate or not. (Lightweight nodes cannot know this, and if enough of the witnesses become banned as a result they will require a trusted checkpoint to get back in sync with the right blockchain. However, this fraud claim should ideally never be used, since its purpose is to motivate the witnesses to behave well and only include valid root hashes.)
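
Here is a bare-bones Python sketch of the two resolution paths, from the point of view of a full node that has had time to recompute the root hash itself (the block representation is purely illustrative):

Code:
# Illustrative only: blocks are modeled as plain dicts.

from typing import Dict


def resolve_fraud_claim(claimed_block: Dict, recomputed_root: str) -> str:
    """Decide which of the two paths above is the legitimate continuation."""
    if claimed_block["root_hash"] == recomputed_root:
        # Path 1: the claim is illegitimate. The chain continues normally and
        # the claimant's deposit D goes to the reserve pool.
        return "claim invalid: deposit forfeited to reserve pool"
    else:
        # Path 2: the claim is legitimate. Every witness that signed the bad
        # block (or built on it up to block M) is banned, loses its security
        # deposit, and the claimant gets deposit D back plus a fixed reward.
        return "claim valid: signing witnesses banned, claimant rewarded"


if __name__ == "__main__":
    block = {"number": 123456, "root_hash": "abc123"}
    print(resolve_fraud_claim(block, "abc123"))    # honest witnesses
    print(resolve_fraud_claim(block, "deadbeef"))  # bad root hash detected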

Because a user making a false claim of a bad root hash will lose their security deposit (which should be a large amount), we don't have to worry about a denial of service attack of false claims trying to increase the validation burden on unequipped full nodes. The full nodes in practice only need enough extra computation capacity to do one root hash calculation at a time within the specified M block time frame in order to keep up with the network according to the protocol. Furthermore, this excess capacity can be put to use normally by randomly picking blocks to spot check. While each full node (other than the witness nodes) will likely not bother having enough computing capacity to check the root hash of every block, if we add random variation to the choice of which block's root hash to spot check, all of the blocks' root hashes should very likely be verified by the full nodes collectively. And since anyone who has proved a root hash is invalid can make a good no-risk profit (assuming they have enough initial capital for the security deposit), and it only takes one full node with enough capital to submit the fraud claim transaction to warn everyone else, it is highly likely that any invalid root hashes will be quickly found.
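
A quick back-of-the-envelope calculation (with made-up numbers for the count of spot-checking nodes and their per-block sampling probability) shows why collective random spot checks are very likely to cover every block:

Code:
# F and p are hypothetical numbers, only meant to show the shape of the argument.

F = 200     # non-witness full nodes doing random spot checks
p = 0.05    # chance a given node spot-checks a given block's root hash

prob_unchecked = (1 - p) ** F
print(f"P(block checked by nobody)          = {prob_unchecked:.2e}")   # ~3.5e-05
print(f"P(block checked by at least one node) = {1 - prob_unchecked:.6f}")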

A couple of other things to note about the fraud claim. First, often other honest witnesses would submit the claim, since they have to compute the root hash of every block anyway and so would be the first to find out that a block has an invalid root hash. This would be a nice profit for them, and they would be getting rid of dishonest witnesses, making the network more secure. However, they aren't going to build on a blockchain with an invalid root hash, because then they would get banned too. So the system should be designed so that, in addition to pointing to a block in the blockchain with an invalid root hash, the fraud claim transaction can include the relevant information that proves an active witness signed off on a block with an invalid root hash. This could be as simple as including the signed block header of the blockchain fork that the bad witness created, but we should restrict this to a one-block fork so that the verifiers of the fraud claim don't have to do much work, and so that we guarantee the database state they need to compute the root hash for is always as of a block in the legitimate blockchain (since K > 1), which is a database state that the verifiers had to compute at some point anyway. Second, there needs to be a somewhat short expiration period after which a block can no longer be used in a fraud claim. So if the block with the supposedly invalid root hash (or the block that the block with the invalid root hash forks off from) is older than P blocks, the fraud claim transaction is invalid. This is to ensure that the full nodes can run a root hash calculation at any time (to verify a fraud claim) without needing a copy of the database state after every block.
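
A rough Python sketch of the cheap admissibility checks described above (the expiration window P, one claim per K blocks, the deposit, and the optional one-block fork evidence); all field names and parameter values are hypothetical:

Code:
# Illustrative only: the claim structure and parameter values are made up.

from typing import Dict


def fraud_claim_admissible(
    claim: Dict,            # {"target_block": int, "deposit": int, "fork_header": dict or None}
    head_block: int,        # current head block number
    last_claim_block: int,  # block number of the most recently included claim
    P: int,                 # expiration window in blocks
    K: int,                 # minimum spacing between included claims
    min_deposit: int,       # required security deposit D
) -> bool:
    # The claimed bad block (or the block its one-block fork builds on) must
    # not be older than P blocks, so verifiers still have that state within reach.
    if head_block - claim["target_block"] > P:
        return False
    # At most one fraud claim per K blocks may be included.
    if head_block - last_claim_block < K:
        return False
    # The claim must carry the full security deposit.
    if claim["deposit"] < min_deposit:
        return False
    # If the claim accuses a witness of signing a one-block fork with a bad
    # root hash, it must include that signed fork header as evidence.
    if claim.get("fork_header") is not None and not claim["fork_header"].get("signature"):
        return False
    return True


if __name__ == "__main__":
    claim = {"target_block": 5000, "deposit": 10_000, "fork_header": None}
    print(fraud_claim_admissible(claim, head_block=5100, last_claim_block=4000,
                                 P=1011, K=5, min_deposit=10_000))  # True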

The way I see this working is that there would be three processes, each with their own copy of the database in memory, running on full nodes. They would all be working on the same blockchain (which is validated only once), but with a P+K+1 block delay between two of them and the third somewhere in between. The first would be the leader, operating like a normal node. The last would always be artificially slowed down to maintain a position P+K+1 blocks behind the first. The middle one would lag behind the first with occasional pauses on random blocks to do a spot check of the root hash included in that block. Despite the regular pauses it would always be ahead of the last but behind the first. If verification of a block becomes necessary, the leader would immediately tell the last one which block it needs to calculate a root hash for (which should always be at least one block ahead of its current position). The middle one would disable root hash spot checking temporarily to free up computing resources for the last one to do it on the database state that actually mattered. The last one would sync up to the block in question, stop there, do the root hash calculation, send it back to the leader, and then resume syncing from there (the middle one would resume spot checks as well). This only adds two extra processes to a normal node, each with their own in-memory database. Since they can all use the same shared trusted blockchain, there is no extra signature/transaction validation burden added (just extra database updating burden). These two extra processes can of course each run on separate machines that are separate from the leading process (the server holding the trusted blockchain just needs to supply the blocks over the local network to each machine on request).
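
Here is a small Python sketch of how the three processes could be positioned relative to the head block, using the example values of P and K derived further down (the "middle" position is placed arbitrarily):

Code:
# Illustrative only: P and K reuse the example numbers from later in this post.

P = 1011    # fraud-claim expiration window (blocks)
K = 5       # delay before a root hash appears in a header


def process_positions(head_block: int) -> dict:
    trailer = head_block - (P + K + 1)
    # The middle process wanders between trailer and leader while spot
    # checking; here we simply place it halfway for illustration.
    middle = (head_block + trailer) // 2
    return {"leader": head_block, "spot_checker": middle, "trailer": trailer}


if __name__ == "__main__":
    for name, block in process_positions(head_block=100_000).items():
        print(f"{name:12s} is synced to block {block}")
    # If a fraud claim arrives for block B (trailer <= B <= leader), the
    # trailer syncs forward to B, computes the root hash there, reports it
    # to the leader, and then resumes its usual P+K+1 block lag.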

Finally, if the majority of the witnesses are in fact colluding, they will of course not want to include a fraud claim transaction that they know will get them banned. But because this valid transaction will be circulating on the network, and because if the witnesses are doing nothing wrong they would just include the transaction since it is easy money for the network, the full nodes can take the censorship of that transaction as evidence that the majority of witnesses are corrupt. Then the community can proceed with whatever backup plan they have to take back control of the network when the majority of active witnesses are corrupt (a hard fork, or perhaps some other special built-in mechanism we devise for these situations). To try to mask the censorship, the colluding witnesses might include a different (false) fraud claim transaction instead of the valid one that could get them removed. This way the protocol prevents them from including the valid fraud claim transaction, since they can only include one per every K blocks, and they have an excuse other than censorship for why that transaction wasn't included. If that valid fraud claim transaction circulating in the network is still valid K blocks later, they can continue including additional (false) fraud claim transactions every K blocks until that one (and all others like it) becomes invalid because the block with the bad root hash is more than P blocks in the past. In this way the colluding majority of witnesses could get away with including invalid root hashes without getting banned.

Fortunately, there are two problems with that attack plan. First, it would be unusual for there to be that many consecutive false fraud claim transactions, so full node operators would likely want to check whether some of the other fraud claim transactions floating around in the network that are not being included are actually valid. The witnesses are limited in the number of unique fraud claim transactions they can produce (uniqueness depending on the balance that pays the security deposit) because they have a limited amount of money. So after filtering for uniqueness, the full nodes could randomly choose some of the fraud claim transactions floating around in the network and have a good probability of getting one of the true fraud claim transactions rather than more of the colluding witnesses' false ones. Once enough of the full nodes have been able to verify that the witnesses did in fact provide a fake root hash, they can try to vote them out and spread word to others. If the witnesses then resort to censoring vote change transactions, more users become aware of the nature of the problem and the community starts considering their other options, like a hard fork.

But the other problem with the attack plan makes this even less likely to happen. In order to maintain power while providing fake root hashes against all the other honest (and greedy) nodes trying to get their valid fraud claim transaction in, the colluding witnesses need to keep submitting false fraud claim transactions along with the full deposit each time. And because they are false, they will lose that security deposit every time. So this attack is very expensive. Even if they just wanted to include fake root hashes for a very short period of time (P blocks or less) to trick some lightweight client users once (probably in collusion with their wallet host providers) and make some money from double-spend attacks, they would still need to lose at least D*P/K in security deposits to carry out this attack. But these colluding majority witnesses could have carried out such an attack against lightweight client users using double signing if they were willing to lose their witness security deposits (see my witness surety bond proposal) and their future income earning potential (let's estimate the opportunity cost of all the active witnesses losing their jobs as O), due to the fact that their victims would eventually submit the double sign proof that gets them banned. So if the security deposit in aggregate for all the active witnesses is C (and let's assume roughly half of the active witnesses are needed to carry out the attack), then the double sign attack is preferable to the fake root hash attack if (C+O)/2 < D*P/K. In other words, as long as we make sure P > K*(C+O)/(2*D), the attack vector described in the previous paragraph does not make any economic sense since a better option exists (again assuming the witnesses also collude with the victims' wallet host providers). I will assume O is less than C, which means the previous inequality is satisfied if P > K*C/D. Considering that the fixed reward R a fraud claim transaction gets if valid is supposed to be less than the security deposit of a single witness, that means R < C/N, where N is the number of active witnesses. Even though a valid fraud claim transaction is virtually risk-free, the reward needs to be comparable to the deposit to be worth it (particularly to be worth having enough liquid funds sitting around to use for this purpose at a moment's notice).

Let's assume a reward equal to the deposit amount is sufficient to motivate enough full nodes to submit fraud claim transactions, and that the reward amount is half the security deposit of a single witness (R = C/(2*N)). Then to be able to provide such a reward amount and make this attack economically irrational, we require that P > K*C/(C/(2*N)) = 2*K*N. So with N = 101 and assuming we can get K down to 5, a value of P = 1011 will be sufficient. That would mean full nodes will need a client that is lagging behind the present by P+K+1 = 1017 blocks (or roughly 17 minutes assuming 1 second block intervals).
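
For the record, here is that arithmetic written out in Python with the same example numbers (C is normalized to 1, since only the ratios matter):

Code:
N = 101                 # active witnesses
K = 5                   # root hash delay in blocks
C = 1.0                 # aggregate witness security deposit (normalized)
R = C / (2 * N)         # fraud-claim reward: half of one witness's deposit
D = R                   # assume the claim deposit equals the reward

P_min = K * C / D       # condition P > K*C/D, i.e. P > 2*K*N
print(f"P must exceed {P_min:.0f} blocks")      # 1010
P = 1011                                        # smallest sufficient value
lag_blocks = P + K + 1
print(f"full nodes trail the head by {lag_blocks} blocks "
      f"(~{lag_blocks / 60:.0f} minutes at 1 second intervals)")  # ~17 minutes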

So with the blockchain protocol enhancements described above, I believe it is possible to provide pretty good security to lightweight clients. It won't be trust-free like with full nodes. If their wallet host colludes with the majority of the witnesses, they can still be tricked into believing something about the state of the database that is not true (which puts them at risk of a double spend attack). But they also know that their lightweight client will have enough information stored locally to generate a proof that will severely punish the colluding witnesses economically, assuming the victim discovers they were scammed and publishes the proof in less than a week after being scammed, AND they weren't tricked into believing a set of witnesses were active at the time they were scammed that in reality weren't actually active (above I discussed the method to reduce the likelihood of that happening), AND the community can come to a consensus within another week about the fact that the proof that would get the witnesses banned is being censored by the majority of the witnesses, with the prescription that they treat the network as invalid to do any further business on until they can hard fork from a snapshot as of the head block at the time they reached said consensus (this third, admittedly strict, requirement won't be necessary if the colluding witnesses are a minority of the active witnesses, or even in the majority case if we were to include some other blockchain protocol change that allows standby witnesses and any remaining unbanned witnesses to force an early maintenance period update if they include a valid proof that bans a majority of the witnesses). This makes it unlikely that the witnesses would collude to trick the user like this in the first place.

The lightweight client users won't get tricked about the state of the database because their wallet host can provide a small proof about the validity of some subset of the state of the database as of a minute or more in the past (the wallet host can do this without requiring personal cooperation from the witnesses), which the lightweight client can verify for the user. That means if the user is willing to wait around a minute (assuming 101 witnesses and 1 second block intervals) for confirmation before continuing with an irreversible transaction, they should be unlikely to fall victim to a double spend attack using the scheme I described above. I have other ways of reducing this confirmation time even more (to approximately 10 seconds), but it requires some additional complexity in the blockchain protocol AND either including all N active witnesses' signatures in each block header rather than just 1 (thus bloating the blockchain more) or using threshold signatures instead to avoid the blockchain bloat (but that requires all but 1 of the active witnesses to participate in the threshold signature generation process, meaning greater than 99% witness participation with N = 101, or else the confirmation time gracefully degrades from 10 seconds to 1 minute).

Offline VoR0220

So I was contemplating the mumble session we had earlier today. I had asked a question regarding SPV (simplified payment verification...akin to that of a light wallet in Bitcoin) and whether it was at all possible. The answer I got was that in the traditional sense (that being a trustless sense, as SPV confirms based on a handful of block headers), no, it is not. You will have to read from a central server website, the idea being that replication will ideally create the web of trust we are seeking here. However, there is something to be said for 'trustless' solutions. So a thought occurred to me. The witness nodes, when thought of in the abstract, seem to form something of a ring...a ring that could be utilized as a distributed hash table, methinks. So here's my proposal for a new operation in the blockchain to enable TRUE SPV.

Why not have a DHT integrated into the witness ring so that a light client could merely tap the DHT network and see the last N block headers (where N is also the number of witnesses)? This could be supplemented for speed purposes by a full node broadcasting the block (we have to incentivize people to still run full nodes for the sake of security...why not create a role like witness-in-training or make it a worker role?). If we could find a way to prove that the "witness in training" delivered a packet of blockchain data successfully to the light wallet, you may also have another way to generate economic gains for those simply running a full node. Granted that is going to be hard, but it is definitely something to think about (perhaps some form of a ranking system?).
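
Here is a very rough Python sketch of the light-client side of that idea; the DHT interface (a simple get/put keyed by block number) is entirely made up, just to show how little data the light wallet would need to pull:

Code:
# Illustrative only: InMemoryDHT stands in for the witness-ring DHT.

from typing import Dict, List, Optional


class InMemoryDHT:
    """Stand-in for the witness-ring DHT: block number -> serialized header."""

    def __init__(self) -> None:
        self._store: Dict[int, bytes] = {}

    def put(self, block_num: int, header: bytes) -> None:
        self._store[block_num] = header

    def get(self, block_num: int) -> Optional[bytes]:
        return self._store.get(block_num)


def fetch_last_headers(dht: InMemoryDHT, head: int, n: int) -> List[bytes]:
    """Light client: pull the last n headers (n = number of witnesses)."""
    headers = []
    for num in range(head - n + 1, head + 1):
        header = dht.get(num)
        if header is None:
            raise RuntimeError(f"header {num} missing; try another DHT peer")
        headers.append(header)
    return headers


if __name__ == "__main__":
    dht = InMemoryDHT()
    for i in range(1, 202):
        dht.put(i, f"header-{i}".encode())      # witnesses publish headers
    last_101 = fetch_last_headers(dht, head=201, n=101)
    print(len(last_101), "headers fetched")     # 101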

One of the biggest problems with a DHT is the problem of Sybil attacks...something that the blockchain has effectively solved. One way we defend against Sybil attacks is by using the exact IDs that we have recorded in the blockchain as certified IDs: if you aren't registered on the blockchain, you won't be delivered any data. Another is to find some way of implementing a proof of work, such that if a node reports that you delivered data that was incorrect (we see a node trying to fork the state), then we look for who delivered that data (not sure how exactly we do that outside of a ranking system) and can either a) impose a fee, b) kick them out of the DHT, or both. Kicking them out seems a bit harsh however, as an attacker could try to create false data and say that a full node did it, unjustly penalizing them....perhaps something involving the signing key? Another way is for a delegate/worker to administer a reverse Turing test....think something like a CAPTCHA...only useful to the blockchain. Yet another way would be to simply have them pay to use the light wallet service (something like a small micropayment to access the DHT and make GET requests).
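
A toy Python sketch of the first defence (registered-IDs-only, with signed responses so bad data can be pinned on a specific node); the registry and the HMAC-based signing are stand-ins for real on-chain accounts and keys:

Code:
# Illustrative only: HMAC stands in for real account signatures, and the
# registry dict stands in for on-chain registered IDs.

import hashlib
import hmac
from typing import Dict, Optional, Tuple

# Pretend on-chain registry: account name -> signing key.
REGISTERED_ACCOUNTS: Dict[str, bytes] = {
    "witness-node-7": b"key-7",
    "fullnode-alice": b"key-alice",
}


def sign(key: bytes, payload: bytes) -> str:
    return hmac.new(key, payload, hashlib.sha256).hexdigest()


def serve_request(requester: str, data: bytes, server: str) -> Optional[Tuple[bytes, str]]:
    """A DHT node only answers registered peers, and signs what it serves."""
    if requester not in REGISTERED_ACCOUNTS:
        return None                       # not registered on-chain: no data
    server_key = REGISTERED_ACCOUNTS[server]
    return data, sign(server_key, data)   # attributable response


def blame_if_bad(server: str, data: bytes, signature: str, expected: bytes) -> Optional[str]:
    """If delivered data is wrong but the signature checks out, we know who to fine or eject."""
    if not hmac.compare_digest(sign(REGISTERED_ACCOUNTS[server], data), signature):
        return None                       # signature forged: cannot blame this server
    return server if data != expected else None


if __name__ == "__main__":
    # An unregistered (Sybil) peer gets nothing back.
    print(serve_request("random-sybil", b"header-42", server="witness-node-7"))  # None
    # A registered peer gets data plus an attributable signature.
    data, sig = serve_request("fullnode-alice", b"header-42", server="witness-node-7")
    # If the served data turns out to be wrong, the signature identifies the culprit.
    print(blame_if_bad("witness-node-7", data, sig, expected=b"the-real-header-42"))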

To me, this seems like it has even greater applications when we think of resurrecting DNS, content distribution, and reducing blockchain bloat. What do you guys think? Am I crazy or can this thing work?

https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads