Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - arhag

76
A new node coming online can get "stuck" on a fork created by 100% collusion of former witnesses, and requires manual intervention (specifying a checkpoint) to get off that fork.

In the event that 100% of former witnesses collude, the blockchain will require one checkpoint every 1000 blocks until the attackers are voted out, to prevent new nodes from accidentally getting stuck on a fork created by the attackers.

Thank you for your responses.

But I am still interested in better understanding the client logic and how it determines it is stuck. When does the client determine it is stuck? If it receives a full round of 101 (fake) blocks after the (yet to be known) fork point prior to receiving 90 real blocks (signed by a subset of the same witnesses who signed the 101 fake blocks) after the fork point, does the client then recognize the fork, go back, and get "stuck" at the fork point (and thus warn the user it is stuck)? I presume so, because the fork point is less than 1000 blocks in the past.

Now if the attacker is able to control the victim's network connection for long enough (more than 1000 blocks) to prevent the real chain from appearing, the victim will continue on thinking everything is fine on the fake chain. When the attacker gives up control of the victim's network connection, the victim is then able to see all the real blocks flooding into the system. But by this point the fork point is more than 1000 blocks older than the present, so the client does not do a chain reorganization nor get stuck, correct? In fact, the client happily continues on the fake chain thinking everything is fine until one day the victim realizes he isn't receiving transactions from a user on the real chain, or that his transactions to a user on the real chain are not being received, at which point he discovers that there is a problem and ends up requiring a trusted checkpoint (and probably realizes at that point that he lost money to a double-spend attack).
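If I'm reading the undo-history rule correctly, the condition that decides whether a reorganization is even possible could be sketched like this (a toy model; the function and constant names are mine, not taken from the actual client):

```python
UNDO_HISTORY = 1000  # blocks of undo history (protocol parameter, per the discussion)

def can_reorganize(head_height, fork_height):
    """A node can only switch to a competing fork if the common ancestor
    is within the undo history of its current head."""
    return head_height - fork_height <= UNDO_HISTORY

# While the victim is isolated, the fake chain grows; once the fork point
# falls more than 1000 blocks behind the fake head, the real chain can no
# longer trigger a reorganization and the client silently stays on the fake chain.
assert can_reorganize(head_height=2500, fork_height=2000)
assert can_reorganize(head_height=3000, fork_height=2000)      # exactly at the limit
assert not can_reorganize(head_height=3200, fork_height=2000)  # stuck on the fake chain
```

The point of the sketch is that nothing in this rule alerts the user; the switch from "reorganizable" to "silently stuck" happens purely as a function of how long the attacker can keep the victim isolated.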


In theory we could add a second metric for measuring the best blockchain via TAPOS which would be "total coindays approved".

It sounds like another interesting statistic that can be learned from the blockchain, but I'm not very interested in making it part of the protocol because it makes the protocol harder to reason about and I worry it could add new attack vectors. In the case of a stuck blockchain, we could have each user's full client (this problem doesn't even apply to lightweight clients, since those depend on trust in the wallet provider anyway) offer to reindex the chain from the last trusted checkpoint to calculate the total coindays approved since then, along with other valuable statistics, rather than burdening all full nodes with that task (as small as it may be) by making it part of the protocol. The user could then use this information to pick a particular chain themselves if they wanted, although it would still be better for them to just rely on their social network for a trusted checkpoint in these rare cases.

77
If they are preventing people from voting then the minority needs to stop producing and users need to stop transacting on the majority chain.  Everyone would pick a minority fork and vote out the bad witnesses.  At this point the minority fork would have 100% block production and the old majority fork will still be stumbling along at 50 to 90% corrupt-witness participation.  Eventually the minority fork will overtake it and become the longest chain.

Everyone "picking" the minority fork is not an automatic process done by code. It needs to be selected by each user, presumably through a set of blacklisted witnesses that everyone else has agreed to use (to make sure everyone gets on the same fork). If that is the case, why does it even matter whether the "minority fork" ever becomes longer than the original "majority chain"? It just needs to vote out the bad witnesses so that it can get back to normal high witness participation.


Technically it is possible for the majority of stakeholders to collude to fracture the network and that is what most of your "attacks" are assuming: that the same users will vote differently on different forks and that they will not work to reach consensus.    I think this is not a realistic case.

I'm not trying to argue what the probability of any particular event occurring is. I am not trying to say there aren't ways of resolving issues that arise in the rare events that they do. All I am trying to understand at this stage is all the possible ways attacks can force the system into a state that requires manual action (such as relying on social networks to gain consensus) because the protocol cannot automatically resolve it.

For example, you still haven't answered what happens with Alice in the situation described in my earlier post. My assumption from what you have said so far is that her client will pick the fake chain because it is longer, without giving her any warning that something weird is going on with the blockchain. Is that correct, or would the client recognize the two possible blockchains forking at block B2000? It is confusing because if the fake chain was released into the wild after Alice had already synced, I presume she would stay on the real chain because of the 1000 block limit of the undo history. So it seems that which chain her client ends up on depends on the timing of when her client syncs relative to when blocks are received from the network (is that correct?). Again, I'm not saying this is necessarily an attack; I just want to understand what happens, what the client is aware of, and what the client warns the user about.

Let me rephrase the problem. Let's say I was an active witness 3 months ago along with 100 of my friends. By now, we have all been voted out and replaced by 101 new witnesses that are doing a good job but on average have a 99% witness participation rate. My friends and I decide to all get together to produce a fake blockchain forking off from the point where we were all active witnesses 3 months ago to the present, and we distribute this fake blockchain on the peer-to-peer network. What does this do to the full nodes currently online and active on the network (I would guess nothing)? Now what if someone with a full node that hasn't synced with the network for 4 months decides to finally resync. What happens to that full node?

78
There are certainly a lot of edge cases that we need to carefully document and I think we can vaguely categorize them by the percentage of colluding witnesses and the time horizon for the attack:

100% collusion of witnesses means they can block transactions that would vote them out.  The network would be held hostage to their demands until a hard-fork can be distributed. 

Given that reality, the case of witnesses turning corrupt *AFTER* they have been voted out is rather unlikely and far less effective; therefore, it is safe to assume it wouldn't happen.

Can you please answer what the clients would do according to the protocol in the scenario I described in my previous post? I first want to just understand the protocol well enough and understand what attacks are possible before considering the attack economics and the probability of attacks occurring or the feasibility of successfully carrying out a particular attack.

Also, I don't understand why you think it is unlikely for witnesses that have already been voted out to collude to create a fake blockchain. At that point they have nothing to lose, especially if their real identities were never revealed.


The network is "safe" against both of those attacks so long as 5% of the witnesses do not collude.  That 5% would be enough to allow the minority chain that had the support of the stakeholders to elect new witnesses and eventually overtake the attackers' chain, which would be stuck at 95% participation, assuming that in a healthy network participation averages over 95%.

I think this is a different scenario from the one I was discussing, but an interesting one nonetheless. I don't understand how the protocol allows a minority chain to recover. I understand that a minority chain could include transactions that change enough votes to vote out the other 95% of bad witnesses by the next maintenance interval, but how exactly do clients decide to switch to that chain given that it is a minority chain? What if there were multiple minority chains that elected a different set of witnesses? How do clients choose which one to adopt?


During the attack it is not safe to transact without PICKING a fork that you know will eventually win.  It would essentially be a "bank holiday" while votes are cast and a new consensus is reached.  Fortunately, everyone would be well aware of what was going on because the "minority" witnesses would indicate their intent to "fork" after the colluding witnesses start rejecting their otherwise valid blocks only because they vote against the colluding witnesses.   It would be an "open and shut" case and the network would likely recover in hours.

This is all very interesting to further explore in detail and try to work out the edge cases. But I would also like clarification on how much of what you are describing is part of the current protocol that either is already implemented or will be by the time BitShares 2.0 is released, and how much of it is just future ideas that could (and probably should) eventually be included in the protocol and implemented in the clients' code. In other words, if right now 95% of delegates started rejecting transactions that voted them out or voted others in, would BitShares automatically recover (and by automatically I am still assuming enough stakeholders actually sign and broadcast transactions that change their votes to new non-censoring delegates) without any work necessary from the developers and without trusted checkpoints?

79
1.  The blockchain always picks the longest chain signed by valid witnesses that doesn't have timestamps in the future.

When does the client choose a blockchain? Is it always evaluating all possible valid chains? Or rather is it always evaluating all possible valid chains other than ones that would cause it to reorganize from its current chain in a way such that the fork point of the change is older than 1000 blocks? I guess I need to explain that better (see the scenario I describe below).

2.  The blockchain maintains an "UNDO" history for the last 1000 blocks which means that if there is a fork it must be resolved within 1000 blocks or manual intervention will be required via a checkpoint.   With 1 second blocks this is about 16 minutes *IF* one block was produced every second, but if that were the case there couldn't be a fork.   So if we assume "worst case" a 50/50 split in the network, then it would take 32 minutes of un-interrupted hard-forking before manual intervention would be required.

Let's say Alice has the following blockchain stored locally: B1, B2, ..., B1500.
The real consensus blockchain at time T is: B1, B2, ..., B3168.
There is a fake blockchain floating around in the network which first appeared at time T and looks like this: B1, B2, ..., B2000, B2001', B2002', ..., B3200'.
The common fork point between this fake blockchain and the real consensus chain is B2000. After this point the fake blockchain diverges from the real blockchain. The witnesses (which I will call "fake witnesses" from this point forward) at the time of B2000 continue to be active in the fake blockchain, whereas they were all voted out in the real chain and replaced by new witnesses (which I will call "real witnesses" from this point forward) at block B2100 (the start of a new maintenance interval). Let's assume all the significant votes that caused the "fake witnesses" to be replaced by the "real witnesses" appeared between blocks B2001 and B2099.
This fake blockchain is a valid chain signed by witnesses according to the rules of the protocol following that chain. At time T it is longer than the real blockchain by about 32 blocks because the "fake witnesses" were able to have 100% witness participation while the "real witnesses" were only able to maintain 99% witness participation over the last 1000 blocks. Therefore, by the longest chain rule, people should prefer the fake chain over the real chain, but because the fork point is more than 1000 blocks old at the time T when the fake blockchain appeared on the network, none of the nodes online at that time would be tricked onto that fake chain (am I correct in this understanding?).

Now right around this time (a little bit after time T), Alice connects with the network and starts syncing. She can see there are two blockchains that build off the chain she currently has stored locally, and both seem to follow the protocol rules. Does she pick the real chain (B1, B2, ..., B3168), the fake chain (B1, B2, ..., B2000, B2001', B2002', ..., B3200'), does she get stuck at block B2000, or something else entirely? Is the outcome dependent on which blocks she receives first? In other words, what if she gets the blocks from the fake blockchain well before the blocks from the real one? Can she sync to the present timestamp and have her local blockchain become the fake chain, and then later, when she gets the blocks from the real chain, reject them because the fork point is more than 1000 blocks old relative to her new current head block? And similarly, if she got the blocks from the real blockchain well before the fake one, would she instead sync to the real blockchain and reject the fake one?
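To make the timing dependence I'm asking about concrete, here is a toy model of longest-chain selection bounded by the undo history (entirely my own simplification, not the real client logic):

```python
UNDO_HISTORY = 1000

def try_switch(current_head, fork_point, candidate_head):
    """Adopt a candidate chain only if it is longer AND the fork point is
    within the undo history of the node's current head."""
    if candidate_head <= current_head:
        return current_head
    if current_head - fork_point > UNDO_HISTORY:
        return current_head  # reorganization would be too deep; reject
    return candidate_head

# Alice starts at B1500. If the fake chain (head 3200', fork at B2000)
# arrives first, she adopts it; the real chain can then never displace it,
# even once it grows longer, because the fork is too deep from her new head.
head = try_switch(1500, fork_point=2000, candidate_head=3200)
assert head == 3200
head = try_switch(head, fork_point=2000, candidate_head=3300)
assert head == 3200

# If the real chain (head 3168) had arrived first, the outcome flips:
head = try_switch(1500, fork_point=2000, candidate_head=3168)
assert head == 3168
head = try_switch(head, fork_point=2000, candidate_head=3200)
assert head == 3168
```

Under this model the outcome really is order-dependent, which is exactly the behavior I want confirmed or corrected.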

80
You don't think it is at all possible to bring the confirmation down to 2 seconds tops in the 2.0 setup?

No, I think the best case scenario will be around 3 or 4 seconds. It all depends on how quickly the database commitment process can be done. In other words, how much of it can be done in parallel, and even if it is mostly serial, are modern processors fast enough to consistently generate that root hash in less than a second (meaning even for very heavy database modifications)? If so, and assuming each block contains enough signatures from a (super)majority of the witnesses, either by including a signature from each witness in every block or by using a threshold signature scheme to reduce blockchain bloat (although at the scale of 100K TPS on average, an extra 100 signatures in every block is negligible), then I think it can be brought down to 3 block intervals (i.e. 3 seconds).

My concern with the web wallet approach is the single point of failure problem and the lack of incentive to run a full node.

Well, my concern with single points of failure is a single point of trust failure, which my solution helps solve with regard to wallet hosts. I am not worried about the centralized wallet host having enough redundancy to keep their service operating. And in the worst-case scenario, users can have different wallets from different providers accessing the same account.

Also, I think there is plenty of incentive for the wallet hosts to run their full nodes (they get payment from their customers directly and/or from customer referrals). And there is of course enough incentive for the active witnesses to run full nodes, since they get paid by the blockchain. But I do worry a little about all the other full nodes we would like to have participating in the network, not the least of which are all those standby witnesses we want to make sure are ready to go at a moment's notice.

81
I'm looking for good documentation about the consensus protocol of DPOS (either BitShares 0.x or 2.0 is fine, but I would prefer the 2.0 one) and how nodes decide upon the particular blockchain to use. Specifically, I want to know everything about blockchain reorganization. When does a client, given some existing validated blockchain stored locally, accept a reorganization and when does it not? I want to understand all the possible failure modes we are aware of and how they can be achieved. How does the timing of which blocks are received in which order and by when (relative to the syncing process) change the outcome of which blockchain a client ends up choosing? Under what conditions can a full node get stuck and be unable to sync forward? Are all such cases bugs, or are there cases in which this is the legitimate behavior? If it is the latter (which I believe to be the case), is the only way to fix the problem to add a trusted checkpoint to the client? If the problem can be fixed by resyncing the blockchain from scratch (which I have noticed has been recommended in the past as a solution to these stuck-blockchain problems), is that not an indicator of a bug in that particular case? In other words, if it wasn't a bug and the client got legitimately stuck, shouldn't it get stuck in the same place again (otherwise how is such non-deterministic behavior not a bug)?

82
tl;dr: We need a way for lightweight clients to keep track of changes to the set of active witnesses by relying on the chain of trust of a large enough quorum of previous active witnesses and syncing regularly. I describe a process that can do this, but it has failure modes (hopefully rare) of two types: one where the user knows it failed, in which case they can rely on their social network of trust to get a trusted checkpoint to get back in sync; and, even worse, a second mode where the user doesn't know it failed and is therefore at risk of a double spend attack, but only through a collusion of their wallet host and the majority of the old active witnesses that the lightweight client was last aware of prior to the syncing process failing. Then we need additional changes to the blockchain protocol to enable witnesses to make commitments of the entire database after each block to a single root hash which will be included in the block headers some fixed number of blocks later (this delay prevents the commitment process from increasing block intervals), as well as to allow any full node to spot check these root hashes and get financially rewarded for pointing out bad root hashes recently signed off by witnesses. With all of these blockchain protocol modifications in place, we can then allow any full node (such as the lightweight wallet host server) to independently provide small proofs about the state of the blockchain/database as of 1 minute or more in the past to the lightweight clients, which can verify them assuming their lightweight syncing process hasn't been caught in a failure mode.
This means that normally if the lightweight client waits approximately 1 minute (this is assuming 101 witnesses and 1 second block intervals, but could in theory be reduced further through some other changes to the blockchain protocol) then they are very likely to be protected against double spend attacks or other attacks that require the victim to not know the true state of the blockchain/database.

The most important thing to keep in mind is that without a full blockchain to validate, a user cannot know in a trustless way how the votes authentically change from maintenance interval to maintenance interval, and therefore cannot know who the new set of N witnesses needs to be over time. If you don't know who the N witnesses should be, you have no root of trust for whatever scheme you come up with.

So the first thing to do is to provide a low-trust compromise solution that is applicable to lightweight nodes. By low-trust I mean that you cannot know for sure who the set of N witnesses should be, but you can have pretty high confidence that your set hasn't diverged from the true set. If we can rely on a quorum of the original witnesses to sign off on the set of new witnesses (especially in a way such that they can be provably fired, assuming they are still witnesses, if they lie), then you can follow this chain of set updates to the present. The problem is that if you haven't synced for a long time, it is possible that enough of the old set of witnesses have lost their position and therefore have nothing to lose by colluding to trick you. This is true even in the case of a full node, but with a full node there are more metrics one can use to figure out one is likely on a fake chain (the same few witnesses consistently missing blocks for a long time without being voted out, and a lack of transaction activity from well-known big players on the blockchain). Nevertheless, it is good enough, especially if you sync the lightweight client somewhat frequently, because the likelihood of enough of the witnesses being removed since the last time you synced is low.

To make the above possible in a way that witnesses can be punished for lying, and with the client only needing to download block headers, it is necessary to somehow commit to the current set of witnesses in the block header. I have discussed an earlier proposal along these lines here. The main idea is that the protocol requires the block header to hold a hash of some data describing who the current active witnesses are, including their corresponding signing keys (it can also include any important blockchain parameter changes that are relevant to the lightweight node's syncing procedure). The block header only stores the single hash (in addition to the Merkle root of the block's transactions), but anyone who trusts that block header can verify the authenticity of the data provided to them. While a user's client is resyncing, suppose it receives a seemingly valid chain of block headers from its last trusted block header to a later one at the start of the oldest maintenance period whose set of witnesses is equivalent to the set in the present maintenance period (assuming the set of witnesses has in fact changed since the last trusted block). The client must then verify that this block can be (and in fact has been) signed off on by enough of the current witnesses (as claimed for the current maintenance period and authenticated by that block header) who were also witnesses in the maintenance period of the last trusted block. Perhaps a suitable choice of "enough" is that 80% of the active witnesses of the maintenance period of the last trusted block are still active witnesses of the current maintenance period and have all added signed blocks on top of the block in question.
This way, if you trusted the set of witnesses at the time of the last trusted block (and the vast majority of them are still active witnesses in reality in the present, although your client has no way of knowing that itself), you can be pretty sure that the block in question is a valid block, because otherwise there is a high likelihood that you could prove many current witnesses double-signed blocks using only the block headers you downloaded (the client can also keep just the subset of information from the block headers needed for this proof, and prune even that data when it gets old, and therefore most likely useless, in order to save space). The remaining block headers in the maintenance period (and in other maintenance periods after it with the same set of witnesses) can continue to be synced normally to keep up with the present, and the new most recent block header becomes the trusted block header for the next time syncing is necessary.
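A minimal sketch of the carry-over quorum check I have in mind (the 80% threshold, the function name, and the set representation are all my own assumptions, not anything implemented):

```python
QUORUM = 0.80  # required fraction of carry-over witnesses (my assumption)

def accept_transition(old_set, new_set, signers_since_transition):
    """Trust a claimed new witness set only if enough witnesses active in
    BOTH the old and the new set have signed blocks after the transition."""
    carryover = old_set & new_set
    if len(carryover) < QUORUM * len(old_set):
        return False  # too much turnover: fall back to a trusted checkpoint
    return carryover <= signers_since_transition

old = {f"w{i}" for i in range(100)}
mild = (old - {f"w{i}" for i in range(10)}) | {f"x{i}" for i in range(10)}
assert accept_transition(old, mild, mild)        # 90% carry-over, all have signed
assert not accept_transition(old, mild, set())   # carry-overs haven't signed yet
deep = (old - {f"w{i}" for i in range(30)}) | {f"x{i}" for i in range(30)}
assert not accept_transition(old, deep, deep)    # only 70% carry-over: reject
```

The last case is exactly the failure mode described below: the set may be perfectly legitimate, but the lightweight client cannot tell, so it must refuse and ask for a checkpoint.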

Of course, the above procedure leaves open the possibility that if there is high witness turnover (especially if the user hasn't synced in a while) the block header syncing will fail. Also, even if the witnesses haven't changed in reality for months, the lightweight client cannot know that, so if the last resync was months ago it is probably prudent to just assume block header syncing isn't trustworthy. In that case the user is just going to need to get a trusted checkpoint of a recent block from some source and add that to their lightweight client so they can resume syncing.

Now with all of that said, let's assume the problems of updating the set of witnesses to the present set, and thus automatically keeping up with the latest block headers with minimal trust, have been solved. The next step as far as lightweight client validation goes is to be able to prove facts about the state of the current database (not just the blockchain). This requires further changes in the protocol so that the database state can regularly be committed as a root hash in the block header (in such a way that efficient log(N) proofs become possible). All witnesses do this and check the root hash of the blocks other witnesses produce to determine if a block is valid. Keep in mind the root hash in a block will be of the database state as of K blocks ago, where K is some small positive integer, in order not to add any delays to block generation. By building on such a valid block, each witness signs off on the hash of the database state submitted by other witnesses earlier. After 51 blocks have been added on top of a given block (in the case of 101 witnesses), the database state committed in that block (which was the database state as of K blocks prior to that block) has been validated by a majority of witnesses and can be trusted as the valid root hash by the lightweight client. So if K < 8, and we have 1 second block intervals, that means a lightweight client can have a (most likely valid) root hash of the database state at a given time less than a minute later. Then any full node (such as the lightweight client host) can provide a log(N) proof (in size, and in computational complexity to verify, where N is the number of objects in the database) of the existence (and in some cases even non-existence) of a particular object in the database.
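The timing claim works out as follows (all parameter values here are illustrative assumptions taken from the numbers in the paragraph above):

```python
NUM_WITNESSES = 101
BLOCK_INTERVAL = 1  # seconds (assumed)
K = 7               # root hash in block N commits the state as of block N - K

def seconds_until_state_is_provable(k=K):
    """State at some block becomes provable once a root hash covering it
    appears (k blocks later) and a majority of witnesses have built on top
    of that block (51 more blocks, for 101 witnesses)."""
    majority = NUM_WITNESSES // 2 + 1  # 51
    return (k + majority) * BLOCK_INTERVAL

assert seconds_until_state_is_provable() == 58      # under a minute for K < 8
assert seconds_until_state_is_provable(k=0) == 51   # lower bound with no commit delay
```

So the "less than a minute" figure is dominated by the majority-confirmation round, not by the commitment delay K.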

The process of building this root hash may be a bit heavy, but it can be done asynchronously. First, the witness node that is signing blocks wouldn't actually do it. The witness would have multiple parallel nodes that also keep in sync with the blockchain and database updates but occasionally freeze on a specific block number, do the root hash calculation in parallel, send the result to the witness node (who trusts it because the computers are communicating securely and are run by the same owner), and then quickly resume syncing the database state back to the present, only to stop again some time in the future. The blockchain syncing (without verification, since the shared blockchain has already been validated) should always be faster on modern CPUs than what the protocol is designed to support, otherwise it would be impossible for anyone to catch up with the present. So even if the root hash computation takes quite a few blocks (I don't think it will take much), the node can always catch up. However, this means that each node can only do the root hash computation for 1 out of every L blocks. But by running multiple such nodes in parallel, the overall collective system can produce root hashes for every single block, just with an L block delay. So the protocol sets the expected delay to K > L so that all witness nodes can handle this (just like how the protocol sets the block interval to a large enough time that all witnesses can coordinate without missing too many blocks).
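The staggering scheme could be modeled like this (a toy sketch; the value of L and the round-robin assignment are my own assumptions):

```python
L = 4  # blocks a single snapshot node needs per root hash computation (assumed)

def assigned_worker(block_num, num_workers=L):
    """Worker i handles exactly the blocks where block_num % num_workers == i,
    so L staggered workers collectively hash every block's state, each one
    falling at most L blocks behind the chain head while it works."""
    return block_num % num_workers

assert {assigned_worker(b) for b in range(100)} == set(range(L))  # every block covered
assert assigned_worker(5) == assigned_worker(5 + L)  # each worker's schedule repeats
```

This is why K only needs to exceed L: by the time a block's root hash is due to appear in a header, the worker responsible for that block has had L blocks' worth of time to compute it.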

Furthermore, the protocol could be designed so that it isn't necessary for all regular (non-witness) nodes to have all this extra computation capacity to keep up with the chain. We could make it so that blocks are technically valid (no chain reorganization) even with an invalid root hash. However, since anyone could independently prove whether the root hash is valid or not, this also makes it possible for anyone to prove that all the witnesses that signed blocks building on that invalid block (and the signer of the invalid block itself) didn't do their job properly and therefore should be fired. I can imagine a fraud claim transaction that allows a user to place a large amount of money (D) in a security deposit to go with their claim that a particular block has an invalid root hash. Only up to one such transaction could be included in a single block, and only if there were no other fraud claim transactions included in the K blocks prior. The Mth block (M > K) after the block with that transaction is then either: 1) produced by the witness normally expected, as if that transaction didn't happen (in which case the fraud claim included M blocks prior is illegitimate and the security deposit is automatically sent to the reserve pool); or 2) produced by the next witness, as if all the witnesses that signed the blocks from the block with the bad root hash (inclusive) to the Mth block (exclusive) were banned and an early maintenance period reorganization was called.
In the 2nd case, the blockchain continues with those bad witnesses actually banned (that could in theory be all of the original 101 witnesses, who would be replaced by the next 101 standby witnesses that then become active), meaning they also lose their security deposits, and the user who submitted the fraud claim transaction gets their security deposit back along with some extra fixed reward (smaller than the security deposit of a single witness) taken from the security deposits of the banned witnesses. Full nodes will know which of the two paths to take as the legitimate path forward for the blockchain because they will have had enough time to do the root hash calculation by then and determine whether the fraud claim was legitimate or not. (Lightweight nodes cannot know this, and if enough of the witnesses become banned as a result they will require a trusted checkpoint to get back in sync with the right blockchain. However, this fraud claim should ideally never be used, since its purpose is to motivate the witnesses to behave well and only include valid root hashes.)
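The two resolution paths could be sketched as follows (the deposit and reward amounts are made up purely for illustration, and the function is my own naming, not part of any proposal):

```python
CLAIM_DEPOSIT = 100   # claimant's security deposit D (illustrative units)
CLAIM_REWARD = 40     # fixed reward, smaller than one witness's deposit (assumed)

def settle_claim(root_hash_is_valid, claimant_balance, signing_witnesses=()):
    """At the Mth block after the claim, full nodes have recomputed the root
    hash and the chain follows one of the two paths described above."""
    if root_hash_is_valid:
        # Path 1: illegitimate claim; the deposit goes to the reserve pool.
        return claimant_balance - CLAIM_DEPOSIT, []
    # Path 2: legitimate claim; witnesses who signed on top of the bad block
    # are banned, and the claimant recovers the deposit plus a reward.
    return claimant_balance + CLAIM_REWARD, list(signing_witnesses)

assert settle_claim(True, 1000) == (900, [])
assert settle_claim(False, 1000, ("w7", "w8")) == (1040, ["w7", "w8"])
```

The asymmetry is the point: a false claim is strictly costly, while a true claim is strictly profitable, which is what makes the spot-checking economics below work.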

Because a user making a false claim of a bad root hash will lose their security deposit (which should be a large amount), we don't have to worry about a denial-of-service attack of false claims trying to increase the validation burden on unequipped full nodes. The full nodes in practice only need enough extra computation capacity to do one root hash calculation at a time within the specified M-block time frame in order to keep up with the network according to the protocol. Furthermore, this excess capacity can be put to use normally by randomly picking blocks to spot check. While each full node (other than the witness nodes) will likely not bother having enough computing capacity to check the root hash of every block, if we add random variation to each node's choice of which block's root hash to spot check, all of the blocks' root hashes will very likely be verified by the full nodes collectively. And since anyone who has proved a root hash is invalid can make a good no-risk profit (assuming they have enough initial capital for the security deposit), and it only takes one full node with enough capital to submit the fraud claim transaction to warn everyone else, it is highly likely that any invalid root hashes will be quickly found.
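A back-of-envelope estimate (my own, not from the protocol) of how quickly independent random spot checks cover all blocks:

```python
def p_block_unchecked(num_nodes, window):
    """If each of num_nodes full nodes independently spot-checks one random
    block per window of `window` blocks, this is the chance that a given
    block's root hash is never checked by anyone."""
    return ((window - 1) / window) ** num_nodes

assert p_block_unchecked(num_nodes=500, window=100) < 0.01   # near-full coverage
assert p_block_unchecked(num_nodes=50, window=100) > 0.5     # too few checkers
```

So collective coverage improves exponentially in the number of spot-checking nodes, which is why no individual node needs to check every block.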

A couple of other things to note about the fraud claim. Often other honest witnesses would submit the claim, since they have to compute the root hash of every block anyway and would therefore be the first to find out that a block has an invalid root hash. This would be a nice profit for them, and they would be getting rid of dishonest witnesses, making the network more secure. However, they aren't going to build on a blockchain with an invalid root hash, because then they would get banned too. So the system should be designed so that, in addition to pointing to a block in the blockchain with an invalid root hash, the fraud claim transaction can include the relevant information that proves an active witness signed off on a block with an invalid root hash. This could be as simple as including the signed block header of the blockchain fork that the bad witness created, but we should restrict this to a one-block fork so that the verifiers of the fraud claim don't have to do much work and so that we guarantee the database state they need to compute the root hash for is always as of a block in the legitimate blockchain (since K > 1), which is a database state the verifiers had to compute at some point anyway. Second, there needs to be a somewhat short expiration period after which a block can no longer be used in a fraud claim. So if the block with the supposedly invalid root hash (or the block that the block with the invalid root hash forks off from) is older than P blocks, the fraud claim transaction is invalid. This ensures that the full nodes can run a root hash calculation at any time (to verify a fraud claim) without needing a copy of the database state after every block.

The way I see this working is that there would be three processes, each with its own copy of the database in memory, running on full nodes. All three would be working on the same blockchain (which is validated only once), but with a P+K+1 block delay between two of them and the third somewhere in between. The first would be the leader, operating like a normal node. The last would always be artificially slowed down to maintain a position P+K+1 blocks behind the first. The middle one would lag behind the first with occasional pauses on random blocks to spot check the root hash included in that block. Despite the regular pauses, it will always be ahead of the last but behind the first. If verification of a particular block becomes necessary, the leader would immediately tell the last one which block it needs to calculate a root hash for (which should always be at least one block ahead of its current position). The middle one would temporarily disable root hash spot checking to free up computing resources so the last one can do the calculation on the database state that actually matters. The last one would sync up to the block in question, stop there, do the root hash calculation, send it back to the leader, and then resume syncing from there (and the middle one would resume spot checks as well). This only adds two extra processes to a normal node, each with its own in-memory database. Since they can all use the same shared trusted blockchain, there is no extra signature/transaction validation burden added (just extra database updating burden). These two extra processes can of course each run on machines separate from the leading process (the server holding the trusted blockchain just needs to supply the blocks over the local network to each machine on request).
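As a rough illustration of that arrangement, here is a minimal Python sketch of the three lagged cursors over one shared, pre-validated chain. All names and the toy root hash function are hypothetical stand-ins, and the pause the middle cursor would take for each spot check is not modeled; this only shows how the tail stays pinned P+K+1 blocks back so it can recompute a root hash on demand.

```python
import hashlib
import random

# Hypothetical parameter values matching those derived later in the post.
P, K = 1011, 5
LAG = P + K + 1  # the tail process stays exactly this many blocks behind

def root_hash(block_num):
    # Stand-in for the real database root hash computation.
    return hashlib.sha256(str(block_num).encode()).hexdigest()

class LaggedVerifiers:
    """Three cursors over one shared, already-validated blockchain."""
    def __init__(self):
        self.leader = 0   # normal head-of-chain processing
        self.middle = 0   # trails the leader, doing random spot checks
        self.tail = 0     # artificially pinned LAG blocks behind the leader
        self.spot_checked = []

    def advance_leader(self):
        self.leader += 1
        self.tail = max(0, self.leader - LAG)
        # The middle cursor drifts between the other two.
        if self.middle < self.leader - 1:
            self.middle += 1
            if random.random() < 0.05:  # random spot check of this block
                self.spot_checked.append((self.middle, root_hash(self.middle)))

    def verify_on_demand(self, block_num):
        # Fraud-claim verification: the tail syncs up to block_num (always at
        # or ahead of its current position) and recomputes the root hash.
        assert self.tail <= block_num <= self.leader
        return root_hash(block_num)

node = LaggedVerifiers()
for _ in range(3000):
    node.advance_leader()
print(node.leader, node.tail, node.leader - node.tail)
```

After 3000 blocks the tail sits exactly 1017 blocks back, so any fraud claim pointing at a block up to P blocks old lands at or ahead of the tail's current position.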

Finally, if the majority of the witnesses are in fact colluding, they will of course not want to include a fraud claim transaction that they know will get them banned. But because this valid transaction will be circulating on the network, and because witnesses doing nothing wrong would just include it (it is easy money for the network), the full nodes can take censorship of that transaction as evidence that the majority of witnesses are corrupt. Then the community can proceed with whatever backup plan they have to take back control of the network when the majority of active witnesses are corrupt (a hard fork, or perhaps some other special built-in mechanism we devise for these situations in the future). To try to mask the censorship, the colluding witnesses might instead include a different (false) fraud claim transaction in place of the valid one that could get them removed. This way the protocol prevents them from including the valid fraud claim transaction, since only one can be included per every K blocks, and they have an excuse other than censorship for why that transaction wasn't included. If the valid fraud claim transaction circulating in the network still hasn't been included K blocks later, they can continue including additional (false) fraud claim transactions every K blocks until it (and all others like it) becomes invalid because the block with the bad root hash is more than P blocks in the past. In this way the colluding majority of witnesses could get away with including invalid root hashes without getting banned.

Fortunately, there are two problems with that attack plan. First, it would be unusual to see that many consecutive false fraud claim transactions, so full node operators would likely want to check whether some of the other fraud claim transactions floating around in the network that are not being included are valid. The witnesses are limited in the number of unique fraud claim transactions they can produce (uniqueness depending on the balance that pays the security deposit) because they have a limited amount of money. So after filtering for uniqueness, the full nodes could randomly choose some of the fraud claim transactions floating around in the network and have a good probability of picking one of the true fraud claim transactions rather than more of the colluding witnesses' false ones. Once enough of the full nodes have verified that the witnesses did in fact provide a fake root hash, they can try to vote them out and spread the word to others. If the witnesses then resort to censoring vote change transactions, more users become aware of the problem and the community starts considering its other options, like a hard fork. But it is the second problem with the attack plan that makes it less likely to happen. In order to maintain power while providing fake root hashes against all the other honest (and greedy) nodes trying to get their valid fraud claim transactions in, the colluding witnesses need to keep submitting false fraud claim transactions along with the full deposit each time. And because they are false, they lose that security deposit every time, making this attack very expensive. Even if they just wanted to include fake root hashes for a very short period of time (P blocks or less) to trick some lightweight client users once (probably in collusion with their wallet host providers) and make some money from double-spend attacks, they would still need to lose at least D*P/K funds in security deposits to carry out the attack.
But these colluding majority witnesses could have carried out such an attack against lightweight client users using double signing if they were willing to lose their witness security deposits (see my witness surety bond proposal) and their future income earning potential (let's estimate the opportunity cost of all the active witnesses losing their jobs as O), due to the fact that their victims would eventually submit the double sign proof that gets them banned. So if the security deposit in aggregate for all the active witnesses is C (and let's assume roughly half of the active witnesses are needed to carry out the attack), then the double sign attack is preferable to the fake root hash attack if (C+O)/2 < D*P/K. In other words, as long as we make sure P > K*(C+O)/(2*D), the attack vector described in the previous paragraph does not make any economic sense, since a better option exists (again assuming the witnesses also collude with the victims' wallet host providers). I will assume O is less than C, which means the previous inequality is satisfied if P > K*C/D. Considering that the fixed reward R a fraud claim transaction earns if valid is supposed to be less than the security deposit of a single witness, that means R < C/N, where N is the number of active witnesses. Even though a valid fraud claim transaction is virtually risk free, the reward needs to be comparable to the deposit to be worth it (particularly to be worth having enough liquid funds sitting around to use for this purpose at a moment's notice). Let's assume a reward equal to the deposit amount is sufficient to motivate enough full nodes to submit fraud claim transactions, and that the reward amount is half the security deposit of a single witness (R = C/(2*N)); then, to be able to provide such a reward and make this attack economically irrational, we require that P > K*C/(C/(2*N)) = 2*K*N. So with N = 101, and assuming we can get K down to 5, a value of P = 1011 will be sufficient.
That would mean full nodes will need a client that is lagging behind the present by P+K+1 = 1017 blocks (or roughly 17 minutes assuming 1 second block intervals).
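A quick sanity check of that arithmetic, assuming (as above) that the per-claim deposit D equals the reward R = C/(2*N):

```python
# Sanity check of the parameter derivation, assuming D = R = C/(2*N).
N = 101               # active witnesses
K = 5                 # blocks between permitted fraud claim inclusions
SECONDS_PER_BLOCK = 1

# P > K*C/D with D = C/(2*N) reduces to P > 2*K*N:
min_P = 2 * K * N
P = min_P + 1         # smallest integer satisfying the strict inequality
lag_blocks = P + K + 1

print(min_P, P, lag_blocks)                 # 1010 1011 1017
print(lag_blocks * SECONDS_PER_BLOCK / 60)  # roughly 17 minutes
```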

So with the blockchain protocol enhancements described above, I believe it is possible to provide pretty good security to lightweight clients. It won't be trust free like with full nodes. If their wallet host colludes with the majority of the witnesses, they can still be tricked into believing something about the state of the database that is not true (which puts them at risk of a double spend attack). But they also know that their lightweight client will have enough information stored locally to generate a proof that will severely punish the colluding witnesses economically, provided that: (1) the victim discovers they were scammed and publishes the proof within a week of being scammed; (2) they weren't tricked into believing a set of witnesses were active at the time they were scammed that in reality weren't actually active (above I discussed a method to reduce the likelihood of that happening); and (3) the community can come to a consensus within another week that the proof which would get the witnesses banned is being censored by the majority of the witnesses, along with the prescription that they treat the network as invalid for any further business until they can hard fork from a snapshot as of the head block at the time they reached said consensus. (This third, admittedly strict, requirement won't be necessary if the colluding witnesses are a minority of the active witnesses, or even in the majority case if we were to include some other blockchain protocol change that allows standby witnesses and any remaining unbanned witnesses to force an early maintenance period update when they include a valid proof that bans a majority of the witnesses.) This makes it unlikely that the witnesses would collude to trick the user like this in the first place.
The lightweight client users won't get tricked about the state of the database, because their wallet host can provide a small proof about the validity of some subset of the state of the database as of a minute or more in the past (the wallet host can do this without requiring personal cooperation from the witnesses), which the lightweight client can verify for the user. That means if the user is willing to wait around a minute (assuming 101 witnesses and 1 second block intervals) for confirmation before continuing with an irreversible transaction, they should be unlikely to fall victim to a double spend attack using the scheme I described above. I have other ways of reducing this confirmation time even further (to approximately 10 seconds), but it requires some additional complexity in the blockchain protocol AND either including all N active witnesses' signatures in each block header rather than just 1 (thus bloating the blockchain more) or using threshold signatures to avoid the blockchain bloat (but that requires all but 1 of the active witnesses to participate in the threshold signature generation process, meaning greater than 99% witness participation with N = 101, or else the confirmation time gracefully degrades from 10 seconds to 1 minute).

83
I'm presuming you're saying that because 2.0 requires an issuer to back his MPA with some kind of collateral, or other insurance? ...Because if there is no cost to issuing an MPA, what's to stop an attacker just creating a bunch of them with the transaction fee set to some insignificant value, and just DDOS the network?

Let's say you want to submit a transaction that needs to pay a transaction fee of 30 BTS (according to the current fee schedule). This might be a transaction that transfers a UIA from your account to another account. Also, pretend your account does not have any BTS at all. If the UIA issuer has some BTS in the fee pool (at least 30 BTS necessary in this case), then it is possible for you to submit this transaction even without any BTS to pay the fee if you are willing to accept the issuer's exchange rate.

Let's pretend the market price of the UIA is 15 BTS/UIA. The issuer may have set a core_exchange_rate of 10 BTS/UIA instead to play it safe (account for market liquidity, volatility, management expenses, profit, etc.). So you need to pay a fee of at least 3 UIA for that transaction otherwise you will not be able to get enough BTS (specifically 30 BTS) from the UIA's fee pool to pay the transaction fee. So let's say you submit the transaction that withdraws 103 UIA, sends 100 UIA to the recipient, and pays 3 UIA as a fee which is converted via the fee pool into 30 BTS (20% of which goes to the reserve pool and the remaining 80% goes to your registrar/referrer). What happens with the conversion is that the 3 UIA is deposited into the issuer's account and 30 BTS is withdrawn from the fee pool.

Now the issuer can go into the market and exchange the 3 UIA for BTS. The market price is 15 BTS/UIA, but because of liquidity let's pretend the issuer only gets 36 BTS. The issuer can then put 30 BTS back into the fee pool to top it off and take the remaining 6 BTS as profit.
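A toy sketch of the fee-pool bookkeeping in this example (the class and field names are illustrative, not the actual BitShares 2.0 implementation):

```python
# Toy model of the fee-pool conversion; names are illustrative only.
class UIAFeePool:
    def __init__(self, core_exchange_rate, fee_pool_bts):
        self.core_exchange_rate = core_exchange_rate  # BTS per UIA
        self.fee_pool_bts = fee_pool_bts              # BTS funded by the issuer
        self.issuer_uia = 0.0                         # UIA collected from fees

    def pay_fee_in_uia(self, required_fee_bts):
        """Convert the required BTS network fee into UIA at the issuer's rate."""
        if self.fee_pool_bts < required_fee_bts:
            raise ValueError("fee pool cannot cover the BTS network fee")
        fee_uia = required_fee_bts / self.core_exchange_rate
        # The UIA fee goes to the issuer; the BTS leaves the pool to pay the network.
        self.issuer_uia += fee_uia
        self.fee_pool_bts -= required_fee_bts
        return fee_uia

asset = UIAFeePool(core_exchange_rate=10, fee_pool_bts=1000)
fee_uia = asset.pay_fee_in_uia(required_fee_bts=30)
print(fee_uia, asset.fee_pool_bts, asset.issuer_uia)  # 3.0 970 3.0
```

The issuer can then sell the collected 3 UIA on the market (for, say, 36 BTS), put 30 BTS back into the fee pool to top it off, and keep the 6 BTS difference as profit.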

84
Right now, it's possible to DDOS the network in exactly the same way as bitcoin is suffering.

All you need do is create an MPA with very low value, get it voted in (admittedly, the hard part), then create millions of transactions for it - the fee is paid only in MPA units, so it will be a very low cost attack.

Umm, I don't think so. In 0.x I believe you need to pay 2 times the BTS fee converted into the MPA units using the median price feed (although this MPA fee goes into the yield pool rather than to the delegates).

I stand corrected! Presumably there is some logic to protect against a price feed of 0?

Hmm, don't know. But even if so I doubt there would be any logic against a really small number close to zero. The risk of that should be low because that would require the majority of the delegates to want to intentionally spam the network (and we've got bigger problems if they want to do that). Besides you can just vote them out if they do such a foolish thing.

Anyway, it doesn't matter with 2.0 because the fee pool method is superior. An issuer would be crazy to set a core_exchange_rate well below market rate because they would then essentially be throwing their money away and eventually run out.

85
Right now, it's possible to DDOS the network in exactly the same way as bitcoin is suffering.

All you need do is create an MPA with very low value, get it voted in (admittedly, the hard part), then create millions of transactions for it - the fee is paid only in MPA units, so it will be a very low cost attack.

Umm, I don't think so. In 0.x I believe you need to pay 2 times the BTS fee converted into the MPA units using the median price feed (although this MPA fee goes into the yield pool rather than to the delegates). In 2.0 the solution is more elegant. The minimum fee paid needs to be whatever the required network BTS fee is for the transaction and it goes into the reserve pool, but you can pay the fee using the MPA by using the MPA's fee pool. The issuer maintains a fee pool filled with BTS and publishes the core_exchange_rate (the price at which they are willing to exchange the MPA for the BTS in the fee pool for the purpose of paying transaction fees) and the blockchain takes care of conversion and fee payment automatically.

86
Technical Support / Re: memo field encryption
« on: July 08, 2015, 05:51:49 am »
I am looking for some information about exactly how the memo field is encrypted.  Specifically to non titan accounts.

If I send a transaction and include a memo, could I prove that this transaction contains that specific memo without exposing my private key?

Memos are encrypted using symmetric encryption where the shared encryption/decryption key is obtained through Diffie-Hellman key exchange between the sender and recipient. The sender uses a one-time key (the corresponding public key of which is publicly included with the transaction) with the recipient's public active key. Only the sender (through knowledge of the secret one-time private key and the recipient's public active key) and the recipient (through knowledge of their active private key and the one-time public key included in the transaction) are able to derive this shared key. If you want to prove to someone what the memo says (and who the sender is) you (and by you I mean either the sender or the recipient, but no one else) could in theory securely provide them with the shared key without any risk to the funds in that balance or exposing anything about your private keys.
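To make the structure concrete, here is a toy sketch of the scheme: a one-time keypair plus the recipient's long-term key produce one shared secret via Diffie-Hellman, which then keys a symmetric cipher. The real implementation uses ECDH on secp256k1 with AES; here a classic integer Diffie-Hellman with deliberately toy-sized (insecure) parameters and a SHA-256 keystream stand in so the example runs with only the Python standard library.

```python
import hashlib
import secrets

# Classic Diffie-Hellman over integers with deliberately toy-sized,
# INSECURE parameters; the real scheme uses ECDH on secp256k1 and AES.
P_MOD = 2**127 - 1   # a Mersenne prime, far too small for real use
G = 3

def keypair():
    priv = secrets.randbelow(P_MOD - 2) + 1
    return priv, pow(G, priv, P_MOD)

def shared_key(my_priv, their_pub):
    secret = pow(their_pub, my_priv, P_MOD)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_cipher(key, data):
    # SHA-256 keystream stand-in for a real symmetric cipher; applying it
    # twice with the same key recovers the plaintext.
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

recipient_priv, recipient_active_pub = keypair()  # recipient's active key
one_time_priv, one_time_pub = keypair()           # sender's one-time key

# Sender: one-time private key + recipient's public key -> shared key.
k_sender = shared_key(one_time_priv, recipient_active_pub)
memo_ciphertext = xor_cipher(k_sender, b"invoice #1234 paid")

# Recipient: active private key + the one-time public key published in the
# transaction -> the same shared key.
k_recipient = shared_key(recipient_priv, one_time_pub)
print(xor_cipher(k_recipient, memo_ciphertext))
```

Handing the shared key (k_sender) to a third party lets them decrypt and verify the memo without learning either party's private key, which is the proof mechanism described above.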

87
100K TPS requires over 10 MB/sec of network bandwidth

I'm guessing that capital B wasn't a typo. Do we have any idea what sort of infrastructure requirements Visa has, or perhaps Verizon for tracking cellphone data/minutes usage? I'd imagine it is comparable.

Considering 100K TPS at 100 bytes per transaction would require more than 10 Megabytes per second and just one signature alone requires 65 bytes... yeah the capital B is not a typo. And if most of the transactions are confidential transactions, we are looking at an order of magnitude or more increase in network bandwidth (in that case, at 100K TPS, 1 Gbps would likely not be enough).
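The arithmetic behind those figures, as a quick check:

```python
# The bandwidth arithmetic behind the figures above.
tps = 100_000
avg_tx_bytes = 100              # optimistic; a single signature is already 65 bytes

bytes_per_sec = tps * avg_tx_bytes
print(bytes_per_sec / 1e6)      # 10.0 MB/s
print(bytes_per_sec * 8 / 1e6)  # 80.0 Mbps

# Confidential transactions are an order of magnitude larger or more, so:
ct_bytes_per_sec = tps * (10 * avg_tx_bytes)
print(ct_bytes_per_sec * 8 / 1e9)  # 0.8 Gbps at 10x, and beyond 1 Gbps past that
```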

88
but how confident can the community be that we can ever foresee every attack vector?

0% :)


Network security comes at a cost. Under PoW, that cost is explicit. Under DPoS, that cost is opaque, but real nonetheless - the cost of voting. I've made this point previously, that it is not a verifiable claim to say DPoS is lower expense than PoW for this reason. Either DPoS also has a high cost, or compromises security.

Warning! Too much text below! Tl;dr: I try to analyze the operating cost difference between DPoS and PoW given the same amount of security for both against two particular classes of attacks, which I call a trust attack and a brute-force attack. Trust attacks require convincing others (miners in PoW, stakeholders in DPoS) to delegate their power to the attacker. The conclusion here is the obvious one we have discussed plenty of times in this community: DPoS is both more decentralized and has much lower cost for the same amount of security against this particular attack compared to PoW. A brute-force attack requires outright purchasing the fundamental consensus power (mining power in PoW, or stake in DPoS) and using it to attack the network in a way that the attacker hopes will end up being net profitable. For this attack, I try to compare PoW to DPoS with the security bond modifications that I proposed. This analysis is much trickier and requires a lot of assumptions, but my conclusion is that even with conservative estimates DPoS can be much cheaper to operate than PoW for the same amount of security against this brute-force attack. Finally, I conclude by noting that PoW's objectivity does provide some security advantages over DPoS under some attack scenarios, but my opinion is that this advantage is negligible compared to the much higher operating cost required for PoW.


PoW has better objective consensus compared to far more subjective consensus of PoS (DPoS included) systems. That is really useful when you want to be confident that you are likely on the correct chain even with a compromised internet connection as long as you have an estimate for the accumulated work done on the blockchain thus far. It is also useful in allowing everyone to come to a consensus on one particular chain (whether it is the chain they actually desire to accept or not) in the rare case of a successful long-term reorganization attack. On the other hand in PoS systems, under such a scenario it would require subjective consensus (relying on trustworthy nodes, businesses, other entities) to resolve which is or should be the "real" chain. Hopefully the economic incentives are designed to make such an attack very unprofitable and therefore very unlikely.

If we are willing to accept those disadvantages, we get a lot of benefits from DPoS as a result of this trade-off. One benefit is faster and more deterministic block generation. But the main benefit is a much lower network operating cost given the same cost to an attacker that is trying to attack the system through methods that don't require compromising the victim's internet connection (attacks where the attacker both compromises the victim's internet connection and has control over more than 50% of witnesses are more profitable in DPoS for a given network operating cost than in the similar case of a PoW system where the attacker has control of more than 50% of mining power).

So where does the cost reduction come from? To answer this, I will examine two different attacks. I'm going to call the first a trust attack and the second a brute-force attack.

The trust attack requires convincing other people with consensus power (hashing power in PoW or stake in DPoS) to delegate their consensus power to the attacker rather than to anyone else. In the case of PoW there is an economic incentive to delegate hashing power to an entity (mining pools) other than yourself as long as you trust them to honor the deal and pay you your fair share of block rewards. In DPoS, only the entities that are delegated the stake voting power, the witnesses, are allowed to produce blocks and only if they have sufficient approval. So again there will naturally need to be delegation of the consensus power. In both cases, the entity that you have delegated the consensus power to can break their vow and use the privileges granted from the delegated consensus power to attack the network in some way. However, they are pretty much guaranteed to get caught if they do so and their reputation will be forever destroyed. This means they cannot convince people to ever trust them with consensus power again, which means they cannot be a block producer again. Since they were rewarded for being a block producer, there is an opportunity cost in the form of lost future income that motivates them to behave. However, if the profit potential is worth more than this opportunity cost, it would be rational for them to attack the network assuming they are not concerned about other things like their reputation in real life (assuming their identity has already been revealed) or the value of their investments (which they want to hold) in the system they are attacking. What we have seen in Bitcoin is that the vast majority of hashing power is concentrated in a handful of mining pools. BitShares with its 101 witnesses is far more decentralized than Bitcoin in this manner. Collusion among mining pools to get 51% hashing power is thus easier than collusion among 51% of witnesses.

If we compare the operating costs of PoW and DPoS, we will see that they are not too different if you ignore the significant costs of mining. Some amount of the block rewards goes to the mining pool operators (their profit after operating expenses makes up their opportunity cost) and the rest gets distributed to the actual miners. If we wanted a similar opportunity cost for witnesses, we would have to pay the active witnesses in aggregate the same amount as the fraction of block rewards that goes to the mining pool operators (which is a tiny fraction of the block rewards, since the vast majority goes to the miners). However, DPoS does not have to pay for miners, so its overall operating costs are dramatically lower.

What about another form of attack? I call a brute-force attack one that requires the attacker to purchase or otherwise obtain control over the actual consensus power directly. In the case of PoW, this means buying enough ASICs and paying for the electricity costs to operate them. In DPoS, this means buying the core stake with which they can vote for their own witnesses. Keep in mind that the attacker does not need to purchase these things legally; they can get control over them illegally too. In the case of PoW, this might mean they hack into enough miners' computers and hijack the block headers that their ASICs hash. In the case of DPoS, this might mean they hack into enough stakeholders' computers and steal the private keys controlling their stake. I am assuming that this kind of wide-scale hacking attack is hard to do. Even if it were feasible, it is important to notice that the number of individuals to attack to get 51% of hashing power in PoW is very likely less than the number of individuals to attack to get a sufficient amount of stake to vote in 51% of witnesses (although the former group might have better operational security than the latter, that is unlikely to make a difference).

One other thing to realize about a brute-force attack is that a lot of the value spent acquiring this consensus power can be recovered after the attack. In PoW, any electricity consumed is forever lost and cannot be resold, but the ASICs can be resold (granted, at a lower price than they were initially acquired for). However, if the ASICs are only needed for the short duration of the attack, their resale value may not be too bad. Similarly, an attacker can buy stake to vote in their evil witnesses to do the attack, and then immediately afterward sell the stake to recover costs as much as possible. It is only the net difference that the attacker needs to pay (in addition to electricity costs for PoW brute-force attacks) to carry out this attack. If the profit from the attack is greater than this difference, then it is rational for the attacker to carry it out. However, there are a lot of economic uncertainties here. After the attack is successfully carried out, the price of the stake will very likely drop significantly. But it is not clear whether this will be temporary or how significant the drop will be (I doubt a foreseen but theoretically rare attack like this would kill the coin). A drop immediately afterward helps increase the net costs of PoS brute-force attacks, which is a good thing. However, a drop in the price of a PoW coin will also likely correlate with a drop in the value of the ASICs that mine that coin, so the net cost to the attacker also increases for PoW brute-force attacks. Also, ASICs are a depreciating asset whereas the core stake can actually appreciate in value (sometimes a lot!), which is a win for PoW security as far as brute-force attack costs go.

DPoS can improve its security by requiring the witnesses to deposit funds which can be destroyed by the network if they are caught cheating. We define the probability of successfully burning the deposit of an attacker's witnesses as p (it is safe to assume p is close to 1, e.g. p = 0.95). The value of the required deposited stake among all witnesses is C. In addition to the funds to cover node operating expenses, the blockchain pays witnesses a fraction f of the locked funds per year to compensate for the opportunity cost of locking the funds (f = 0.05 seems reasonable, which corresponds to a 5% p.a. return). The expected cost to an attacker in control of 51% of the witnesses (the minimum needed to take control of the DPoS network and carry out the attack) is approximately p*C/2, plus whatever extra cost they pay due to the drop in value of their voting stake as a result of the attack (let's be conservative and assume this is zero).

In a PoW brute-force attack, the attacker needs to purchase enough ASICs to generate slightly more hashing power than the current aggregate hashing power of the entire network. After the attack, the attacker can then sell the ASICs to whoever wants them (rational greedy miners are likely not even going to care if they are purchasing useful ASICs from a known attacker, but most likely they won't even know who the attacker was). There is going to be some net cost Ca from this buy-sell cycle. The attacker will also need to pay for electricity to run the ASICs for the duration of the attack; call this cost Ce. If the attacker only wants to do this attack once, they will only need to run the ASICs for around 8 blocks or so (enough to do a chain reorganization against victims who waited the full 6 blocks, or 1 hour, as they are told to do). Let's be generous and say they pay for electricity to run the ASICs for 53 blocks, which would take approximately 8.8 hours, or 1/1000 of a year. Therefore, Ce can be estimated as 1/1000th of the cost of electricity consumed to run the Bitcoin network for a year. I am going to try to come up with some back-of-the-envelope estimates for these costs. From this table I see that the most efficient (highest MHash/J) ASIC is the AntMiner S5. It has a cost efficiency of 3,121 MHash/s per dollar and a power efficiency of 1,957 MHash/J. Bitcoin's current hash rate is approximately 400 billion MHash/s. This means $128 million worth of these ASICs would be necessary, which would consume 205 MW of power. Assuming an electricity cost of $0.08/kWh, it would take $144 million to run these ASICs for 1 year, but only $144,000 to run them for the desired 8.8 hours. Thus, Ce = $144,000. By the way, new BTC is currently being produced at a rate of $40,000/hour, or $350 million per year (according to the current market price).
So if we subtract the $144 million electricity cost to run those ASICs, that leaves $206 million per year of revenue to cover the capital cost of the ASICs and of course any profit. I am not sure what kind of ASIC the typical miner owns or how long they last before becoming obsolete, but these numbers seem reasonable as a sanity check on the math. To calculate Ca I will make a completely wild assumption that the attacker can sell their ASICs after the attack at less than a 10% discount. So let's say Ca = $12 million. Even if the discount were 2%, it is clear that the loss on reselling the ASICs outweighs the electricity cost.
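Reproducing those back-of-the-envelope numbers (same inputs as above; the 10% resale discount is the stated wild assumption, and everything matches modulo rounding):

```python
# Reproducing the PoW attack-cost estimates (modulo rounding).
network_hashrate = 400e9   # MHash/s (Bitcoin, at the time of writing)
cost_eff = 3121            # MHash/s per dollar (AntMiner S5)
power_eff = 1957           # MHash/J (i.e. MHash/s per watt)
price_per_kwh = 0.08       # dollars

asic_capital = network_hashrate / cost_eff        # ~ $128 million
power_draw_w = network_hashrate / power_eff       # ~ 205 MW
yearly_electricity = power_draw_w / 1000 * 8760 * price_per_kwh  # ~ $144M/yr

Ce = yearly_electricity / 1000   # attack runs ~8.8 hours, ~1/1000 of a year
Ca = 0.10 * asic_capital         # wild assumption: 10% resale discount

print(round(asic_capital / 1e6), round(power_draw_w / 1e6))
print(round(yearly_electricity / 1e6), round(Ce), round(Ca / 1e6))
```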

The cost of a DPoS brute-force attack will be higher (and thus DPoS more secure against this particular attack) than the PoW brute-force attack if p*C/2 + Cs > Ca + Ce, where Cs is the net cost of buying enough stake to vote in the bad witnesses and then selling the stake (if desired) after the attack (I will conservatively assume this is its minimum value of zero). The PoW network, however, has to pay a large expense to economically incentivize the miners to actually mine. I will use the current Bitcoin expense as an example. As I mentioned before, the Bitcoin blockchain is currently paying an expense Cw of $350 million per year to cover the electricity costs of approximately $144 million per year (or likely higher, since I used a low electricity rate) and the capital costs of an ASIC base worth (very) roughly $128 million. If I assume all of these PoW costs scale linearly with the blockchain expense (because difficulty will adjust), then a $350 million per year blockchain expense corresponds to an attacker expense of Ca + Ce that is roughly somewhere between $150,000 (for a nearly 0% discount) and $12,144,000 (for a 100% discount, i.e. the ASICs cannot be resold), or a ratio r = Cw/(Ca + Ce) that is very roughly between 2,333 and 29, respectively. The yearly cost to DPoS to pay for the opportunity cost of the locked stake is Cd = f*C, which must be greater than 2*f*(Ca + Ce)/p = 2*f*Cw/(p*r) in order for DPoS to be more secure than PoW against this particular attack. So with the conservative case of r = 29 (100% discount) and the other values above, the minimum yearly cost for DPoS (excluding basic node operating costs) is Cd = 2*(0.05)*($350 million)/(0.95 * 29) = $1,270,000.
More importantly, the ratio of the PoW cost (excluding basic node operating expenses, but I will still use the above Cw value since Bitcoin mining node operating expenses are currently negligible compared to hashing expenses) to the DPoS cost (again excluding basic node operating expenses, which should be similar to those of a PoW system) for the same amount of security against this particular attack is approximately Cw/Cd = p*r/(2*f) = (0.95 * 29)/(2*0.05) = 275.5, and potentially orders of magnitude greater (if the attacker can only resell the ASICs at a small discount, pushing r toward the high end of the range).
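Checking the Cd and Cw/Cd figures with the stated values (p = 0.95, f = 0.05, Cw = $350M, conservative r = 29):

```python
# Checking the minimum DPoS expense and the PoW/DPoS cost ratio.
p = 0.95      # probability the attacker's deposits actually get burned
f = 0.05      # yearly return paid on the locked witness deposits
Cw = 350e6    # yearly PoW expense (Bitcoin block rewards at market price)
r = 29        # conservative Cw/(Ca + Ce): ASICs cannot be resold at all

attacker_cost = Cw / r            # the Ca + Ce implied by the ratio
min_C = 2 * attacker_cost / p     # total deposit C needed so p*C/2 > Ca + Ce
Cd = f * min_C                    # minimum yearly DPoS expense

print(round(Cd))                  # ~ $1.27 million per year
print(round(Cw / Cd, 1))          # 275.5, i.e. p*r/(2*f)
```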

The other thing to consider when measuring security is not just the profitability of the attack, but how much initial capital is necessary to actually carry it out. To carry out the PoW brute-force attack, the attacker would need approximately $130 million, assuming we use numbers currently applicable to the Bitcoin network.

In DPoS, the attacker needs enough stake to vote the witnesses in and enough for the deposit (which may or may not vote). The amount needed for the deposit is C/2, which should be greater than Cw/(p*r) if DPoS is to have a lower cost than PoW for the same security against this attack. To fairly compare the PoW numbers with DPoS, we should assume that the DPoS core stake has a similar market cap as BTC (currently $3.8 billion) and conservatively use the Cw/r value of $150,000 (thus C/2 should be approximately $158,000, which is small relative to $130 million, so we can ignore it; we could ignore it anyway if the security deposit were allowed to vote, since it offsets some of the need to buy additional voting stake).

Even assuming a very liquid market (and/or stake bought and resold very slowly without compromising the attack), with just ~3.5% of the stake being necessary (about $133 million at a $3.8 billion market cap) it would already cost the attacker more initial capital than the PoW brute-force attack, and the attacker won't be able to get any witnesses voted in with only 3.5% approval anyway. Currently, approximately 13.5% of stake is necessary to get the majority of BitShares 0.x delegates voted in; let's assume similar voting patterns carry over to DPoS 2.0 witnesses. Ignoring the fact that purchasing 13.5% of stake would drastically raise the price (and thus market cap), this means that BitShares would have higher initial capital requirements for this attack than Bitcoin if it had a market cap of at least $963 million. At its current market cap, the initial capital requirement is only approximately $2.2 million (again, not considering how the market cap would dramatically increase if someone actually attempted to buy 13.5% of all BTS).
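To make the break-even arithmetic above concrete, here is a small illustrative sketch; all figures are the ones quoted in the text ($130 million PoW attack cost, ~13.5% of stake to vote in a majority of delegates), and the variable names are my own.

```python
POW_ATTACK_COST = 130e6        # approx. capital needed for the PoW brute-force attack
STAKE_FRACTION_NEEDED = 0.135  # approx. stake needed to vote in a majority of witnesses

def dpos_attack_capital(market_cap: float) -> float:
    """Initial capital to buy enough stake, ignoring price impact."""
    return STAKE_FRACTION_NEEDED * market_cap

# Market cap at which the DPoS initial capital matches the PoW attack cost:
breakeven_cap = POW_ATTACK_COST / STAKE_FRACTION_NEEDED
print(f"${breakeven_cap / 1e6:.0f} million")  # ~$963 million
```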

It is important to note that these were only two classes of attacks. This rough analysis (I appreciate any corrections or improvements) hopefully shows that, for the same amount of security against these attacks, a DPoS network costs less to operate than a PoW network. It says nothing about the relative security of the two networks against other attacks. As I mentioned in the beginning, there is a trade-off: we give up objectivity by going from PoW to DPoS, which makes DPoS less secure than PoW (almost regardless of operating cost) against certain attacks.

For example, if the majority of witnesses are colluding to attack a victim, and they also control that victim's internet connection and can maintain that control for over 2 weeks, then there is some chance the victim can be kept in the dark about the double spend long enough that it becomes too late to punish the witnesses by burning their deposits with a double sign proof. Essentially, the probability p gets close to 0 in this case, which means the Cw/Cd ratio also falls to a value close to 0 (and importantly less than 1, which means PoW is more secure against this case for the same cost). It is very questionable how realistic this attack scenario is. If the probability of the victim discovering the attack and providing the double sign proof to the blockchain in time can be kept above 2*f/r = 2*0.05/29 = 0.00345, then the DPoS system still has better security for the same cost. In fact, given the numbers above, DPoS can have the same security as PoW against this attack with an order of magnitude lower operating cost as long as the probability of the attacker getting away with this particular attack without losing the security deposit is less than 96.5%. Increasing the 2 week delay to withdraw the deposit is an easy way to sufficiently decrease this probability of attack success (if even necessary), at the inconvenience of delaying how long it takes for a retired witness to get back their deposit.
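The break-even discovery probability above follows directly from the cost-ratio formula: DPoS beats PoW on cost whenever p > 2*f/r, and keeps a 10x cost advantage while p > 10 * 2*f/r. A quick sketch (function name is illustrative):

```python
def min_discovery_probability(f: float, r: float, cost_advantage: float = 1.0) -> float:
    """Smallest p for which Cw/Cd = p*r/(2*f) >= cost_advantage."""
    return cost_advantage * 2 * f / r

p_equal = min_discovery_probability(f=0.05, r=29)                      # ~0.00345
p_10x = min_discovery_probability(f=0.05, r=29, cost_advantage=10)     # ~0.0345
# Attacker escape probability below which DPoS still has a 10x cost advantage:
print(round(p_equal, 5), round(1 - p_10x, 3))  # 0.00345 0.966 (i.e. ~96.5%)
```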

Another case in which PoW's objectivity shines is when synchronizing the blockchain after a long time offline. Even if the user has no estimate for what the accumulated work done should be, then assuming their internet isn't compromised, they will likely be able to find the blockchain with the larger accumulated work (the correct blockchain) without any trust. But with DPoS, a very large majority of old witnesses that were simultaneously active at some point in the past (but have long since stopped being witnesses and have withdrawn their deposits, and so have no stake to lose) could collude to continue a fake blockchain from the fork point. If they also compromise the victim's internet connection, they can trick the user into syncing to a fake blockchain and thus double-spend attack the victim. What is worse is that even after getting access to an uncompromised network some time after syncing, the victim's client may refuse to switch to the real chain because the fork point would be past the chain reorganization limit. Furthermore, if nearly all of the old witnesses colluded (so 99+% of the witnesses at a single point in the blockchain history colluded to make the fake blockchain history, and therefore could likely build a longer fake blockchain than the real blockchain, which will naturally have some witnesses occasionally missing blocks), then the victim couldn't figure out which chain was the real one even if their internet connection was never compromised. In this case I believe the client should do the right thing and get stuck rather than picking one chain or the other (is that correct?), so the victim is not at any actual risk of a double-spend attack, but it is annoying because it then requires the victim to rely on his social network of trust to determine the correct chain (he needs to acquire a trusted recent checkpoint and add it to the client).
Thankfully, with a 2 week withdrawal delay on the security deposit, witnesses who retired or were voted out less than 2 weeks ago will be highly unlikely to dare carry out this attack. This means that someone syncing the blockchain every week is in no real danger of this attack. Furthermore, if we assume witness turnover is slow, it is unlikely that there will be enough old witnesses with nothing at stake that are willing to collude to attack users who haven't synced for a couple months (not to mention that it is difficult to know who specifically to target). However, it is probably prudent to assume that if someone hasn't synced the blockchain for several months, they should first acquire a recent trusted checkpoint and add it to their client (assuming it isn't already done automatically in their most recently downloaded version of the client). Finally, most people would be using a lightweight client setup anyway, so all of this responsibility is placed on the host and the users are simply trusting that the host will not double-spend attack them because it would destroy their reputation and future business.

89
My thinking was that if the attacker used their own stake to vote in all their delegates, then transferred to an exchange and sold (losing the votes), they would have more time to sell and execute the attack if votes are only tallied once a day, instead of immediately.

I see. I'm not too worried about that attack considering the difficulty of controlling enough stake to unilaterally vote in enough active witnesses for an attack.

I like the idea to require witnesses to post a bond, which they lose if they participate in an attack.  This helps make attacks more costly.

I also discussed elsewhere how we can have a super majority of the witnesses confirm transactions in just a few blocks (2 to 3 seconds) rather than waiting for N/2 blocks where N is the number of witnesses. If users wait for those 3 blocks before continuing with irreversible transactions, they are protected from even a large minority of colluding witnesses. A majority of witnesses is even harder to get voted in (would likely require more stake in the attack you described since the approval rating of witnesses increases as you go up the ranks) and there would be a larger aggregate bond deposit at stake among the colluding witnesses.
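The confirmation rule described above could be sketched roughly as follows. This is a hypothetical illustration of the idea (not actual BitShares code): a block is treated as confirmed once witnesses amounting to a 2/3 supermajority of the active set have acknowledged it, rather than after N/2 subsequent blocks.

```python
def is_confirmed(acks: set, active_witnesses: set, threshold: float = 2 / 3) -> bool:
    """acks: set of witnesses that have signed/acknowledged the block.
    Confirmed once a supermajority of active witnesses have acknowledged it."""
    return len(acks & active_witnesses) >= threshold * len(active_witnesses)

witnesses = {f"w{i}" for i in range(101)}
acks = {f"w{i}" for i in range(68)}   # 68 of 101 witnesses have acknowledged
print(is_confirmed(acks, witnesses))  # True: 68 >= 2/3 * 101 (~67.3)
```

With 1-second blocks, gathering these acknowledgements within 2 to 3 blocks would give the few-second confirmation described above.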

Furthermore, I'm just now thinking about how the chain reorganization and "longest chain" rules could be modified to take double sign proofs into account. Even if a chain is shorter in a round, if it carries a double sign proof showing that the witnesses in the longer chain are banned for double signing, all clients (assuming their internet connections aren't completely compromised) should be able to ignore that longer chain and come to consensus on the shorter chain.

The blockchain protocol could also specify that a valid double sign proof forces an early vote tally, which would allow the banned witnesses to be immediately removed from the active witness set and replaced by the standby witnesses waiting in the ranks. This would mean that even if only 1 honest active witness remains, double sign proofs of the other bad witnesses submitted to the network could automatically allow the blockchain to recover within N seconds (where N is the number of active witnesses) of the proofs being submitted. Granted, a new user with a compromised network connection could still be tricked onto the invalid chain with the majority of the witnesses (despite the fact that they are banned on the "real" chain) still building blocks, but that attack is already possible (and hopefully very unlikely to occur) in the current system.
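The modified fork-choice rule I'm describing might look something like the following sketch. This is purely hypothetical (the structures and names are mine, not Graphene's): before comparing chain lengths, discard any chain extended past the fork point by witnesses banned via a double sign proof, falling back to plain longest-chain only if every candidate is tainted.

```python
from dataclasses import dataclass

@dataclass
class Chain:
    # Producer names of blocks since the fork point, oldest first.
    blocks: list

def choose_chain(chains, banned):
    """Prefer the longest chain among those not built on by banned witnesses;
    fall back to plain longest-chain if every candidate is tainted."""
    clean = [c for c in chains if not (set(c.blocks) & banned)]
    candidates = clean or chains
    return max(candidates, key=lambda c: len(c.blocks))

# The longer chain was built by witnesses proven to have double-signed,
# so the shorter honest chain wins.
long_bad = Chain(blocks=["w1", "w2", "w3", "w1", "w2", "w3"])
short_good = Chain(blocks=["w4", "w5"])
print(choose_chain([long_bad, short_good], banned={"w1", "w2"}) is short_good)  # True
```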

90
Regarding the once a day vote compilation, that seems like a problem to me. If there is a problem, people need to be able to vote a bad actor out as soon as possible, not in 24 hours. (Maybe it doesn't need to be in 1 second, but it needs to be sooner than a day.)

If you only tally votes once a day, an attacker could have an entire day to sell off their stake and then execute an attack, increasing the vulnerability to a nothing-at-stake attack. (You can get a lot more value back selling over a day than you can in only 15 minutes.)

Why would the attacker have to sell over 15 minutes or a day? They don't need stake to be a witness. They can sell it off over a year or more. Or never have stake to begin with (other than enough to register their witness). What is at stake is their future income earning potential. If we implement this proposal then the amount deposited in their bond will also be at stake.

The 1 day tally isn't a big deal. Voters are going to be much slower than that to react. It will probably take 1 week for stakeholders to react enough to get rid of a witness. This is why we need the blockchain to automatically ban a witness (and take away their deposit) if someone submits a double sign proof (again see my linked proposal).
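For concreteness, a double sign proof could consist of two distinct signed block headers for the same slot from the same witness. The sketch below is hypothetical (the header fields are illustrative, and signature verification is stubbed out; a real implementation would check the cryptographic signatures before accepting the proof):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedHeader:
    slot: int        # production slot / timestamp the witness was scheduled for
    witness: str     # witness claimed to have signed this header
    block_id: str    # stand-in for the block hash

def is_double_sign_proof(a: SignedHeader, b: SignedHeader) -> bool:
    """Same witness, same slot, two different blocks => double sign.
    (Real code would also verify both signatures against the witness key.)"""
    return a.witness == b.witness and a.slot == b.slot and a.block_id != b.block_id

h1 = SignedHeader(slot=42, witness="w7", block_id="abc")
h2 = SignedHeader(slot=42, witness="w7", block_id="def")
print(is_double_sign_proof(h1, h2))  # True: grounds to ban w7 and burn its deposit
```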
