Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - arhag

Pages: 1 2 3 4 [5] 6 7 8 9 10 11 12 ... 81
61
General Discussion / Re: Mumble Attendance Today
« on: July 26, 2015, 03:05:44 am »
Forum: arhag
Mumble: arhag
BTS: arhag

62
General Discussion / Re: Brownie Distribution Update
« on: July 26, 2015, 03:05:11 am »
| arhag | arhag | arhag |

63
We still need libsecp256k1-zkp ported to Javascript for the web wallet. Have you looked into Emscripten for that? I suppose one of the advantages of the Qt lightweight wallet that Nathan is working on over the web wallet is that it can directly link with the C code.
What would it cost to have YOU do it right away?
I feel you know the Graphene code well already and am pretty damn certain the devs could use your assistance almost everywhere!!

https://arhag.github.io/libsecp256k1-demo/

Bam!  ;)

Well, it is only an initial demonstration of just the signing and verification. But the hard part (took me about 25 to 30 hours) was getting the framework of the wrapping code to work well and debugging all kinds of build system issues with Emscripten (the Closure compiler variable renaming drove me nuts). Adding the additional C functions to the API should be relatively easy.

It isn't very well documented but the code can be accessed from here: https://github.com/arhag/crypto-experiments/tree/emscripten/emscripten/libsecp256k1-demo
Still lots to be done with the code, and I should put up a better README explaining how to build it. But for now a skilled user should be able to figure it out.

First, you need to install the Emscripten SDK. Get and compile the portable SDK from the website. The one in the Ubuntu repos is older and wasn't able to compile the code because of a lack of __int128 support. Then, with the PATH set up, you can simply enter the libsecp256k1.js-build folder and run
Code:
./configure
make
And it should hopefully compile everything and even install the built libsecp256k1.js into the parent folder. I already included a pre-built version in the GitHub repo, so if you want to make your life easier and you don't need to change any of the Javascript on the library side, you should be able to use that. Then you can just serve the static content in the parent folder (libsecp256k1-demo folder) using whatever web server tool you want. If you have Emscripten already installed, you can simply run
Code:
emrun secp256k1-test.html
from the folder and it should even launch your default browser loaded to that page.

If you want an unoptimized debug build (without all that minification and variable renaming by the Closure compiler), just set the DEBUG environment variable to 1 before calling configure. So, for example:
Code:
DEBUG=1 ./configure
make

I should note that I have only tested this on a Linux machine and on the Chrome browser. So your mileage may vary. Let me know if it doesn't work on certain machines and browsers. I can tell you already that the build scripts are *nix specific, so they won't work out of the box on Windows. Also, I know that older versions of IE won't be able to even run the pre-compiled version.

One other thing I should mention is that a lot of the code complexity in the wrapper comes from the fact that I am loading the Emscripten-generated code in a Web Worker. This requires all kinds of serializing and deserializing of the arguments and return values of the API calls so they can be sent through postMessage. The reason for this is that I noticed the initial generation of the context takes a long time (several seconds). I thought it would be unacceptable to have the GUI freeze while it was loading. Also, in theory some of the other functions could take a long time too. It is much better to have that computation occur on a separate thread to keep the GUI fast and responsive.

This however does mean the API is a little bit more difficult to use because it is asynchronous. But Javascript users are used to that already. I make use of Promises to avoid callback hell and get code that looks very similar to synchronous code. Finally, it should be noted that though the entire Emscripten-generated code is running in parallel in its own thread (in its own Web Worker), the code is still inherently single-threaded. Emscripten does not (yet) support compiling threaded code. For that reason I have some checks to make sure that the API can only be called one at a time. You must receive the return value from your previous call before you are allowed to make a new call.
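To give an idea of what that looks like from the calling side, here is a stripped-down TypeScript sketch of the pattern (the class name, method names, and worker script path are made up for illustration; the real wrapper in the repo is organized differently):
Code:
// The Emscripten-generated code runs inside a Web Worker; arguments and
// results are serialized through postMessage, and only one API call may be
// outstanding at a time. Names here are illustrative, not the real API.
type Pending = { resolve: (result: any) => void; reject: (err: any) => void };

class Secp256k1Client {
  private worker = new Worker("libsecp256k1.js");   // path is an assumption
  private pending: Pending | null = null;

  constructor() {
    this.worker.onmessage = (ev: MessageEvent) => {
      const p = this.pending;
      this.pending = null;        // free the slot before handing back the result
      if (p) p.resolve(ev.data);  // real code would deserialize here
    };
  }

  // Send one API call; reject if a previous call has not returned yet.
  call(method: string, args: Uint8Array[]): Promise<any> {
    if (this.pending !== null) {
      return Promise.reject(new Error("previous call has not returned yet"));
    }
    return new Promise((resolve, reject) => {
      this.pending = { resolve, reject };
      this.worker.postMessage({ method, args });
    });
  }
}

// Usage with Promises reads almost like synchronous code:
//   const client = new Secp256k1Client();
//   const sig = await client.call("sign", [msgHash, privKey]);
//   const ok  = await client.call("verify", [msgHash, sig, pubKey]);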
 

64
General Discussion / Re: Non-Bitcoinian Geometry
« on: July 24, 2015, 02:33:13 am »
(Kind of like how we ran out of money 1/3 of the way through and luckily ran into paid delegates and cryptonomex funding).

Using the word "luck" implies paid delegates and Cryptonomex somehow just fell out of the sky just in time to save us. Running out of AGS funds motivated the need to implement paid delegates. BitShares price crashing motivated the creation of Cryptonomex. It's straightforward causality not luck.

Yeah I was making a joke.

Sorry the pedant in me was unable to appreciate the joke because of the failure of the metaphor: in the case of Columbus, if they weren't fortunate enough to have a continent exist in a place none of them knew existed, they would have no other option but to starve at sea; with BitShares we have far more options to cleverly pivot and adjust the plan in order to survive and so we aren't as dependent on dumb luck to succeed. The cryptocurrency business is tough, but not as tough as the ocean.  :)

65
General Discussion / Re: Non-Bitcoinian Geometry
« on: July 24, 2015, 02:15:38 am »
(Kind of like how we ran out of money 1/3 of the way through and luckily ran into paid delegates and cryptonomex funding).

Using the word "luck" implies paid delegates and Cryptonomex somehow just fell out of the sky just in time to save us. Running out of AGS funds motivated the need to implement paid delegates. BitShares price crashing motivated the creation of Cryptonomex. It's straightforward causality not luck.

66
General Discussion / Re: Updates on Compact Confidential Transactions
« on: July 24, 2015, 01:03:19 am »
Hmm, that's pretty awesome.

A few things I am hopeful about with this new approach.

I hope that it is flexible enough to dynamically adjust the number of bits used for the value below the constant max (of 64 bits in the paper) as a way of proving tighter upper bounds. It seems reasonable to me that one could do that by reducing b and thus T, but I am not sure if there are any wider implications by doing that. With something like that and of course splitting off a portion of the value and revealing it (as the minimum bound), it should hopefully be possible to prove with this method that a value is within a particular interval while still being compatible with other blinded values that are using the same protocol.

Although dynamically reducing the interval with this method does not give us any space advantage like with the previous Confidential Transaction algorithm, it is useful for other reasons. Particularly I am imagining a future enhancement to the BitShares protocol that allows two-factor authorities to sign your transactions that have blinded amounts but only on certain conditions like if the unknown amount being transferred is less than some limit (to protect their customers from theft by hackers). So the two-factor authority would require that the transaction they sign include a range proof of the blinded amount withdrawn where the upper bound of the proved interval is less than the limit specified by the authority. This way advanced limits (that depend on the level of authentication provided to the two-factor authority) can be placed on fund transfers without revealing the actual amount to the two-factor authority.

The other thing that I am more optimistic about is increasing the 8 bits of buffer they provide (which only allows up to 255 additions of blinded amounts) to 64 bits of buffer. This would likely require a larger curve order which means it may add a few bytes to the proof (I believe it would only add 14 extra bytes to the existing 258 bytes for each output). While a transaction is unlikely to ever need more than 255 outputs, if we want blinded amounts to work with dividends (as I have outlined in this proposal for the original Confidential Transaction algorithm), we will need the extra space to guarantee no overflows even when multiplying one plain-text amount with another blinded amount (which is an essential part of the blinded dividend algorithm). I haven't thought about the details of how it should be done yet, but I believe it is possible to modify that blinded dividend algorithm to work with this new Compact Confidential Transaction algorithm.


67
Besides, after blinded signatures were figured out (bounty?), I think the rest fell into place.

Just to clarify, confidential transactions (aka blinded amounts) don't really have much to do with the blinded signature algorithm that a bounty was offered for (aka Oleg Andreev's algorithm). The former hides the amount of stake an account holds or transfers. The latter allows for multisig / two-factor authentication where the third-party is unable to tell which transaction they signed. The latter is useful in stealth transactions because it allows the recipient to use multisig without revealing to anyone (including the signing third-party) the metadata of which balance belongs to which user (even if they still couldn't tell the actual amount of the balance since it would be blinded using the confidential transactions algorithm).


No wallet support yet, but that is relatively easy.

We still need libsecp256k1-zkp ported to Javascript for the web wallet. Have you looked into Emscripten for that? I suppose one of the advantages of the Qt lightweight wallet that Nathan is working on over the web wallet is that it can directly link with the C code.

68
Stakeholder Proposals / Dividends with Confidential Transactions
« on: July 20, 2015, 11:15:45 pm »
I think we need a nice dividend feature that only requires the issuer to spend a fixed transaction fee rather than transaction fees that scale with the number of recipients of the dividend. While that feature is not strictly necessary to distribute dividends to a snapshot, it is incredibly useful.

However, if the issuer wants to distribute dividends to assets that are blinded using Confidential Transactions, the snapshot and sharedrop method does not work because the issuer cannot know the amounts of the asset each recipient holds. It is in theory possible to distribute dividends to asset holders who blind the asset amounts (that is what I will discuss in this post) but of course it requires new operations. So I strongly encourage a dividend feature that can be used to sharedrop any asset on to any other asset whether blinded or not.


Blinded amounts work through a Pedersen commitment (C) of the asset amount (v, taken to be non-negative) using a random blinding factor (b), e.g. C1 = b1*G + v1*H, where G and H are two different generators that are points on the elliptic curve and H = z*G for some unknown (to anyone) scalar z. A range proof proves that the integer v is within some specified interval (typically we will use the interval [0, 2^64)) without revealing v or the blinding factor b (the prover needs to know v and b to construct the proof). This is important because we want the fact that the sum of multiple (where multiple means a number less than n/2^64, where n is the order of the curve, which is practically guaranteed given how large the curve order is) input commitments equals the sum of multiple output commitments to imply that the sum of the values of each of the input commitments equals the sum of the values of each of the output commitments. And that won't hold if there are overflows. So the range proofs on each of the output commitments (the input commitments are assumed to already be in range from either an earlier range proof or because their values were publicly known) guarantee that there won't be any overflows.
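To illustrate why balanced values and blinding factors make the commitment sums match, here is a toy TypeScript model that replaces the curve points G and H with stand-in scalars (this keeps the additive algebra but is of course not real cryptography):
Code:
// Toy scalar model of the additive homomorphism of Pedersen commitments.
const n = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141n; // secp256k1 order
const mod = (a: bigint, m: bigint) => ((a % m) + m) % m;
const G = 0x1234567890abcdefn, H = 0xfedcba0987654321n;  // stand-ins for the generators
const commit = (b: bigint, v: bigint) => mod(b * G + v * H, n);

// Two inputs and two outputs whose values sum to the same total (100), with
// blinding factors chosen so that they balance as well:
const [v1, v2, v3, v4] = [40n, 60n, 70n, 30n];
const [b1, b2, b3] = [11n, 22n, 15n];
const b4 = mod(b1 + b2 - b3, n);

const inputSum = mod(commit(b1, v1) + commit(b2, v2), n);
const outputSum = mod(commit(b3, v3) + commit(b4, v4), n);
console.log(inputSum === outputSum);  // true: the commitment sums match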

Any time someone creates a dividend event, a virtual snapshot is taken of all balances in the database of the asset that the dividend is being sharedropped on. Each asset balance is not just one quantity but rather a current quantity and a (potentially empty) map from dividend events to quantities. (Note that when I say quantity here, it can refer to either a plain-text quantity or a Pedersen commitment or even both). The timestamp of the last update to the balance determines how the blockchain should treat any modifications (deposits and withdrawals) to this balance. If the timestamp is not more recent than the most recent dividend event targeting that asset, then the blockchain first needs to add items to the map for each of the dividend events (as the key) that occurred targeting that asset since the timestamp, with the value of each item set to the current quantity. Then the blockchain can modify the current quantity as necessary and of course update the timestamp. Each item in the map technically represents a separate asset (it is the ephemeral asset that is used to withdraw the actual dividend asset that rightfully belongs to the user). These ephemeral assets cannot be traded or transferred; they can only be used to withdraw the dividend. After withdrawing the dividend, that item is removed from the map. The item can also be removed from the map anytime after the dividend event referred to by the key has expired (typically such expired items will be checked and removed the next time the balance needs to be modified, but perhaps they might also be purged globally at every new maintenance interval).
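To make the structure concrete, here is a rough TypeScript sketch of the shape of such a balance record (hypothetical names, not the actual Graphene object definitions):
Code:
// Hypothetical shape of the per-balance record described above.
type Quantity =
  | { kind: "plain"; amount: bigint }              // publicly known amount
  | { kind: "blinded"; commitment: Uint8Array };   // Pedersen commitment

interface BalanceRecord {
  current: Quantity;                    // the live balance of the asset
  // Ephemeral per-dividend-event claims, keyed by dividend event id. Each
  // entry can only be used in a dividend withdraw operation, after which
  // (or after the event expires) it is removed from the map.
  pendingDividends: Map<number, Quantity>;
  lastUpdate: Date;                     // compared against dividend event times
}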

One thing to be careful with is how to treat assets that are in smart contracts rather than held by accounts. I'm not sure what should be done generally. But at least for assets held in open orders, the quantities in the maps of these orders should be merged in with the corresponding map in the account that owns these orders when the orders are cancelled or completely filled. Since the quantities in open orders should be plain-text, it is possible for the blockchain to merge these values through a simple sum without any cooperation needed from the owner (it is actually possible to merge even if the amounts were blinded by just summing the commitments since the owner could calculate the summed blinding factor).

A dividend event records the total aggregate supply (S) of the target asset at the time of the dividend. It also records the total amount of the dividend asset (D) which is "destroyed" as part of the dividend event. It is very simple for a plain-text withdrawer to withdraw their dividend using the corresponding ephemeral asset (of amount a). They simply use that ephemeral asset (and destroy it in the process) in a dividend withdraw operation to claim an amount (a*D)/S of the dividend.

For a blinded withdrawer to withdraw their dividend the process is a little more complicated. They again use/destroy the ephemeral asset (this time it is a commitment C1) in a dividend withdraw operation to claim some dividend with commitment C2. However, to do this a few things are necessary. First, they must also include a third commitment C3 in their operation. They must include range proofs for both C2 and C3. And finally, the blockchain must verify that S*C2 + C3 == D*C1 (mod n). (Alternatively, the withdrawer can just reveal the amount of C3 and in that case only one expensive range proof is necessary, but I am not sure what kind of privacy leaks revealing the value of C3 may have. So, it is probably better to just accept the cost of the two range proofs for the extra assurance on privacy.)

Now I will prove that the withdrawer can generate C2 and C3 satisfying the above requirements given that they know b1 and v1 where C1 = b1*G + v1*H, and that the above requirements make sense for a dividend distribution:
S*C2 + C3 == D*C1 (mod n)
S*(b2*G + v2*H) + (b3*G + v3*H) == D*(b1*G + v1*H) (mod n)
(S*b2 + b3)*G + (S*v2 + v3)*H == (D*b1)*G + (D*v1)*H (mod n)
Therefore, b2 == S^-1*(D*b1 - b3) (mod n),
and  S*v2 + v3 == D*v1 (mod n).

Given that the values S, D, v1, v2, and v3 are all less than 2^64 and n is greater than 2^128, then if we further assume that v3 is less than S, we can conclude that v2 == (D*v1) / S and v3 == (D*v1) % S, which is the way the withdrawer calculates those values. And even if v3 is not less than S, that only hurts the withdrawer because it reduces the value of v2, which is the quantity of the dividend asset the withdrawer gets to withdraw. Also, the withdrawer can choose any random value for blinding factor b3, and from that can calculate b2 = S^-1*(D*b1 - b3) (mod n), where S^-1 is in the interval [0, n) and is the inverse of S such that S^-1 * S == 1 (mod n). With all the blinding factors and values known, it is possible for the withdrawer to calculate the commitments C2 and C3 and their range proofs.
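Here is the withdrawer's side of that computation in the same toy scalar model as the earlier sketch (TypeScript with stand-in scalars instead of curve points, and with the range proofs omitted; all concrete numbers are made up):
Code:
// Toy scalar model: stand-in generators instead of real EC points.
const n = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141n; // secp256k1 order
const mod = (a: bigint, m: bigint) => ((a % m) + m) % m;
const G = 0x1234567890abcdefn, H = 0xfedcba0987654321n;
const commit = (b: bigint, v: bigint) => mod(b * G + v * H, n);

// Modular inverse via the extended Euclidean algorithm.
const modInv = (a: bigint, m: bigint): bigint => {
  let [r0, r1] = [mod(a, m), m];
  let [s0, s1] = [1n, 0n];
  while (r1 !== 0n) {
    const q = r0 / r1;
    [r0, r1] = [r1, r0 - q * r1];
    [s0, s1] = [s1, s0 - q * s1];
  }
  return mod(s0, m);
};

const S = 1000000n;      // total supply of the target asset at the snapshot
const D = 12345n;        // total dividend amount destroyed
const v1 = 2500n;        // withdrawer's blinded balance (known only to them)
const b1 = 0xdeadbeefn;  // its blinding factor
const C1 = commit(b1, v1);

// The withdrawer derives the claimed amount, the remainder, and the factors:
const v2 = (D * v1) / S;                          // dividend actually claimed
const v3 = (D * v1) % S;                          // remainder, v3 < S
const b3 = 0xcafebaben;                           // freely chosen
const b2 = mod(modInv(S, n) * mod(D * b1 - b3, n), n);

const C2 = commit(b2, v2), C3 = commit(b3, v3);
console.log(mod(S * C2 + C3, n) === mod(D * C1, n));  // true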

There will likely be some recipients of the dividend who held the target asset as a blinded amount and who do not bother to go through the withdraw process. Even if they all did, because of rounding errors there would be some amount of the dividend still left unclaimed. If we were not dealing with blinded amounts, it would be possible to know exactly how much was unclaimed by the expiration time and the issuer could take the unclaimed amount back. However, because of the nature of blinded amounts, it is not possible for anyone to know how much of the dividend is left unclaimed. Therefore, we treat distributing the dividend as a two-part process where the full amount of the dividend is first destroyed at the time of the dividend distribution and then later some of that destroyed amount is automatically reissued through the withdraw process. However, because of rounding down and some recipients not bothering to withdraw by the expiration time, that new amount will be less than the amount destroyed. But anyone calculating the supply must assume that the full amount of the dividend is still circulating (even though some unknown amount of it is forever inaccessible).

This is a bit troubling when the dividend asset is a BitAsset. It means that it will be impossible to fully unwind the BitAsset supply to zero. But that was already the case if we assumed some users would lose their private keys. When a particular BitAsset becomes obsolete and naturally unwinds to a very small supply, it won't take too much longer for a forced settlement to eventually occur allowing the supply to finally unwind to zero. In that case, there will be some amount of BTS held by the blockchain that will never be claimed because there aren't any remaining accessible BitAssets to claim them. To prevent blockchain bloating, there can be an expiration time (of several years) for users to claim the collateral asset of their settled BitAssets. After that expiration time, the remaining BTS held in the pool can be destroyed along with all the other metadata that was necessary to store in the database to allow people to claim the collateral.

The dividend feature is also useful for any UIA holders who want to transition from allowing their UIAs to be used with Confidential Transactions (which necessarily prevents them from being able to seize the UIAs) to disallowing it (for example because they now want to be able to seize the assets or keep track of the amounts their customers hold). They would first issue a new UIA to eventually replace their old UIA. It would forbid its use in Confidential Transactions but otherwise be identical. The issuer then does a 1-to-1 dividend of the new UIA dropped on the old UIA. Right after that moment, the old UIA becomes worthless (by decree of the issuer anyway) and all of its value goes to the new UIA. It is probably best for the UIA issuer to halt all trading of the asset prior to the dividend. Holders of the old UIA at the moment of the dividend still have some time (likely more than a year) to withdraw the new UIA before the dividend expires. However, to actually withdraw, they are forced to reveal the blinded amounts of their UIA since they cannot use blinded amounts with the new UIA. After the dividend expires, the issuer can then destroy the old UIA (I assume that fully destroying a UIA is a different permission than seizing it, because even when some people use blinded amounts with the UIA it is still technically possible to destroy the UIA even if it is not technically possible to seize it), which frees up any old blockchain metadata that was only supporting the old UIA. Through this process, the public is also able to know the true supply of the UIA rather than just an upper bound.

Another useful way to use the dividend feature is to change the precision of the asset (or conceptually change the unit of value represented by a single satoshi of the UIA). The issuer can create a new UIA that is identical to the old one except it has different precision (either actually and/or conceptually). Using a mechanism similar to that described in the previous paragraph (although no need to disallow blinded values), the issuer can cause an instantaneous transfer of value from the old UIA to the new UIA, but allow users to transition from old to new over a much longer period of time. A procedure like this would be very useful (especially if automated) for a UIA that inflates at a fast rate (and doesn't deflate) as a mechanism of redistributing the value of the asset from some holders to other holders (e.g. a UIA used as a reputation coin in an Augur-like decentralized oracle).

69
Technical Support / Re: Can we do these things
« on: July 20, 2015, 08:04:49 pm »
For dividends to work, we need to be able to issue a dividend (or sharedrop), with only one transaction fee.

While not essential for a dividend to work [1], I agree it is highly desirable. That is a feature we could vote for with a worker proposal in BitShares 2.0.


Say if I create a UIA and I want to share drop it on Brownie-PTS, would I have to pay a transaction fee for each account I am share dropping on?

Yes.

Surely that's not the case; how did NOTEs issue their UIA? (They are slightly different because they sharedropped on PTS/AGS, but they still have a sharedropped UIA.)

It's the same thing with the NOTEs sharedrop. They paid all those transaction fees.


Edit:
[1] A special dividend operation is not necessary to achieve the effect of a dividend distributed to a target asset that is not blinded. However, a special dividend operation is needed if we want to distribute dividends to assets that we want to use in Confidential transactions. For example, if someone wants to sharedrop a UIA on BTS holders, they are not able to give it to anyone who keeps their BTS amounts blinded for privacy reasons. The BTS holders would be put in an unfortunate position of either giving up their privacy or giving up the sharedrop. The solution to avoid forcing the user to give up either is to create a special dividend operation that also works well with blinded amounts. I discuss such an operation in a proposal I posted here.

70


Looks nice.

But to @svk:

The value, amount, and price quantities in the GUI aren't right aligned and also seem to be missing comma separators for large numbers.

There should be a quick toggle button to switch the Quote and Base assets (thus inverting the prices and switching the Value and Amount fields).

Since there is so much horizontal space in the middle column, I think it would be nice if it also included a Value column so that people could see the value of their order in units of the Base asset.

Also, I assume that this screenshot is older since here I see you were arguing for a symmetric ordering for the buy and sell columns of the user's unfilled orders, which I strongly agree with.

Finally, I don't like the way the timestamp is represented. First, I hope that the representation of the timestamp will adjust to fit with the user's preferences (so this would be an option in their preferences to globally adjust date and time display in the client, as well as other locale-related things like the characters to use for the decimal point vs. the thousands separator). But if you were going to choose a default, I think the one represented in the screenshot isn't a good default. If you look at the time alone, then reading left to right, it goes from more significant units to less significant units (hh:mm:ss), but if you look at the date alone, it goes from less significant units to more significant units (DD:MM:YYYY). Putting it together and reading it from left to right, the significance of the units doesn't follow a nice pattern. So I think the YYYY:MM:DD, hh:mm:ss format makes the most sense as a default since the lexicographical order is equivalent to the chronological order.
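As a tiny illustration of that last point (year-first timestamps sort correctly even as plain strings; the exact separator characters don't matter):
Code:
// Lexicographic sort of year-first timestamps matches chronological order,
// so no date parsing is needed for ordering or comparison.
const stamps = [
  "2015-07-26 03:05:44",
  "2014-12-31 23:59:59",
  "2015-01-01 00:00:00",
];
console.log([...stamps].sort());  // already in chronological order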

71
That is all part of 2.0. I couldn't come up with something BTS stakeholders could vote on other than what you mentioned.
Just because we cannot come up with something like this NOW does not necessarily mean our future selves cannot :)
From what I understand from the code of Graphene, the code can be easily extended to allow for many more "voting" possibilities.

I don't yet know much about FMV, but I can imagine extra tokens that can vote for proposals that are independent of the BitShares Ecosystem.

The way I see it, BitShares 2.0 stake-based voting and Follow My Vote voting are two different ways of voting. The first is not private and weighs votes by the quantity of the asset (at first just BTS but in theory any asset) that is authorized for the particular election. The second is a "1 final vote/ballot per unique token" system where unique tokens are publicly assigned on the blockchain by the election authority prior to the election opening (typically with 1 token assigned per every unique person qualified for the particular election) and it uses advanced cryptography to provide privacy for the voters (disassociates the entities who were assigned the unique voting tokens in the public pre-election phase from the actual ballots submitted by each token after the election opens).

The first kind of voting is much easier and better suited for stake-based voting on proposals (despite the fact that it doesn't give us true privacy but rather just pseudo-anonymity assuming our accounts aren't tied to our real identities). That is good enough for the type of decentralized governance Troglodactyl is talking about. So I would say after the transition to BitShares 2.0 is complete, we don't need to wait for an extra step 2 for Follow My Vote style voting features. We can and should immediately get started with voting on worker proposals for important new features to add to BitShares 2.0 such as (in no particular order): bond markets; option markets; privacy features (confidential transactions aka blinded amounts, stealth transfers with blinded signatures of the type that provide private multisig, transaction types designed to easily facilitate CoinJoin-like functionality, etc.); Follow My Vote style voting features; escrow transactions and basic payment channels; a full micropayment network à la Bitcoin's lightning network; LS-LMSR prediction markets; Augur-like decentralized oracles; and more.

72
Every time a block is missed the undo history grows by 2 so if 34% of the witnesses notice vote censoring they can simply stop producing on one fork and start producing on the other.  This would cause the undo history on the majority fork to grow to its maximum value (1000) and then all block production would stop.     

Yeah it isn't a problem if more than a third of the witnesses are good. But I think the probability of successfully recovering from a censored chain gets bad very quickly when it is less than a third.


Chain reorganizations are VERY BAD.  A reorganization of more than 10 blocks is a disaster because many transactions would get orphaned because they reference a more recent TAPOS block.

That's true. In that case we need an alternative method to allow the good guys to recover from a censored chain.

First, there should be a way to call an early vote tally (and an early maintenance interval change) at critical times. I believe the only reason a maintenance interval isn't every block or even every round is the burden it places on computing the vote updates? I'd appreciate it if you could explain the reasoning behind it, seeing how DPOS 1.0 already updates votes every block (or is it every round?). I'm assuming it wouldn't be a problem to add that burden in rare situations (a witness could even post a security deposit that they forfeit if it turns out the rare situation, which could only be determined by first doing the vote tally, did not actually exist).

Second, I think we should change the longest chain metric to a more general longest accumulated weight metric. Each active witness has a dynamic weight at each point along the blockchain. Initial (and typical) weights could be 1/N where N is the number of active witnesses. But whenever a new active witness is elected, then in their first round (meaning each round in which they are an active witness but also were not an active witness in the previous round) they get a one round boost where their weight becomes b/N for that round (with boosting factor b > 1). In the next round their weight goes back down to 1/N.

If we set an appropriate boosting factor parameter, and allow an active witness to call an early vote tally (potentially with a security deposit, but I think the threat of being voted out for crying wolf is enough motivation for it to not be abused) which goes into effect (as a new round with the new set of witnesses shuffled) immediately in the next block, then we can allow even a single honest witness to recover a censored chain assuming they can get enough legitimate unexpired vote transactions to vote the censoring witnesses out.

Edit: In fact, we could go further to allow the system to recover even if all active witnesses were compromised. Every round the system can randomly select a single standby witness where the probability of selecting the witness depends on their approval. This selected witness would have the opportunity to produce a single block at the beginning of every round if they also pay a small fee in that block (so not only do they not get paid to produce the block they have to actually pay for the privilege). Their weight for that block would be boosted to b'/N with b' > 1 to give their block an advantage over the block by the regular active witness scheduled for that same slot. The fee is set so that it is normally unlikely for any standby witness to take "advantage" of this opportunity to produce a block (and therefore avoid situations where there is normally a couple of blocks that need to be rewound every round or alternatively where the first active witness of each round gets skipped) and to compensate the network for the cost of having to tally votes (see next sentence). However, if the block by a standby witness causes a significant fraction (doesn't have to be majority but it should probably be larger than 25%) of active witnesses to be voted out, then an early maintenance interval is called, and the standby witness that produced that block will get back their fee plus some interest (perhaps considerable interest). So even if the standby witness scheduled for that round isn't going to get voted in himself, he is still economically motivated to do the right thing which helps the network recover.
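To give a rough feel for the numbers behind the accumulated weight idea above, here is a small TypeScript sketch (the boosting factor and block counts are made up purely for illustration):
Code:
const N = 100;     // number of active witness slots
const boost = 3;   // boosting factor b > 1 for a witness's first round (made up)

// Weight a fork accumulates in one round, given how many blocks were produced
// and how many of those producers are in their first round as active witnesses.
const roundWeight = (blocksProduced: number, firstRoundBlocks: number): number =>
  (firstRoundBlocks * boost + (blocksProduced - firstRoundBlocks)) / N;

console.log(roundWeight(98, 0));    // censoring fork, 98 incumbents: 0.98
console.log(roundWeight(100, 98));  // recovering fork right after the election: 2.96
console.log(roundWeight(100, 0));   // recovering fork in later rounds: 1.00
So the boost only matters in the round right after an election, which is exactly when the recovering fork needs to catch up.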

73
The number of blocks of "undo history" that we maintain is now dynamic based upon witness participation.

https://github.com/cryptonomex/graphene/issues/160

Doesn't a minimum undo history of 10 mean that if a good witness wants to correct a chain that is controlled by censoring witnesses, they need to make sure that they can become the longest chain with the new set of elected witnesses in less than 10 blocks prior to the fork point? This seems way too limiting.

I haven't yet properly done the analysis with this new dynamic undo history size change, but from some simple initial examination, I think the minimum undo history size should also be dynamic and be proportional to the number of active witnesses. I believe a minimum undo history size of 5 times the number of active witnesses should allow even a 10% minority of honest active witnesses to recover a censored blockchain from the majority bad witnesses with high probability of success, by strategically collecting (hopefully still unexpired) vote change transactions and initiating the fork in the round just prior to the end of a maintenance interval so that the newly voted-in honest witnesses can become active immediately in the following round.

74
As far as understanding what happens, nodes ONLY get stuck if they travel down the wrong fork for 1000 blocks otherwise they will resolve the issue themselves.    That said it should be impossible to create a longer fork that is 1000 blocks different from the main fork without over 51% witness collusion.   

Great, this was very useful information to help me understand.

Now I am trying to understand how the client even evaluates other potential blockchains. Does it not need to consider every possible blockchain that forks off from a block in the last 1000 blocks from its current head block? Does it do this concurrently while syncing? Would this not potentially take a considerable amount of processing and memory since there is nothing to prevent an unlimited number of potential (not necessarily longer length) forks (especially if the offending witnesses don't care that they double sign)?

I mean if a client is syncing from the past along a particular blockchain (with many more blocks still to sync) and a new block appears that forks off a block 10 blocks behind the current head block that the client is on, shouldn't it evaluate that block, since it could potentially be the start of another, much longer blockchain than the one it is currently on? Imagine a nearly fractal-like tree of blocks that has a short length (so all but one branch are less than 1000 blocks from the trunk). In order for the client to find the one branch that is actually the longer chain, doesn't it need to evaluate all of those branches? A branch that has low (e.g. 60%) witness participation early on might end up consistently maintaining 100% participation later in its branch, while a (fake) branch with 90% witness participation in the beginning of the branch (during the same time the real branch was at 60%) may never exceed 90% or may even stop dead soon after. So I am not sure how the client could dismiss branches early.

75
The network is "safe" against both of those attacks so long as 5% of the witnesses do not collude.  That 5% would be enough to allow the minority chain that had support of the stakeholders to elect new witnesses and eventually overtake the attackers chain which would be stuck at 95% participation assuming that in a healthy network participation averages over 95%.

I think this is a different scenario from the one I was discussing, but an interesting one nonetheless. I don't understand how the protocol allows a minority chain to recover. I understand that a minority chain could include transactions that change enough votes to vote out the other 95% of bad witnesses by the next maintenance interval, but how exactly do clients decide to switch to that chain given that it is a minority chain? What if there were multiple minority chains that elected a different set of witnesses? How do clients choose which one to adopt?

I also want to add to these questions a bit to hopefully make it more clear what I am asking here.

Let's say there are N = 100 witnesses all producing blocks and everything is good. A new round begins (which is also the second to last round of this maintenance interval) and it is ordered according to: W1, W2, ..., W100. When it is W2's turn they produce a block that has a transaction that votes out witnesses W1 and W3 to W100, and replaces them with W101 to W198 (this transaction is the deciding vote, meaning if included it will kick those witnesses out and replace them with W101 to W198). When it is W3's turn to produce a block, they build off of W1's block, not W2's block, but nevertheless include that transaction that votes out the other witnesses. When it is W4's turn to produce a block, they build off of W1's block, not that of W2 or W3, and they do not include that vote changing transaction. W5 builds off of W4's block and doesn't include the vote changing transaction as well. And similarly for W6 to W100, thus ending that round with a witness participation of 98% (since the blocks produced by W2 and W3 were not included in the majority chain and therefore considered missing).

Now at the start of the next round (which is the last round of this maintenance interval), a witness shuffle takes place in the majority chain using the accumulated random number (which I guess did not include W2's and W3's random numbers since they were not revealed in the main chain). W2 and W3 are ordered somewhere in the middle of that round and will not produce blocks that round; even if they did, those blocks would likely be skipped over by the other witnesses. The other witnesses will also not include the vote change transaction.

What about the other orphan forks by W2 and W3? What happens at the end of the previous round, when 99% of their witnesses were missing blocks? Is there a witness shuffle? If so, how does the algorithm work? In fact, more generally, how does the random number commitment, reveal, and accumulation process work in light of missing blocks in rounds or witnesses being voted in and out at maintenance interval transitions? I presume that however the witnesses are ordered in the new round, both W2 and W3 will have an opportunity to produce a block in each chain. So let's say W2 builds a block off their own block in their chain and W3 builds a block off their own block in their own chain.

Let me first get some notation out of the way.

B(W, t) means a block signed by witness W with timestamp t.
The block interval is Δt.

B1 ← B2 is a blockchain subsequence that means block B2 builds off block B1.
B1 ← B2 = B(W3, t) ← B3 denotes a blockchain subsequence of three blocks where the middle block is given an alias of B2 and has a timestamp t and was signed by W3.
B(n) denotes a blockchain starting from the genesis block and ending with a block with block number n.

P(t : time, i : bounded_int([0, size(L)!)), L : [optional(witness_t)]) defines a particular blockchain subsequence given by the following pseudocode:
Code:
for all types I,
fn permuted<I>(acc : [I], seq : [I], j : uint) -> [I];
// Deterministically permutes seq using index j according to whatever blockchain protocol rules require.
// Note that with no restriction on the permutation the type of j should actually be bounded_int( [0, size(seq)!) ).
// However, if the blockchain protocol rules are more restrictive, then the bounds on j need to be tighter.

// For completeness here is one permutation implementation that allows any permutation of indices (no further restrictions like those needed for DPOS 2.0).
fn permuted<I>(acc : [I], seq : [I], j : uint ) -> [I]
 {
     ASSERT( j < size(seq)! );
     match size(seq) {
          0 => acc,
          m =>  let j1 = j % m in
                    let j2 = j / m in
                    let swapped_seq = swap(seq, 0, j1) in // let swapped_seq equal seq except with the values at the index 0 and j1 swapped
                    permuted<I>(append(acc, head(swapped_seq)), tail(swapped_seq), j2)
     }
}

fn helper(start : time, acc : [(time, witness_t)], seq : [optional(witness_t)]) -> [(time, witness_t)] {
     match seq {
          empty_list => acc,
          prepend(head, tail) => match head {
               Some(w : witness_t) => helper(start + Δt, append(acc, (start, w)), tail),
               None  => helper(start + Δt, acc, tail)  // a missed slot still advances the next timestamp by Δt
          }
     }
}

datatype block = Block(witness_t, time)
datatype block_subsequence = Rest | Chain(block_subsequence, block)

fn helper2(L' : [(time, witness_t)], acc : block_subsequence) -> block_subsequence {
     match L' {
          prepend((head_t, head_w), tail) => helper2(tail, Chain(acc, Block(head_w, head_t))),
          _ => acc
     }
}

fn P(t : time, i : bounded_int( [0, size(L)!) ), L : [optional(witness_t)]) -> block_subsequence {
     let L' = helper(t, [], permuted<optional(witness_t)>([], L, i)) in
     helper2(L', Rest)
}
L is a sequence of witnesses, e.g. [Some(W1), Some(W2), None, Some(W5), ...], that defines the witnesses that will be participating in block signing and how many are missing. As a shorthand [Some(W1), Some(W2), None, Some(W5), ...] can be written as {W1, W2, _, W5, ...}. L' is a sequence of (time, witness_t) tuples, e.g. [(t, W2), (t+2Δt, W5), (t+3Δt, W1), ...], that defines the blocks of the blockchain subsequence returned by P, e.g. B(W2, t) ← B(W5, t+2Δt) ← B(W1, t+3Δt) ← ..., in the same order. Note that the timestamps of the blocks of the resulting blockchain subsequence will always be within the interval [t, t+(size(L)-1)*Δt].
Examples:
B(W1, t) ← P(t+Δt, 0, {W2, W3}) is equivalent to the blockchain subsequence B(W1, t) ← B(W2, t+Δt) ← B(W3, t+2Δt).
B(W1, t) ← P(t+Δt, 1, {W2, W3}) is equivalent to the blockchain subsequence B(W1, t) ← B(W3, t+Δt) ← B(W2, t+2Δt).
B(W1, t) ← P(t+Δt, 6, {W2, _, W3, W4}) is equivalent to the blockchain subsequence B(W1, t) ← B(W3, t+Δt) ← B(W2, t+2Δt) ← B(W4, t+4Δt).
B(W1, t) ← P(t+Δt, 2, {W2, _ x 5, W3}), which is the shorthand for B(W1, t) ← P(t+Δt, 6, {W2, _, _, _, _, _, W3}), is equivalent to the blockchain subsequence B(W1, t) ← B(W2, t+3Δt) ← B(W3, t+7Δt).
 
Okay, enough notation.

So by the end of this round (the last round of the maintenance interval), just after time t+199Δt, the three blockchains look like this:

Chain 1 (majority chain with length n + 196): B_base ← B_f = B(W1, t) ← B(W4, t+3Δt) ← B(W5, t+4Δt) ← ... ← B_{l,1} = B(W100, t+99Δt) ← P(t+100Δt, i_{1,1}, {W1, _, _, W4, W5, ..., W100}),
where i_{1,1} is some index derived from the accumulated random value as of block B_{l,1}, and B_base = B(n) for some positive integer n.

Chain 2 (minority chain with length n + 3): B_base ← B(W1, t) ← B_{l,2} = B(W2, t+Δt) ← B_{v,2} = B(W2, t+k1Δt),
where 100 ≤ k1 ≤ 199 depending on the witness shuffling algorithm determined just after time t+99Δt using information by block B_{l,2}.

Chain 3 (minority chain with length n + 3): B_base ← B(W1, t) ← B_{l,3} = B(W3, t+2Δt) ← B_{v,3} = B(W3, t+k2Δt),
where 100 ≤ k2 ≤ 199 depending on the witness shuffling algorithm determined just after time t+99Δt using information by block B_{l,3}.

The common fork point of all three chains is block B_f which has block number n+1.

By time t+200Δt, a new maintenance interval has started and therefore the votes have been updated and some witnesses might have been removed and replaced by others. This happens in each of the three chains despite the fact that chain 2 and chain 3 had only 1 block in the previous round, correct? Let's assume that no change of witnesses occur in chain 1. So the same witness W1 to W100 are still active in that chain, there is just a regular reordering since it is the start of a new round.

But in chains 2 and 3, we can imagine enough votes were accumulated in B_{v,2} and B_{v,3}, respectively, that witnesses W1 and W4 to W100 were voted out and replaced by witnesses W101 to W198. Therefore, I assume that in the next round, a witness shuffling will occur for the set {W2, W3, W101, W102, ..., W198} in both chain 2 and chain 3 (but each with their own random number used for the shuffling), correct? If so, the three chains another 100 blocks later could end up something like this:

Chain 1 (majority chain with length n + 294): B_base ← B(W1, t) ← B(W4, t+3Δt) ← B(W5, t+4Δt) ← ... ← B(W100, t+99Δt) ← P(t+100Δt, i_{1,1}, {W1, _, _, W4, W5, ..., W100}) ← P(t+200Δt, i_{1,2}, {W1, _, _, W4, W5, ..., W100}),
where i_{1,2} is some index that gives the "appropriate" permutation to satisfy the blockchain protocol rules for witness shuffling at this point in the blockchain.

Chain 2 (majority chain, at least for this past round since it has 91% witness participation, with length n + 94): B_base ← B(W1, t) ← B_{l,2} = B(W2, t+Δt) ← B(W2, t+k1Δt) ← P(t+200Δt, i_{2,2}, {_ x 9, W2, W101, W102, ..., W190}).

Chain 3 (minority chain, since it only has 9% witness participation in this last round, with length n + 12): B_base ← B(W1, t) ← B_{l,3} = B(W3, t+2Δt) ← B(W3, t+k2Δt) ← P(t+200Δt, i_{3,2}, {_ x 91, W3, W191, W192, ..., W198}).

Here I assumed that nearly all (90 out of 98) of the newly elected witnesses chose to build on chain 2, while the rest (8 out of 98) decided to support chain 3. Notice that none of the 98 could build on chain 1 because they were not active witnesses in that chain (the transactions that got them elected in were not included in chain 1).

Also notice that someone looking at just the past round would think that chain 2 is doing pretty decently since 91% of its active witnesses produced blocks in the last round (despite the fact that only 1% of witnesses produced blocks in the prior round). However, going by the longest chain metric neither chain 2 nor chain 3 win compared to chain 1, which is clearly the longest chain. Therefore, all full nodes following the protocol still stay on chain 1. If we assume that the remaining witnesses that are wasting their time on chain 3 decide to finally join chain 2, then chain 2 can theoretically achieve 100% witness participation from that point forward.

If witnesses W2, W3, W191, ..., W198 continue to always produce blocks for every round thereafter, and if witnesses W1, W4, ..., W100 continue to always produce blocks for every round as well, then the block height of chain 1 will increase by 98 each round while the block height of chain 2 will increase by 100 each round. So after r rounds, chain 1 has n1 = n + 294 + 98*r blocks and chain 2 has n2 = n + 94 + 100*r blocks. We can see that n1 equals n2 at r = 100, or after 100 more rounds (or 10,000 more blocks in chain 2 or just before time t + 10199Δt) chain 1 and chain 2 are of equal length. In the next round afterwards, chain 2 will become the longest chain. However, by this point the common fork point between these two chains (which remember is at block B_f and block number n+1) is way more than 1000 blocks old. In fact it is at least 10094 blocks old. So none of the live full nodes will switch over to chain 2, correct?
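As a quick check of the arithmetic above (using the block counts from this scenario):
Code:
// Chain 1 adds 98 blocks per round, chain 2 adds 100, starting from lengths
// n + 294 and n + 94 respectively, so chain 2 needs 100 rounds to catch up.
const len1 = (r: number) => 294 + 98 * r;   // offsets from n
const len2 = (r: number) => 94 + 100 * r;
for (let r = 99; r <= 101; r++) {
  console.log(r, len1(r) - len2(r));        // 2, 0, -2: crossover at r = 100
}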

Now some other user, Alice, runs their BitShares full node client and connects to the network. The last block they had synced in their previous session of using the client just prior to shutting down was block B_f which has block number n+1 and timestamp t+Δt. Now it is time t+10201Δt (10200 seconds, or 2.83 hours, later with Δt = 1 second) and Alice's client needs to sync the last nearly 3 hours worth of blocks. From Alice's perspective, the longest chain is chain 2 not chain 1. Her client was not online to see what happened between time t+Δt and t+10201Δt. The client has no way of knowing that everyone has decided to stick with chain 1 because the fork point was too old to trigger a blockchain reorganization. So after getting the two candidate chains, does Alice's client conclude that it must get stuck at block B_f and warn the user? At what time in the syncing process does it figure this out? Given the two chains, 1 and 2, stored locally that are in the state as I described it as of time t+10199Δt, is the decision Alice's client makes deterministic or does it depend on when the client receives the corresponding blocks as it is syncing? Does the client keep going back and forth between chain 1 and 2 (using its undo history to constantly rewind back to B_f) as each chain alternates in winning over the other in longest length like a close horse race as blocks from each chain come in through the peer to peer network?

My understanding of how this should work is that the client should be designed to get stuck unless it is able to sync to a head block whose valid timestamp gets as close as possible (relative to the time reported by NTP) without exceeding it, AND where the common fork point of any competing valid chain that forks off after the block the client had last synced by the end of its previous session is not so old that it prevents reorganization. I believe that is the only case where it makes sense to keep moving forward with the longest chain with no error; any other scenario requires the client to get stuck and warn the user (would you say this characterization of the problem is correct?).


Putting those questions aside for now, I think it is worth looking at a simpler scenario of just two chains and to try to determine what it takes in terms of honest witnesses and in average witness participation for the "good" chain to win in longest length prior to the chain reorganization limit being reached. So in this new scenario we will suppose that there are G good witnesses (W_{orig,1}, ..., W_{orig,G}) that start on the new chain as soon as they have enough votes to vote out the bad witnesses (W_{bad,1}, ..., W_{bad,N-G}), presumably because they are censoring transactions, and no one wastes time on any other minority chains. The newly elected witnesses (W_{new,1}, ..., W_{new,N-G}) along with the original good witnesses then produce blocks each round with an average witness participation rate f (1 - G/N < f ≤ 1). In that case we might have the following chains early in the process (just before time t + 2*N*Δt):

Bad chain (length n + 2*(N-G)): B_base ← P(t, i_{b,1}, {_ x G, W_{bad,1}, ..., W_{bad,N-G}}) ← P(t + N*Δt, i_{b,2}, {_ x G, W_{bad,1}, ..., W_{bad,N-G}}).

Good chain (length n + k + G + N): B_base ← B(W_{bad,j1}, t) ← B(W_{bad,j2}, t+Δt) ← ... ← B_fork = B(W_{bad,jk}, t+(k-1)*Δt) ← P(t+k*Δt, i_{g,1}, {_ x (N-G-k), W_{orig,1}, ..., W_{orig,G}}) ← P(t + N*Δt, i_{g,2}, {W_{orig,1}, ..., W_{orig,G}, W_{new,1}, ..., W_{new,N-G}}),
where 0 ≤ k ≤ N - G.

At this point the bad chain is the longer chain as long as N - k - 3G > 0. If the good witnesses get lucky (k = N - G), then the good chain will already be the longer chain by this point in time. While we cannot tell what k will be (since it depends on the witness shuffling and thus on the random values), we can calculate the probability distribution p(k) assuming uniformly random permutation. The distribution is p(k) = (G * (N-G)! * (N-k-1)! ) / ( (N-G-k)! * N! ) for 0 ≤ k ≤ N - G, and so the expected value for k should be ∑ k*p(k) from k = 0 to (N-G), which according to WolframAlpha is (N-G)/(G+1). So substituting that expression for k in the inequality and manipulating the terms a little gives the inequality 3*G - N  + 4 < 0, or N > T(G) where T(G) = 3*G + 4. When N > T(G), then at this point in time the bad chain will still be the longer chain, but if N < T(G), then at this point the good chain will have already become the longer chain. With N = 100, T(G) first becomes greater than N at G = 33.
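For what it's worth, here is a quick numeric check of the distribution and expected value above (TypeScript, purely illustrative):
Code:
// E[k] for the distribution p(k) given above should equal (N - G) / (G + 1).
function expectedK(N: number, G: number): number {
  let expectation = 0;
  for (let k = 0; k <= N - G; k++) {
    // p(k) = G * (N-G)! * (N-k-1)! / ((N-G-k)! * N!), computed as a product
    // of small ratios to avoid huge factorials.
    let p = G / (N - k);
    for (let i = 0; i < k; i++) p *= (N - G - i) / (N - i);
    expectation += k * p;
  }
  return expectation;
}

console.log(expectedK(100, 33));       // ~1.9706
console.log((100 - 33) / (33 + 1));    // 1.9706 as well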

Despite estimating the expected value for k, I will proceed with the worst-case assumption that k = 0. Assuming the bad chain is still the longer chain, we can see what happens after several rounds. We should expect the bad chain to add (N-G) blocks each round, but we will only assume the good chain can add f*N blocks each round. So after r additional rounds (just before time t + (2+r)*N*Δt) the length of the bad chain is n + (2+r)*(N-G), and the length of the good chain is at least n + G + N + r*f*N. The difference between the length of the good chain and the length of the bad chain after r rounds is approximately D(r) = 3*G - N + r*(G - (1-f)*N). We can see that D(r) will be positive under the constraints (1-f)*N < G < N/2 and 0.5 < f < 1 when r > (g^2 - 2*f*g + 2*f - 1)/(f+g-1) where g = G/N. We need D(r) to become positive (so that the good chain becomes the longest chain) before the fork block, B_fork at block number n+k, becomes older than R = 1000 blocks. This means that as soon as (2+r)*(N-G) > R, which is at about r = R/(N-G) - 2, D(r) must be positive, which thus requires that r = (R/N)/(1-g) - 2 > (g^2 - 2*f*g + 2*f - 1)/(f+g-1), or R > N*(1-g)*(2 + (g^2 - 2*f*g + 2*f - 1)/(f+g-1)). This can also be rewritten as f > (g^2 - 2*(1-g) + R' - 1) / (R'/(1-g) - 2*(2-g)), where R' = R/N, and assuming R' > 2*(1-g)*(2-g) which given the previous inequality holds true for f > 2/3. So with f > 2/3, which means g < 1/3 (for g > 1/3 the good chain wins without any extra rounds being necessary), and the constraint that 1-g < f < 1, we find that R' > (-g^3 + g^2 - g + 1)/g.

So with R = 1000 and N = 100 (thus R' = 10), the minimum value of g necessary to guarantee the good chain will become the longer chain prior to the chain reorganization limit being reached is approximately 0.0916. This means at least 10 good witnesses out of the total 100 witnesses are needed. And the minimum value of f needed as a function of g is given by this plot. With G = 10, the average witness participation needs to be at least 98.62%. With G = 15, we get a more sensible value of 90.8% as the minimum witness participation needed to ensure the good chain becomes the longest chain prior to the chain reorganization limit being reached.
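For anyone who wants to reproduce those numbers, here is a small script that just evaluates the final inequalities above, taking the formulas as given (TypeScript, illustrative only):
Code:
const Rprime = 1000 / 100;   // R' = R / N

// Smallest g satisfying R' > (1 - g + g^2 - g^3) / g, found by bisection.
const rhs = (g: number) => (1 - g + g ** 2 - g ** 3) / g;
let lo = 0.001, hi = 0.5;
for (let i = 0; i < 60; i++) {
  const mid = (lo + hi) / 2;
  if (rhs(mid) > Rprime) lo = mid; else hi = mid;
}
console.log(lo.toFixed(4));            // ~0.0916, i.e. about 10 good witnesses out of 100

// Minimum average participation f for a given g, from the inequality above.
const minF = (g: number) =>
  (g ** 2 - 2 * (1 - g) + Rprime - 1) / (Rprime / (1 - g) - 2 * (2 - g));
console.log(minF(0.10).toFixed(4));    // ~0.9862
console.log(minF(0.15).toFixed(4));    // ~0.9080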
