Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - arhag

Stakeholder Proposals / Dividends with Confidential Transactions
« on: July 20, 2015, 11:15:45 pm »
I think we need a nice dividend feature that only requires the issuer to spend a fixed transaction fee rather than transaction fees that scale with the number of recipients of the dividend. While that feature is not strictly necessary to distribute dividends to a snapshot, it is incredibly useful.

However, if the issuer wants to distribute dividends to assets that are blinded using Confidential Transactions, the snapshot and sharedrop method does not work because the issuer cannot know the amounts of the asset each recipient holds. It is in theory possible to distribute dividends to asset holders who blind the asset amounts (that is what I will discuss in this post) but of course it requires new operations. So I strongly encourage a dividend feature that can be used to sharedrop any asset on to any other asset whether blinded or not.

Blinded amounts work through a Pedersen commitment (C) of the asset amount (v, taken to be non-negative) using a random blinding factor (b), e.g. C1 = b1*G + v1*H, where G and H are two different generators that are points on the elliptic curve and H = z*G for some scalar z unknown to anyone. A range proof proves that the integer v is within some specified interval (typically we will use the interval [0, 2^64)) without revealing v or the blinding factor b (the prover needs to know v and b to construct the proof). This is important because we want the fact that the sum of multiple (where "multiple" means a number less than n/2^64, where n is the order of the curve; a bound that is practically guaranteed given how large the curve order is) input commitments equals the sum of multiple output commitments to imply that the sum of the values of each of the input commitments equals the sum of the values of each of the output commitments. And that won't hold if there are overflows. So the range proofs on each of the output commitments (the input commitments are assumed to already be in range from either an earlier range proof or because their values were publicly known) guarantee that there won't be any overflows.
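As a sketch of the additive homomorphism described above, here is a toy Pedersen commitment in a multiplicative group mod p (written multiplicatively, so b*G + v*H becomes g^b * h^v). All parameters here are illustrative assumptions; this construction is not secure, and real implementations use an elliptic curve group such as secp256k1.

```python
# Toy Pedersen commitments in a multiplicative group mod p.
# NOT secure; for illustration of the homomorphism only.
p = 2**127 - 1          # a Mersenne prime (toy modulus, assumed)
g = 3                   # generator (assumed)
z = 123456789           # in a real system z must be unknown to everyone
h = pow(g, z, p)        # second generator: H = z*G, written multiplicatively

def commit(b, v):
    """C = b*G + v*H, i.e. g^b * h^v mod p in multiplicative notation."""
    return (pow(g, b, p) * pow(h, v, p)) % p

# Homomorphism: the "sum" (product, in this notation) of two commitments
# commits to the sums of the blinding factors and of the values.
C1 = commit(11, 5)
C2 = commit(42, 7)
assert (C1 * C2) % p == commit(11 + 42, 5 + 7)
```

This is exactly why the sum-of-inputs-equals-sum-of-outputs check works, and why range proofs are needed: the exponent arithmetic wraps around mod the group order, so without a bound on each value an overflow could make unequal value sums produce equal commitment sums.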

Any time someone creates a dividend event, a virtual snapshot is taken of all balances in the database of the asset that the dividend is being sharedropped on. Each asset balance is not just one quantity but rather a current quantity and a (potentially empty) map from dividend events to quantities. (Note that when I say quantity here, it can refer to either a plain-text quantity or a Pedersen commitment, or even both.) The timestamp of the last update to the balance determines how the blockchain should treat any modifications (deposits and withdrawals) to this balance. If the timestamp is not more recent than the most recent dividend event targeting that asset, then the blockchain first needs to add items to the map for each of the dividend events (as the key) that occurred targeting that asset since the timestamp, with the value of each item set to the current quantity. Then the blockchain can modify the current quantity as necessary and of course update the timestamp. Each item in the map technically represents a separate asset (it is the ephemeral asset that is used to withdraw the actual dividend asset that rightfully belongs to the user). These ephemeral assets cannot be traded or transferred; they can only be used to withdraw the dividend. After withdrawing the dividend, that item is removed from the map. The item can also be removed from the map anytime after the dividend event referred to by the key has expired (typically such expired items will be checked and removed the next time the balance needs to be modified, but perhaps they might also be purged globally at every new maintenance interval).
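The lazy snapshotting described above might look something like the following minimal sketch. All class and field names here are illustrative assumptions, not actual Graphene structures, and only the plain-text-quantity case is modeled.

```python
# Minimal sketch of per-balance dividend bookkeeping (names are assumptions).
class Balance:
    def __init__(self, quantity, timestamp):
        self.quantity = quantity     # current plain-text quantity
        self.pending = {}            # dividend_event_id -> snapshot quantity
        self.timestamp = timestamp   # time of last modification

    def touch(self, now, dividend_events):
        """Before any deposit/withdrawal, lazily snapshot this balance for
        every dividend event that occurred since the last update.
        dividend_events is a list of (event_id, event_time) pairs."""
        for event_id, event_time in dividend_events:
            if self.timestamp < event_time <= now:
                self.pending[event_id] = self.quantity
        self.timestamp = now

b = Balance(quantity=500, timestamp=10)
b.touch(now=20, dividend_events=[(1, 15)])   # dividend event 1 happened at t=15
assert b.pending == {1: 500}                 # snapshot taken lazily on first touch
```

The point of the lazy approach is that the dividend event itself touches no balances; each balance pays the snapshot cost only when it is next modified.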

One thing to be careful with is how to treat assets that are in smart contracts rather than held by accounts. I'm not sure what should be done generally. But at least for assets held in open orders, the quantities in the maps of these orders should be merged in with the corresponding map in the account that owns these orders when the orders are cancelled or completely filled. Since the quantities in open orders should be plain-text, it is possible for the blockchain to merge these values through a simple sum without any cooperation needed from the owner (it is actually possible to merge even if the amounts were blinded, by just summing the commitments, since the owner could calculate the summed blinding factor).

A dividend event records the total aggregate supply (S) of the target asset at the time of the dividend. It also records the total amount of the dividend asset (D) which is "destroyed" as part of the dividend event. It is very simple for a plain-text withdrawer to withdraw their dividend using the corresponding ephemeral asset (of amount a). They simply use that ephemeral asset (destroying it in the process) in a dividend withdraw operation to claim an amount (a*D)/S of the dividend.

For a blinded withdrawer to withdraw their dividend the process is a little more complicated. They again use/destroy the ephemeral asset (this time it is a commitment C1) in a dividend withdraw operation to claim some dividend with commitment C2. However, to do this a few things are necessary. First, they must also include a third commitment C3 in their operation. They must include range proofs for both C2 and C3. And finally, the blockchain must verify that S*C2 + C3 == D*C1 (mod n). (Alternatively, the withdrawer can just reveal the amount of C3 and in that case only one expensive range proof is necessary, but I am not sure what kind of privacy leaks revealing the value of C3 may have. So, it is probably better to just accept the cost of the two range proofs for the extra assurance on privacy.)

Now I will prove that the withdrawer can generate C2 and C3 satisfying the above requirements given that they know b1 and v1 where C1 = b1*G + v1*H, and that the above requirements make sense for a dividend distribution:
S*C2 + C3 == D*C1 (mod n)
S*(b2*G + v2*H) + (b3*G + v3*H) == D*(b1*G + v1*H) (mod n)
(S*b2 + b3)*G + (S*v2 + v3)*H == (D*b1)*G + (D*v1)*H (mod n)
Therefore, b2 == S^-1*(D*b1 - b3) (mod n),
and S*v2 + v3 == D*v1 (mod n).

Given that the values S, D, v1, v2, and v3 are all less than 2^64 and n is greater than 2^128, then if we further assume that v3 is less than S, we can conclude that v2 == (D*v1) / S (integer division) and v3 == (D*v1) % S, which is how the withdrawer calculates those values. And even if v3 is not less than S, that only hurts the withdrawer because it reduces the value of v2, which is the quantity of the dividend asset the withdrawer gets to withdraw. Also, the withdrawer can choose any random value for the blinding factor b3, and from that can calculate b2 = S^-1*(D*b1 - b3) (mod n), where S^-1 is in the interval [0, n) and is the inverse of S such that S^-1 * S == 1 (mod n). With all the blinding factors and values known, it is possible for the withdrawer to calculate the commitments C2 and C3 and their range proofs.
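The whole round trip can be checked with toy Pedersen commitments in a multiplicative group mod p (written multiplicatively, so the check S*C2 + C3 == D*C1 becomes C2^S * C3 == C1^D). Everything here is an illustrative assumption (toy group, made-up amounts), not the actual Graphene implementation, and range proofs are omitted.

```python
# Sketch of the blinded dividend withdrawal math (toy parameters, NOT secure).
p = 2**127 - 1                 # toy group modulus (assumed)
n = p - 1                      # exponent arithmetic works mod the group order
g = 3
h = pow(g, 123456789, p)       # H = z*G; z must be unknown in a real system

def commit(b, v):
    return (pow(g, b, p) * pow(h, v, p)) % p

S, D = 101, 1000               # snapshot supply and dividend amount (public)
b1, v1 = 987654321, 37         # withdrawer's blinding factor and balance
C1 = commit(b1, v1)            # the ephemeral-asset commitment

# Withdrawer's side: compute v2, v3, b2, b3 as derived above.
v2, v3 = (D * v1) // S, (D * v1) % S
b3 = 555555                                  # any random blinding factor
b2 = (pow(S, -1, n) * (D * b1 - b3)) % n     # b2 = S^-1 * (D*b1 - b3) mod n
C2, C3 = commit(b2, v2), commit(b3, v3)

# Blockchain's side: verify S*C2 + C3 == D*C1 (here: C2^S * C3 == C1^D).
assert (pow(C2, S, p) * C3) % p == pow(C1, D, p)
```

Note that the check passes without the verifier ever learning v1, v2, or v3; the range proofs (omitted here) are what prevent a withdrawer from choosing out-of-range values that exploit the mod-n wraparound.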

There will likely be some recipients of the dividend who held the target asset as a blinded amount and who do not bother to go through the withdraw process. Even if they all did, because of rounding errors there would be some amount of the dividend still left unclaimed. If we were not dealing with blinded amounts, it would be possible to know exactly how much was unclaimed by the expiration time, and the issuer could take the unclaimed amount back. However, because of the nature of blinded amounts, it is not possible for anyone to know how much of the dividend is left unclaimed. Therefore, we treat distributing the dividend as a two-part process where the full amount of the dividend is first destroyed at the time of the dividend distribution and then later some of that destroyed amount is automatically reissued through the withdraw process. However, because of rounding down and some recipients not bothering to withdraw by the expiration time, that new amount will be less than the amount destroyed. But anyone calculating the supply must assume that the full amount of the dividend is still circulating (even though some unknown amount of it is forever inaccessible).

This is a bit troubling when the dividend asset is a BitAsset. It means that it will be impossible to fully unwind the BitAsset supply to zero. But that was already the case if we assumed some users would lose their private keys. When a particular BitAsset becomes obsolete and naturally unwinds to a very small supply, it won't take too much longer for a forced settlement to eventually occur allowing the supply to finally unwind to zero. In that case, there will be some amount of BTS held by the blockchain that will never be claimed because there aren't any remaining accessible BitAssets to claim them. To prevent blockchain bloating, there can be an expiration time (of several years) for users to claim the collateral asset of their settled BitAssets. After that expiration time, the remaining BTS held in the pool can be destroyed along with all the other metadata that was necessary to store in the database to allow people to claim the collateral.

The dividend feature is also useful for any UIA holders who want to transition from allowing their UIAs to be used with Confidential Transactions (which necessarily prevents them from being able to seize the UIAs) to disallowing it (for example because they now want to be able to seize the assets or keep track of the amounts their customers hold). They would first issue a new UIA to eventually replace their old UIA. It would forbid its use in Confidential Transactions but otherwise be identical. The issuer then does a 1-to-1 dividend of the new UIA dropped on the old UIA. Right after that moment, the old UIA becomes worthless (by decree of the issuer anyway) and all of its value goes to the new UIA. It is probably best for the UIA issuer to halt all trading of the asset prior to the dividend. Holders of the old UIA at the moment of the dividend still have some time (likely more than a year) to withdraw the new UIA before the dividend expires. However, to actually withdraw, they are forced to reveal the blinded amounts of their UIA since they cannot use blinded amounts with the new UIA. After the dividend expires, the issuer can then destroy the old UIA (I assume that fully destroying a UIA is a different permission than seizing it, because even when some people use blinded amounts with the UIA it is still technically possible to destroy the UIA even if it is not technically possible to seize it), which frees up any old blockchain metadata that was only supporting the old UIA. Through this process, the public is also able to know the true supply of the UIA rather than just an upper bound.

Another useful way to use the dividend feature is to change the precision of the asset (or conceptually change the unit of value represented by a single satoshi of the UIA). The issuer can create a new UIA that is identical to the old one except it has different precision (either actually and/or conceptually). Using a mechanism similar to that described in the previous paragraph (although no need to disallow blinded values), the issuer can cause an instantaneous transfer of value from the old UIA to the new UIA, but allow users to transition from old to new over a much longer period of time. A procedure like this would be very useful (especially if automated) for a UIA that inflates at a fast rate (and doesn't deflate) as a mechanism of redistributing the value of the asset from some holders to other holders (e.g. a UIA used as a reputation coin in an Augur-like decentralized oracle).

I'm looking for good documentation about the consensus protocol of DPOS (either BitShares 0.x or 2.0 is fine, but I would prefer the 2.0 one) and how nodes decide upon the particular blockchain to use. Specifically, I want to know everything about blockchain reorganization. When does a client, given some existing validated blockchain stored locally, accept a reorganization and when does it not? I want to understand all the possible failure modes we are aware of and how they can be achieved. How does the timing of which blocks are received in which order and by when (relative to the syncing process) change the outcome of which blockchain a client ends up choosing? Under what conditions can a full node get stuck and be unable to sync forward? Are all such cases bugs, or are there cases in which this is the legitimate behavior? If it is the latter (which I believe to be the case), is the only way to fix the problem to add a trusted checkpoint to the client? If the problem can be fixed by resyncing the blockchain from scratch (which I have noticed had been recommended in the past as a solution to these stuck blockchain problems that had existed before), is that not an indicator of a bug in that particular case? In other words, if it wasn't a bug and got legitimately stuck, shouldn't it get stuck in the same place again (otherwise how is such non-deterministic behavior not a bug)?

Could we modify SMF (or is there an existing way) to get the markup of a post (the stuff you see when you Quote it) even if it is in a locked thread? Sometimes I find I want to reference a part of an old post but I can't (without trying to recreate the markup) because the thread has since been locked.

This could also then make it possible to write a bot to backup all of a user's posts which is something I am interested in (anyone know of an existing bot compatible with SMF that would do that?).

Stakeholder Proposals / Witness surety bonds
« on: June 29, 2015, 11:16:31 pm »
I think in addition to the (relatively small) fixed registration fee for witnesses, a witness should be required to deposit funds (of some fixed amount specified by the network) into a surety bond for them to be considered a valid candidate. These funds can be withdrawn if the witness decides to retire (perhaps temporarily), but there would be a 2-week delay in that process (the retirement, meaning no more block production, would be immediate or at least would occur by the next maintenance period, but the fund withdrawal would be delayed). A registered witness without a bond is not a valid candidate witness. People can still vote for those witnesses, but no matter how high their approval gets they will never become an active witness if they don't have a sufficient bond posted.

In addition to voting for a witness, stakeholders should optionally be able to vote to ban a particular witness. If the amount of stake voting to ban a particular witness exceeds some threshold, e.g. the median approval votes for active witnesses, the blockchain will ban that witness. Banning means that the blockchain takes away the funds in the surety bond from the witness and prevents that witness account from ever becoming a candidate witness again (obviously the person behind that account can always register a new witness account).

With this change there would be a financial incentive to not misbehave (for example by double signing blocks) even for witnesses who don't care about ever being a block producer again. It would also provide a mechanism for witnesses to legitimately retire without having to ask stakeholders to vote them out or compromise block production.

A more advanced alternative to banning witnesses would be to allow anyone to provide proof of a double sign and have the network do the banning automatically if a valid proof was provided (and also reward the proof provider with some fraction of the surety bond). A hybrid approach might also be desirable. For example, if the ban votes for a witness exceed some low threshold, then a double sign proof is enough to get the witness banned. If the ban votes exceed some higher threshold, the witness is banned without any cryptographic proof required. This modification requires ensuring that the block signing protocol is designed in a way that small double signing proofs can be submitted and validated (I'm not sure if the current protocol is designed to support that).

Stakeholder Proposals / House cleaning
« on: June 27, 2015, 04:25:13 pm »
With the move to BitShares 2.0 I think some delegate house cleaning is in order.

First, I think it is really important that we identify how many unique individuals there are who are actually running delegates (and standby delegates). There are many people who are running multiple delegates in order to help out others without the technical skill or the time to run and maintain a node. This is no longer necessary with BitShares 2.0. Any nodes run by the same person are a waste of money since they add cost without adding any extra decentralization of control. It makes more sense to have 30 witnesses each controlled by a unique person than 30 unique people collectively running 101 witnesses. And with BitShares 2.0 we finally have the flexibility to do that.

Second, many 100% delegates can either be consolidated or entirely removed. The referral rewards program is the replacement to marketing delegates. Do you think it would be premature to start the process of voting out those delegates that are no longer needed due to the referral program, or should we wait until BitShares 2.0 has officially launched first? Also, there is no need to have 100% delegates for all the individual devs who will be working for Cryptonomex. This will all be replaced by a worker proposal for a specific project that the Cryptonomex team will be working on with specified payment amounts, periods, vesting schedule, etc. This is the new standard for workers. There needs to be a clear proposal for how much money is needed to accomplish a specific task in some period of time. So, I think that even the 100% delegates that the community may want to keep as workers need to re-articulate their mission with a well thought out worker proposal and again present it to the community for a vote.

tl;dr Basically ICANN on the blockchain, but owned and controlled by UIA holders, and where new names can either be put on auction immediately or be purchased by the name claimer for a fixed fee that depends on the quality of the name (usually meaning the length). The fixed fee structure (and other properties of the system) can be controlled by a committee elected by the UIA holders (assuming a necessary quorum agrees, and with a 2 week delay just like with delegates), and it would be designed to force name claimers of shorter names (which are likely to be higher valued) to always prefer the auction route, but allow longer names (which are more likely to be used for account names) to be purchased for a reasonable fixed fee. The committee also can collectively act as a judge (or more realistically delegate responsibility to multiple trusted judges) who run their own private (and profitable) court system to listen to claims of trademark infringement (if that is the policy UIA holders choose to adopt) and can enforce their ruling by forcefully taking away a name from a current owner and giving it to someone else (assuming the necessary quorum is reached, again with the 2 week delay, which gives the UIA holders the opportunity to veto the action by voting out members of the committee).

I would love to see the following experiment in one of the namespaces that could be potentially used for domains and even account names (see my post discussing various economic models that could be used for different namespaces). This namespace could be used to point to web sites and/or point to accounts (possibly in addition to whatever other account name system also exists).

The idea behind this naming system is to not have ownership only be determined by a set of rules on the blockchain but also by decisions by a group of elected people. These people can act as arbitrators in matters that the blockchain either cannot have knowledge of and/or cannot have the intellect to decide what is fair and just according to the vague human policy it is supposed to follow. That means this naming system could in theory respect existing trademarks making it more attractive to corporations, but it does not need to be limited to following the policies that ICANN currently follows or even necessarily follow the IP laws of a particular country (in that case, you better hope the necessary quorum to make decisions on the blockchain cannot be reached by a subset of the known elected arbitrators residing in the jurisdiction of that country).

There would be a special UIA that represents ownership of this naming system itself. The UIA holders can vote for the number of elected arbitrators (N) to have and the arbitrator candidates themselves. This would be the same system used for selecting delegates and the number of delegates to have. The top N arbitrator candidates would become the elected arbitrators and their weight would be determined by their relative approval votes (just like in the delegate system). If the necessary quorum (weight greater than some defined threshold) of elected arbitrators agree, they can forcefully move ownership of one of the names to another account. This could be useful in giving the legitimate trademark owner their name. Another possible action they can take with the proper quorum is to take away an existing name from an owner and put it up for auction. Just like with the delegates, any action taken by these elected arbitrators has a 2 week delay, allowing the stakeholders the option to veto the action by voting out enough of the arbitrators.

All new names would be classified by a computer algorithm into the class they belong to. The algorithm would typically make the decision based on the length of name. A user trying to register a new name would either have the option of paying the fixed fee specific to the class the name belongs in to own the name, or putting it up on an open auction where they could then try to win the name by being the highest bidder. The user would have to pay a very small fixed fee (potentially growing with the length of the name) just for the privilege of bringing a new name into the system and getting the exclusive choice to either pay the fixed fee for the name or put it up on auction (this fee is to always at least compensate the network for the technical expense of managing the names, even if the market price ends up being 0). If the user believes the market will value the name at a lower price than the fixed fee for its class, it is rational to choose to put it up on the auction rather than paying the fixed fee. The fees for each class can be set and adjusted by the arbitrators (again with the 2 week delay).
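The registrant's choice between the fixed fee and the auction reduces to a simple decision rule, sketched below as a toy model (the function name and figures are assumptions for illustration, not part of any proposed implementation):

```python
# Toy model of the rational registrant's choice described above.
def registration_choice(expected_market_value, fixed_fee):
    """Pay the fixed fee only when you value the name at least that much;
    otherwise the auction is expected to cost less (at the price of waiting
    out the auction period)."""
    return "fixed_fee" if expected_market_value >= fixed_fee else "auction"

# A long personal name in a low-fee class vs. a short premium name:
assert registration_choice(expected_market_value=5.0, fixed_fee=100.0) == "auction"
assert registration_choice(expected_market_value=500.0, fixed_fee=100.0) == "fixed_fee"
```

Setting the fixed fee for short-name classes outrageously high is what forces essentially all short names through the auction, letting the market rather than the fee schedule price them.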

I would expect really long names to belong to a class with a low fixed fee, so regular people could afford to get their full names (maybe with a number at the end) for low cost and use them as their account names. People with short first and last names (or a short pseudonymous handle) that are nevertheless uncommon would likely put the name up on auction and end up paying around the same fee as the low fixed fee (they just would have to wait 30 days before they were able to get their name). Short names would belong to a class with an intentionally outrageously high fixed fee, forcing everyone who wants such names to choose the auction route instead. The auction system would try to fairly price the name. If the name is a high-valued name already owned by a big company, it is likely that the auction will not fairly capture its market value because the people with the big money to bid it up would not yet exist on the network by the time the naming system opens up. Because these would be well established names of big companies, the arbitrators would likely decide to take the name away later when the trademark owner complained, and this fact would likely cause squatters to think twice about paying a moderate fee to "own" this name.

I would like to see how such a system evolves. The arbitrators would likely act as judges (or rather delegate that task to selected judges they trust, which also allows them to scale their operation) running their own small courts where they listen to both parties and examine the evidence to see whether the prosecutor has a legitimate claim on the name (legitimate by the policy set by the UIA holders of course). The UIA holders would want to set a policy that was reasonable and mostly respected existing trademark laws since it would make their name system more attractive to people and therefore give it more value. The judges would get paid by the prosecutors who would have to put up funds to compensate the judges for taking the time to listen to their case as well as to set aside funds that would pay the defendant (the existing owner of the name that they claim belongs to them) for their lost time in the case the judges rule against the prosecutor. A few other major differences I would expect with this system compared to regular courts is that the entire thing could be done virtually using the internet (making it easier for all parties involved to "meet") and of course the defendant would not go to jail for "contempt of the court" due to not showing up or being rude (the worst possible thing that can happen to them is that they lose the name with absolutely no compensation). So I think this would be an exciting use case to try out the concept of private justice and see how well it actually works in practice (at least in this narrow field of trademark disputes).

The fees on the prosecutor may be set so that after compensating everyone needed there would be extra funds that could be placed into this naming system's special reserve pool (different than the BitShares reserve pool). I would expect the fees to be set pretty high to avoid spreading the judges too thin and to just focus on the high value cases dealing with the trademarks of medium to large companies (smaller trademark claims could probably be more cost effectively settled out of the courts between the existing name owner and the name claimer, but would also be less likely to be necessary since lower value names are less likely to be taken by squatters and also due to other methods we can devise [1]). The reserve pool would also have an income stream coming from the paid fixed fees and the funds collected from the auctions. It would also have an expense stream that paid the arbitrators some daily salary (which could be adjusted by the necessary quorum of the arbitrators, but with the 2 week delay) to compensate them (and also allow them to compensate judges they may have delegated tasks to) for the ongoing costs they have regardless of whether any cases come up (and to motivate them to be available to do their job properly when a new case does come up). Finally, the arbitrators could (again with the 2 week delay) pay out a dividend from the reserve pool to the UIA holders. Ideally, I would like to see dividend support built in to the blockchain as an operation for any UIA. But short of that, the 2-week delayed operation might just move the funds from the reserve pool into another account collectively managed by the arbitrators without delays, which would then be responsible for manually transferring those funds in proportion to a snapshot of the UIA holders taken on the block in which those dividend funds were first moved out from the reserve pool.

[1] One other thing the arbitrators might do (assuming the UIA holders allow) to help reduce squatting is to run an automated system on the side that does the following. It allows someone to submit a form requesting unauthorized registration of an already established domain name by providing the ICANN domain name of the same name (with some set of accepted TLDs). If the claimed name meets certain conditions (perhaps less than a certain length) and also the domain name was registered in the ICANN system (with one of the appropriate TLDs) before the name was registered on the blockchain, it then moves on to the next stage. The next stage involves providing a random large number to the claimer and requesting that they put that number in its own file accessible from a specified URL using the ICANN domain name. After the claimer does this and provides the URL back to the automated system, the automated system checks that the numbers match and if so moves on the next stage. This next stage involves waiting for some period of time for any counter claims. People who own the same name with another higher-ranked TLD (the appropriate set of TLDs this system recognizes will be ranked) can go through a similar process for their counter claim to either maintain existing ownership on the blockchain or to take ownership of the name themselves if they don't have it already. If the period ends without any legitimate counter claims, the system automatically coordinates with all other arbitrators (each one could be running this automated system and the claimer would be using a client that submits this info to all of their servers) to sign a delayed transaction that automatically takes that name from the current owner and puts it into an auction. This allows the claimer to own the domain by paying the fair market price for the name (which could be cheaper than the fees they need to pay to try to take it to arbitration courts directly). 
It means that the existing owner of the domain cannot extort the claimer by demanding they pay higher than the fair market price. For this reason, the existing owner will be willing to sell the name to the claimer (assuming they prove they actually own the ICANN domain) for much lower than the market price, because at least then they get some money out of it. This reality will make squatting on existing domain names far less likely even for small companies that could not afford the court fees. And the squatters speculating on names that are currently not owned by anyone in the ICANN system would be paying the fair market price (at the time they claim it) because of the auction. So if they make any profit at all from that name, it is because they had the foresight to claim a name that no one else was using at the time and that no one else thought would be as valuable as it ended up being. And that isn't evil squatting but rather legitimate speculation. Also, the existence of this mechanism doesn't cannibalize the revenue source for courts. Those are focused on high value names where the market price would likely be higher than the court fees (therefore it is rational for those name claimers to settle the dispute in court and get the name rather than using this automated system to then have to bid on the name). However, it is also important to consider that from a squatter's perspective they know that a name can be taken from them by the legitimate owner for only the cost of the high court fees. Therefore, it is in their best interest to just sell the name to the legitimate owner for a lower cost than the court fees. This also means that it wouldn't be rational to pay more than the court fees in an auction for a name legitimately owned by someone else. Thus these mechanisms effectively put an upper bound (approximately the cost of the court fees) on the cost of any given name.

Random Discussion / BitShares Poem
« on: June 16, 2015, 08:26:36 pm »
So, this thread unexpectedly got me in the mood to write some poetry about BitShares (which is completely uncharacteristic of me). I was just messing around for fun, but I liked the end result so I decided I'll share it with the community.

From the head of one bytemaster
Came ways to avoid disaster
In the cryptocurrency space
While also making blocks faster

The culprit was the proof of work
The whole concept was just berserk
Burning power for consensus
Just ASIC makers get the perk

More centralizing mining pools
Showed status quo would be for fools
A new blockchain would be needed
Made possible with brand new tools

First delegated proof of stake
BitAssets were icing on cake
Decentralized, fast and so cheap
Many of us now wide awake

The market peg just seemed to hold
We now had true digital gold
And though core stake was volatile
BitUSD price stayed controlled

Now more brilliance is on the scene
Since soon we upgrade to Graphene
Transactions per second so high
And brand new codebase that is clean

Permission system that makes sense
Referral program adds up cents
And stakeholders retain control
Their votes control blockchain expense

Who knows what else remains in store
With brilliant devs that we adore
Bringing us liberty through code
Come join us as we all explore

Stakeholder Proposals / Adjustments to delegate pay
« on: April 10, 2015, 01:53:23 am »
I really think we need improvements to the way we pay delegates and workers. Here is my latest suggestion to get a little closer to the ideal system by making minimal changes.

First, instead of each delegate candidate only having a single pay rate, they would have the following:
  • Percentage budget for worker distribution
  • Budget limit for self-pay
  • Salary

The percentage budget for worker distribution (PBWD) would be a percentage between 0 and 100% (just like the current delegate pay rate), and the budget limit for self-pay (BLSP) would be a BTS amount. The actual budget for worker distribution (BWD) for an active delegate, in units of BTS, would be calculated by multiplying the delegate's PBWD by the per-delegate BTS allocated for that round (the per-block inflation limit as of that round plus any per-delegate fees allocated for distribution in that round). Another important quantity, the budget for self-pay (BSP), is defined for each delegate as min(BWD, BLSP). The registration fee for a delegate is based on the BLSP value, not the PBWD value. After registering, the delegate cannot increase the values of PBWD or BLSP, but they can change the Salary however they wish at any time. The PBWD and BLSP can be decreased at any time; however, changes to these values do not go into effect until the end of the round.

The salary of a delegate would specify a particular BitAsset type (or alternatively BTS) and provide a quantity in that BitAsset. The idea behind the salary is that the blockchain will multiply the quantity with the latest price feed for the corresponding BitAsset to get an estimated salary per round in BTS units. If that number is larger than the BSP quantity, then the salary will be limited to the BSP quantity. The actual salary (SAL) in BTS units to transfer to the delegate is calculated at the beginning of the round using the latest price feeds at the beginning of the round. If there are no recent price feeds at the beginning of the round for the specified BitAsset, that delegate will simply not be paid in that round.
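To make the interaction between BWD, BSP, and the salary concrete, here is a minimal Python sketch of the per-round pay calculation described above. The function name, float units, and numbers are illustrative assumptions, not the actual implementation:

```python
# Sketch of the per-round delegate pay calculation described above.
# All names and values here are illustrative, not the real implementation.

def delegate_pay(pbwd, blsp, per_delegate_allocation, salary_quantity, price_feed):
    """Return (BWD, BSP, SAL) in BTS for one delegate in one round.

    pbwd                    -- percentage budget for worker distribution (0.0-1.0)
    blsp                    -- budget limit for self-pay, in BTS
    per_delegate_allocation -- BTS allocated per delegate this round
    salary_quantity         -- salary amount in the chosen BitAsset (e.g. BitUSD)
    price_feed              -- latest feed price, BTS per unit of that BitAsset
                               (None if there is no recent feed)
    """
    bwd = pbwd * per_delegate_allocation          # budget for worker distribution
    bsp = min(bwd, blsp)                          # budget for self-pay
    if price_feed is None:
        return bwd, bsp, 0.0                      # no recent feed: no pay this round
    sal = min(salary_quantity * price_feed, bsp)  # salary capped by BSP
    return bwd, bsp, sal

bwd, bsp, sal = delegate_pay(0.5, 100.0, 500.0, 4.0, 30.0)
# bwd = 250.0, bsp = 100.0, sal = min(120.0, 100.0) = 100.0
```

Note how the BSP cap, not the feed price, becomes the binding constraint when the BTS price drops.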

At the end of the round, the total salary (TOTAL_DELEGATE_SALARY) distributed to delegates that produced blocks is calculated (TOTAL_DELEGATE_SALARY = sum of the SAL for each delegate that produced a block in that round) and the sum of each active delegates' BWD (TOTAL_AVAILABLE_BUDGET) is also calculated. The difference TOTAL_AVAILABLE_BUDGET - TOTAL_DELEGATE_SALARY gives the remaining worker budget (WB). Then, the WB is distributed according to the "worker pay list" as of the end of that round.

The "worker pay list" is a list whose items are either a PAY item, which is a tuple (BITASSET_QUANTITY, BITASSET_TYPE, ACCOUNT_ID), or a LIMIT item, which has the single value BTS_LIMIT_QUANTITY. Similar to delegate pay, the BITASSET_QUANTITY and BITASSET_TYPE together define the pay amount and are converted into the appropriate amount of BTS using the latest price feeds at the time (in this case, at the end of the round). So you can think of this process as mapping the list [a] to a list [c] with a function f : a -> c which maps PAY (BITASSET_QUANTITY, BITASSET_TYPE, ACCOUNT_ID) to PAY' (BTS_QUANTITY, ACCOUNT_ID) and maps LIMIT (BTS_LIMIT_QUANTITY) to itself. Again, if there is no recent price feed for the specified BITASSET_TYPE of an item, the BTS_QUANTITY of the mapped tuple of that item is simply set to zero. At the end of the round, the blockchain goes in order through this second list (the [c] list). If the item is the PAY' type, it pays BTS_QUANTITY to the specified ACCOUNT_ID as long as there is enough value in the WB, which is updated (WB := WB - BTS_QUANTITY) as it goes through the list. If the item is the LIMIT type, it burns (WB - BTS_LIMIT_QUANTITY) BTS (assuming that is a positive number) and sets WB := min(WB, BTS_LIMIT_QUANTITY) before proceeding with the rest of the list. If it reaches an item where the BTS_QUANTITY exceeds the updated WB at that point, it simply sends WB amount of BTS to the ACCOUNT_ID and stops processing the rest of the list.

The "worker pay list" allows the blockchain to prioritize who to pay when money is tight. A typical set up would be to have multiple rounds of worker pay in the list. For example, if there were 3 workers (A, B, C), one possible "worker pay list" might be the following:
  • LIMIT (3500)
  • PAY (3, BitUSD, A)
  • PAY (1, BitUSD, B)
  • PAY (6, BitCNY, C)
  • PAY (1, BitUSD, A)
  • PAY (1, BitUSD, B)
  • PAY (6, BitCNY, C)
  • PAY (1, BitUSD, B)
  • PAY (6, BitCNY, C)
  • PAY (1, BitUSD, B)
So with the above list, if there is enough money to pay the entire list, both A and B would receive 4 USD worth of BTS per round (approximately $125,000 per year) and C would receive 18 CNY worth of BTS per round (approximately $90,000 per year). But let's pretend that money got tight (the price of BTS went down, for example) and the DAC could only afford to pay up to and including the 6th item of the list fully, the 7th item only partially, and the rest couldn't be paid at all. Let's say approximately $7.25 worth of BTS was available in the worker budget for each round (and let's assume that amount worked out to less than 3500 BTS). In that case, A would get the full 4 USD worth of BTS per round, B would only get half their normal pay, or 2 USD worth of BTS per round, and C would perhaps get approximately 7.5 CNY worth of BTS per round. Also, the LIMIT could further limit how much each worker gets paid. If the price of BTS were to drop to $0.003/BTS and the WB was 4800 BTS at the very beginning of the list, then if the LIMIT item wasn't there, there would be enough BTS in the budget to fully pay the three workers (only 4000 BTS would be needed in WB). However, with the LIMIT item at the top of the list, the 10th item would only receive partial pay.
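The pay-list processing rules described above can be sketched in Python as follows. The tuple encoding, the function name, and the float units are hypothetical; this just illustrates the PAY/LIMIT semantics:

```python
# Sketch of the end-of-round "worker pay list" processing described above.
# Items are ('PAY', bitasset_quantity, bitasset_type, account_id) or
# ('LIMIT', bts_limit_quantity); price_feeds maps asset -> BTS per unit.
# This illustrates the rules, not the real implementation.

def process_worker_pay_list(wb, pay_list, price_feeds):
    """Distribute the remaining worker budget wb (in BTS); return payouts."""
    payouts = {}  # account_id -> BTS received
    for item in pay_list:
        if item[0] == 'LIMIT':
            # Burn max(0, wb - limit) BTS, then cap the remaining budget.
            wb = min(wb, item[1])
            continue
        _, quantity, asset, account = item
        feed = price_feeds.get(asset)
        # No recent feed: the mapped BTS_QUANTITY is simply zero.
        bts_quantity = quantity * feed if feed is not None else 0.0
        if bts_quantity > wb:
            # Budget exhausted: pay out whatever remains and stop.
            payouts[account] = payouts.get(account, 0.0) + wb
            return payouts
        payouts[account] = payouts.get(account, 0.0) + bts_quantity
        wb -= bts_quantity
    return payouts
```

For example, with a budget of 10 BTS, a LIMIT of 8, and two PAY items worth 3 and 6 BTS, the first worker is paid in full and the second receives only the 5 BTS left after the cap.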

Finally, there needs to be a way to update the "worker pay list" (which is initially set to the empty list). Any active delegate can submit a proposal for a change to the "worker pay list" and they can cancel their submission at any time. If they submit a new proposal when they already have a pending submission, the old proposal will be cancelled and replaced by the new proposal. Each active delegate is also allowed to submit a vote for any delegate's proposal (by submitting their delegate ID) but the vote only counts if it is for a currently active delegate who currently has submitted a proposal. Again, they can cancel this vote at any time and submitting a new vote automatically cancels any previous vote. Also, if the delegate they are voting for cancels their proposal (or it is automatically cancelled by submitting a new proposal), all of the delegate votes voting for that proposal also automatically cancel. If enough active delegates (let's say >=76) vote for the same delegate proposal, that proposal will become ratified. When this happens, the proposed changes from the ratified proposal update the "worker pay list", then the proposal submission is automatically cancelled and all of the active delegate vote submissions are also cancelled.

These proposals consist of a sequence of operations. There are four types of operations: ADD_ITEM, MODIFY_ITEM, REMOVE_ITEM, and REMOVE_ALL. The REMOVE_ALL operation removes all of the items in the current "worker pay list". The ADD_ITEM, MODIFY_ITEM, and REMOVE_ITEM operations require a non-negative integer which specifies the zero-based index of an existing item (and the integer equal to the number of items in the list is also acceptable for the ADD_ITEM operation). REMOVE_ITEM simply removes the item specified by the index from the list (shifting all items coming after it up to close the gap). ADD_ITEM adds an item immediately before the item specified by the index to the list (shifting it and all items coming after it down to make room) or it just adds the item to index 0 if the list was empty. MODIFY_ITEM does not add or remove items but rather simply modifies the values for the item specified by the index. ADD_ITEM and MODIFY_ITEM operations obviously need an additional argument to define the new values of the item which can either be just a BTS_LIMIT_QUANTITY for a LIMIT item or the tuple (BITASSET_QUANTITY, BITASSET_TYPE, ACCOUNT_ID) for a PAY item.
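A rough Python illustration of applying a ratified proposal's operation sequence to the list, under an assumed tuple encoding (all names here are made up for illustration):

```python
# Sketch of applying a ratified proposal's operations to the "worker pay
# list", following the ADD_ITEM / MODIFY_ITEM / REMOVE_ITEM / REMOVE_ALL
# rules above. The encoding is hypothetical.

def apply_proposal(pay_list, operations):
    """Apply ops like ('ADD_ITEM', index, item), ('MODIFY_ITEM', index, item),
    ('REMOVE_ITEM', index), ('REMOVE_ALL',) to a copy of pay_list."""
    result = list(pay_list)
    for op in operations:
        if op[0] == 'REMOVE_ALL':
            result = []
        elif op[0] == 'REMOVE_ITEM':
            # Remove the item at the index; later items shift up.
            del result[op[1]]
        elif op[0] == 'ADD_ITEM':
            # Insert before the index; index == len(result) appends.
            result.insert(op[1], op[2])
        elif op[0] == 'MODIFY_ITEM':
            # Replace the item's values in place.
            result[op[1]] = op[2]
    return result
```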

So with the above setup, it would take a super majority of the delegates to make changes to worker pay, and even then they could not inflate BTS in each round by more than TOTAL_AVAILABLE_BUDGET, which is simply the sum of each active delegate's BWD (which cannot exceed the BWD they were elected with). Also, there are still the hard-coded dilution limits that guarantee that TOTAL_AVAILABLE_BUDGET <= 101 * (hard-coded per-block inflation limit). The idea is that the delegates will listen to the shareholders through future non-binding proposals and make the appropriate changes that the shareholders want. Until the appropriate tools for non-binding proposals are available in the client, the delegates should just do what they think is best for BitShares.

The delegates can still get paid individually even if a super majority is never able to agree to any "worker pay list" so there is no new added risk of delegates not getting paid with this system. The delegates get the convenience of specifying their pay in USD (or other BitAssets), even though they are still actually paid in BTS, which means they do not need to manually burn excess BTS to get paid their fair salary (the blockchain will automatically do it for them). The delegates just need to make sure that the BSP they are elected with is high enough to give them their desired USD salary even as BTS price changes. And if BTS price drops so low that they cannot get their desired USD salary due to BSP limit, they are simply forced to deal with the lower pay as they are now (or maybe they can get a new delegate voted in with a higher BSP).

General Discussion / Consistency in naming: Is it bitAssets or BitAssets?
« on: February 04, 2015, 06:21:37 am »
I know this is a pretty minor issue but I would prefer to resolve it.

I have seen people using both versions. I would prefer if we were consistent with the capitalization in all official material. For example, bytemaster uses BitAssets and BitUSD, BitGold, etc. in his blog. The BitShares client also uses the capital B format. But the website uses bitAssets, bitUSD, bitGold, etc. (although there are some places where the website mistakenly uses the capital B format).

Personally, I prefer BitAssets, but whichever way the consensus goes is fine with me. I just want consistency.

Which capitalization do you prefer?

I think the lightweight client will need to talk to many delegate servers to make sure it can actually trust the information it is getting. Even better would be to use an approach similar to what Ethereum is taking where the entire state of the database is encoded with a single hash (root hash of the Merkle Patricia tree). The most recent snapshot of the database state would be taken by each delegate (delegates would coordinate so they are working with the same recent snapshot), they would calculate the root hash (SNAPSHOT_ROOT_HASH), they would verify with one another that they all have the same hash, work together using a threshold signature scheme to generate a Schnorr signature on HASH(CHAIN_ID || SNAPSHOT_BLOCK_HEIGHT || SNAPSHOT_ROOT_HASH), and then include that in the next block. Then they would repeat the process all over again with another most recent snapshot of the database state.

Either way, the lightweight client will need some way of knowing who the top 101 active delegates are at the moment. They need to be able to confidently obtain this fact even if they have been offline for a while. This proposal discusses one way this can be done while providing decent confidence to the lightweight client user that they aren't being tricked.

I propose that we tweak the way the delegates sign blocks. I presume that currently some kind of hash of the current block is taken and that hash is signed, perhaps with a chain ID (CHAIN_ID) included in the hash before signing. Instead, each delegate would sign HASH(CHAIN_ID || CUR_BLOCK_HEIGHT || CUR_BLOCK_HEADER_HASH || PREV_ROUND_DELEGATES_HASH), where CUR_BLOCK_HEIGHT is the block height of the current block and CUR_BLOCK_HEADER_HASH is the hash of the header of the current block. The block headers would include the hash of the body of the current block (CUR_BLOCK_BODY_HASH), the timestamp (CUR_BLOCK_TIME), the previous block header hash (PREV_BLOCK_HEADER_HASH), the signing delegate's random number commitment for the next round (CUR_BLOCK_RANDOM_COMMIT), the signing delegate's random number reveal for the current round (CUR_BLOCK_RANDOM_REVEAL), and potentially some other bits of information (particularly SNAPSHOT_ROOT_HASH, SNAPSHOT_BLOCK_HEIGHT, and the threshold signature, as defined in the first paragraph, if we were to represent the database state as a Merkle Patricia tree).

PREV_ROUND_DELEGATES_HASH is calculated according to the following procedure: take the set of 101 active delegates as of the last block in the previous round; lexicographically order the BTS addresses of the delegate accounts (the addresses corresponding to the private keys the delegates use to sign the blocks); and calculate the hash of the concatenation of these 101 BTS addresses in their lexicographical order. That hash is PREV_ROUND_DELEGATES_HASH.
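As a sketch, assuming string-encoded addresses and SHA-256 (the real chain may use a different digest and address encoding), the procedure might look like:

```python
import hashlib

# Sketch of the PREV_ROUND_DELEGATES_HASH procedure and the per-block
# signing digest described above. The hash function and encoding are
# assumptions for illustration only.

def prev_round_delegates_hash(delegate_addresses):
    """Hash the lexicographically ordered concatenation of the active
    delegates' signing addresses (given here as strings)."""
    ordered = sorted(delegate_addresses)
    return hashlib.sha256("".join(ordered).encode()).hexdigest()

def block_signing_digest(chain_id, height, header_hash, delegates_hash):
    """Digest each delegate signs: HASH(CHAIN_ID || CUR_BLOCK_HEIGHT ||
    CUR_BLOCK_HEADER_HASH || PREV_ROUND_DELEGATES_HASH)."""
    payload = chain_id + str(height) + header_hash + delegates_hash
    return hashlib.sha256(payload.encode()).hexdigest()
```

The sort makes the result order-independent, so every delegate computes the same hash regardless of how the set is stored locally.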

It now becomes possible for someone to provide up to 101 signatures within a given round, along with their corresponding BLOCK_HEADER_HASH, the BLOCK_HEIGHT, the BLOCK_TIME, the BLOCK_HEADER_HASH of the block at the end of the prior round, the CHAIN_ID, and the set of 101 active delegates at the end of the prior round, and anyone can then verify that the claimed delegates of the current round signed off that they are the valid active delegates of that round. This may seem useless because it appears to be circular reasoning, but if any of these delegates lied about this (provided signatures about fake claims inconsistent with what they provided on the real live blockchain), the recipient of the proof would only need the signature of the bad delegate, the CHAIN_ID, the PREV_ROUND_DELEGATES_HASH, and the BLOCK_HEIGHT and BLOCK_HEADER_HASH of the block the delegate supposedly signed to prove to anyone else that the delegate double signed. Thus, if these signatures belong to delegates that are active in the present, one can be fairly confident that they are not providing fake claims, because otherwise it would be very easy for the proof recipient to get them fired.

The lightweight client keeps track of the set of active delegates, S, as of some particular block N, which it believes to be true. As it can later confidently update its belief about the set of active delegates, S', as of a newer block N' > N, it will replace its old knowledge with this new knowledge. So the amount of information to store in this regard remains constant instead of growing linearly with time. For the lightweight client to update its belief that the set of active delegates as of some block N' is S', it requires:
  • that N' be the block at the end of a round,
  • that it receive proof (the proof described in the previous paragraph) from M delegates in S' (ideally all 101) that claim the active delegates as of block N' are S',
  • and, those M delegates in S' make up at least 51% of the delegates in set S.
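The three conditions above reduce to a simple membership check; a minimal sketch (with a hypothetical set-based representation) might be:

```python
# Sketch of the lightweight client's update rule from the three bullet
# points above: accept a new delegate set s_new for block N' only if the
# M claimants (who must be members of s_new) also make up at least 51%
# of the currently trusted set s_old. Representation is hypothetical.

def accept_new_delegate_set(s_old, s_new, claimants):
    """s_old, s_new -- sets of delegate IDs; claimants -- delegates that
    provided valid proofs that the active set as of N' is s_new."""
    m = {d for d in claimants if d in s_new}
    overlap = m & s_old
    # Integer comparison avoids floating-point percentage issues.
    return len(overlap) * 100 >= 51 * len(s_old)
```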
Each of the signatures (+ corresponding BLOCK_HEIGHT and BLOCK_HEADER_HASH and PREV_ROUND_DELEGATES_HASH) for a delegate that used to be in one of the sets S that the lightweight client trusted at one point but are no longer in the most recent set S', are placed in a database of "old delegate claims". The similar data for delegates that are currently in the set S' are placed in a database of "current delegate claims". The size of the "current delegate claims" database does not grow over time, but the size of "old delegate claims" does. As the "old delegate claims" database grows too big, the lightweight client is free to delete the signatures and corresponding data that have the smallest BLOCK_HEIGHTs (signatures from oldest blocks) until the database becomes manageable in size again. The idea is that the older these claims become the less likely the user is going to need them to prove double signs, since they would have likely already done so if the claims were in fact false. Furthermore, anytime the lightweight client is upgraded with a new built-in trusted checkpoint, all signatures and corresponding data that are from blocks older than that checkpoint can be removed from the "old delegate claims" database.

The above setup means that current delegates that want to make false claims know it is incredibly likely that their victims will have proof to get them fired (once they discover they were tricked), even if they collude to create a scenario where they were "voted out" and replaced by other delegates (that are actually fake delegates they control). However, the lightweight client user only gets protection from delegates that are active delegates in the present, since they are the ones who have something to lose. If more than 50% of the delegates that were active by the time the lightweight client last synchronized were replaced by other delegates, those >50% delegates might collude to trick the lightweight client into believing in a new set S' the next time it syncs that is not the real present set of active delegates. The lightweight client will assume that the probability of this kind of turnover happening over a period shorter than the period that the lightweight client has been offline AND >50% of the now-retired delegates deciding to collude with the lightweight client sync server to trick the lightweight client is low enough that it can ignore the threat. What would typically happen if there was a period of rapid turnover while the lightweight client was offline is that the lightweight client sync server would be unable to find a proof for the lightweight client that satisfied the three bullet points above. This would trigger the lightweight client to demand the user to provide a trusted recent checkpoint in order to resync after a long period of being offline. For extra security, the lightweight client user may want to just provide a manual trusted checkpoint beforehand, without being prompted for it, if the user is synchronizing the client after being offline for a long time (months).

So the typical flow for a lightweight client syncing after some period of being offline is to first connect to a lightweight client sync server (which can be provided at no cost to users and paid for by delegates), which will get the lightweight client to update its set S to the set of current active delegates (while storing the proofs necessary to get delegates fired in case they lie). It can also periodically communicate with this sync server to keep this set up-to-date while the client is running. The sync server can also provide statements signed by the delegates (the sync server can have a communication link to the delegate servers) that expire frequently (say every 30 minutes) and provide a list of mirror servers for each delegate with the corresponding public key responsible for that mirror server. This way lightweight clients can connect to servers run by the delegates that do not store the same private key that the delegate block signing server stores. If any of the mirror servers are compromised, the delegate's private key isn't compromised; the delegate can stop including that mirror server and corresponding public key in its signed statements, and lightweight clients will not be tricked into connecting to that compromised server for more than 30 minutes at most (likely sooner if a signed revocation statement reaches the lightweight client via the sync server). The sync server does not need to do any other work. It does not provide the results to blockchain_* commands via the RPC; the delegate mirror servers are responsible for that.

While the delegate mirror servers could still lie in their responses to the RPC, it is assumed that they will not, because they are known to be the servers of the current active delegates who are trusted by the stakeholders. Each RPC response could be signed by the private key associated with that mirror server. In theory, enough information would be provided to the lightweight clients to act as proof that either the delegate was being evil or their mirror server was compromised. Either way, if the delegate tried to be evil on a large scale or was just negligent and irresponsible, they would very likely be caught and voted out. Just the threat that the protocol allows lightweight clients to potentially store their signed responses could be enough of a deterrent to force the delegates to behave, even if the official lightweight clients aren't built with that functionality to begin with. Also, for the sake of extra security, the lightweight client would need to get responses to their RPC from many of the active delegates and make sure they all say the same thing. Fluxer555 has discussed some of these ideas in this post.

While the level of security described in the paragraph above would be adequate initially, eventually I would prefer something better. That is why I want the entire database state to be encoded as a Merkle Patricia tree (with the most recent database snapshot root hash and corresponding block height and threshold signature included in the block header) so that a single lightweight server that is not run by the delegates can provide a log(N) proof of the existence of any value for a corresponding key within the database as of a given block. This makes it unnecessary for the lightweight client to locally store any additional proofs, other than the ones needed to securely sync to the most recent set S, and yet still be confident about the validity of the RPC responses from a single untrusted lightweight server.

I admit this is not a fully fleshed out idea/proposal. But I wanted to see if I can get some feedback on it from devs.

A while ago on this GitHub thread, I said the following:
Finally, I was thinking about an idea to possibly reduce this 8 minute delay. Even though all the delegates should typically validate a given block N submitted at time T in less than 10 seconds after the block was submitted, the delegate signing block N + M has to wait until time T + M*(10 seconds) to provide the signature that adds their approval to all transactions in the blocks that came before it. This is what leads to the minimum 8 minute delay for getting more than 50% of delegates to approve of a transaction before the receiver can feel confident it does not exist on a minority fork. What if an active delegate were allowed to optionally submit a signature approving of a recent block before their designated time slot, sort of like they are allowed to optionally provide price feeds? They wouldn't be producing these blocks; the delegate producing the block could simply include their signatures in the block (only a maximum of 100 such signatures could ever be included per block, since the block producer would include at most one signature from each active delegate other than itself, namely that delegate's signature for the most recent block it has signed that wasn't already included in a previous block in the chain). If all delegates submit a signature of the most recent block that they validated to the network (or at least to the private active delegate network), the users' clients could know a transaction is part of the majority-consensus chain only 10-30 seconds after the transaction was submitted.

The goal is to not have to wait for 17 minutes to get a "checkpoint" by most of the delegates but to ideally get these checkpoints every 10 seconds. Each of the delegates already knows whether the previous block is valid or not. But they cannot add their signature to it which shows their approval of the block until it is time to make their designated block which can be up to 16 minutes later! So why not allow the delegates to publicly contribute this information as soon as possible?

Well, one objection raised by theoretical is that "we don't want to bloat every block with a bunch of extra delegate sigs." While I don't think 100 extra sigs per block is that big of a deal in the larger scheme of things, it is a valid point.

So this is my new idea.

A (t, n) threshold signature scheme allows n participants to work together to generate a common shared random secret x (which acts as the private key to a public key Y that is known to all parties). None of the participants knows the value of x, but each has their own secret share that allows them to help create signatures using x. Specifically, any t of the n participants can work together using their secret shares to generate a signature of any arbitrary message that can be verified as signed by the private key x (without ever exposing x). Because of certain limitations on the value of t (t <= (n+1)/2) when this scheme is used with ECDSA, it is preferable to implement this threshold signature scheme with Schnorr signatures instead if that choice is available.
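To illustrate why any t of n participants suffice, here is a toy Shamir secret-sharing sketch over a small prime field in Python. This is only the secret-sharing core underlying such schemes, not a real threshold Schnorr implementation (which would also need distributed key generation so that nobody ever learns x, and signing would use the shares rather than reconstructing the secret):

```python
import random

# Toy illustration of the (t, n) threshold idea via Shamir secret sharing
# over a prime field. The prime P stands in for the curve order; all of
# this is for illustration only.

P = 2**61 - 1  # a Mersenne prime

def make_shares(secret, t, n):
    """Split secret into n shares, any t of which reconstruct it."""
    # A random degree-(t-1) polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Modular inverse via Fermat's little theorem.
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=81, n=101)
assert reconstruct(shares[:81]) == 123456789  # any 81 of 101 suffice
```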

So, I think the delegates should work together to generate a threshold Schnorr signature of the hash of the previous block and submit that in every block. This is just one extra signature added to every block. All of this communication happens out-of-band (not over the blockchain or even the main p2p network, but on the private delegate network). Since there are only 101 delegates and there is only one message (the previous block hash) to sign every 10 seconds, I imagine this adds barely any computational overhead. Because of the nature of threshold signatures, not all of the delegates would need to cooperate for signature generation to be successful. I can imagine an 81-of-101 signature scheme working well. This is comparable to how Ripple requires consensus by 80% of its rippled nodes in the UNL in order to close a ledger. And in the worst-case scenario, if the threshold could not be achieved by the time the block needed to be produced, it just means that the checkpoint is delayed a little bit.

I also want to be clear that each block still has a specific delegate that is responsible for generating it and signing off on it. Nothing changes there. There is clear responsibility for tardy or inactive delegates. It is just that the signature generated by participation of at least 80% of the active delegates can be added by the block producing delegate in order to give users instant (every 10 seconds) checkpoints instead of the current system of checkpoints every 17 minutes.

There is another benefit here as well. In the process of generating the threshold signature every block, the delegates end up computing a random number V (which needs to be included as part of the signature for signature verification) that incorporates the entropy of each of the participants. This random number can be used as a source of entropy for the blockchain that updates every 10 seconds. Currently, if you want a stream of entropy that has the decentralization of nearly 101 delegates behind it, you must wait for a new random number every 16 minutes. Having this much faster stream of entropy improves the usability of certain blockchain applications, in particular games.

One other thing to mention about this proposal is that the public key Y representing the 101 delegates needs to be known to the public so that they can actually verify the signatures. In particular, they need to know that the public key claimed to be the one generated by the 101 delegates for this purpose really is that one. Every delegate that produces a block would only add their block to a chain claiming that the latest shared public key is Y if it actually was. This means that one needs to wait for a round (16 minutes) to finish before they know they can trust the claimed public key as of the end of the previous round. Every time delegates get removed from the top 101 ranks and new ones replace them, the new set of active delegates needs to run through the key generation protocol again and generate a new public key Y. That Y is then committed to the blockchain, and once enough of the delegates confirm it by adding blocks on top of the block with that update to Y, the threshold signatures can switch to using signatures that are verified by that new Y. Since delegate turnover should be relatively slow, the delegates that weren't recently hired should still be more than 80% of the 101 active delegates and therefore should still be able to produce signatures corresponding to the old Y until they are ready to switch over to signatures corresponding to the new Y. So threshold signatures can be submitted every 10 seconds with the same level of security even as the delegates in the top 101 ranks change.

I also have a proposal that we should implement Schnorr signatures more generally throughout the blockchain as an alternative to ECDSA, and in particular for the issuers/managers of UIAs which makes the integration of side chain DACs easier, but I will leave that one alone for now.

Technical Support / 0.5.1 and 0.5.3 are unusable for me
« on: January 24, 2015, 03:33:38 am »
I have tried both building 0.5.3 myself and using the pre-compiled 0.5.1 binaries. The results are the same: the clients are totally unusable.

The CLI client of both versions still has the unbearable typing lag I mentioned last time. Running it with --server and using the web wallet doesn't help. It is very slow. The RPC POST requests have an average latency of around 2 minutes and some of them timeout with an error!

The Qt client isn't any better. The 0.5.3 Qt client is very slow and laggy and sometimes won't ever load pages like the market. In fact, it completely froze my Ubuntu system (hard reset required) when I tried to load the market in an attempt to cover a short! The performance of the 0.5.1 Qt client was actually pretty decent, but I quickly realized the only reason for that was that the blockchain wasn't syncing at all beyond the point left by the CLI client (a problem I mentioned last time with v0.4.27.2), meaning it's completely worthless.

So basically there is no properly functioning client for me at the moment. I only have 4 GB of RAM on the laptop so I didn't expect super snappy performance with the full client, but this is just ridiculous. It has to be some bug in the client right? Or is my computer somehow messed up?

If the price of BTS drops to 0.935 cents, many of the remaining grandfathered shorts would be forced to cover.

Just looking at the BitUSD/BTS market, there are currently approximately 74,734 BitUSD owed by grandfathered shorts. If the price of BTS hits $0.00935/BTS, then all but 5 of these short positions will have to have been covered one way or another (manually or margin called), leaving at most 1,658 BitUSD owed by the remaining 5 grandfathered short positions.

This should then help the market peg even more.

Sorry to those shorts if this does in fact happen.  :(

Edit: Fixed decimal places.

Technical Support / Blockchain doesn't download on v0.4.27.2 Qt client
« on: January 04, 2015, 04:03:26 am »
I upgraded to v0.4.27.2 from v0.4.27.1. The Qt client wasn't synchronizing the blockchain properly, so I used the CLI client. The CLI client did download and index the blockchain up to the present. I then switched over to the Qt client, and it appeared to be synchronized to the present and working fine for a moment. But then I realized that it wasn't updating the blockchain and just kept falling further and further behind the present.

I switched back to the v0.4.27.1 Qt client (which had its bugs but at least synchronized the blockchain) and surprisingly it too had the same blockchain synchronizing problems as the v0.4.27.2 Qt client.

I then tried deleting the chain directory and downloading the blockchain from scratch using the v0.4.27.2 CLI client (which took a long time by the way). I then loaded the v0.4.27.2 Qt client on this new chain folder and the same problem of it not updating the blockchain and falling behind persisted. However, this time when I switched back to the v0.4.27.1 Qt client, the client went back to properly synchronizing the blockchain again as it did before the "upgrade". So, I'm just sticking with v0.4.27.1 for now (despite the annoying lags described previously) because at least it functions.

I am not sure what other information I can provide to help track this bug down. Has anyone else experienced this problem?

I believe the two week fee to register as a delegate is meant to give stakeholders enough time to kick out a malicious delegate (by voting in enough other honest delegates) who votes themselves into the top 101 ranks and collects diluted pay without contributing anything to the ecosystem. Is that correct?

First, I don't really think that would be necessary if we separated delegates from workers so that delegates only got a standard pay, and if the approval percentage of the 101st delegate was high enough. The potential gain from voting oneself into the 101st position to collect a low/medium pay (just enough for server costs that they would not bother actually paying for) for a long enough time before people voted the delegate out would be negligible compared to the amount of stake necessary to vote the delegate into the 101st position in the first place. And if they are doing a good job to not get voted out, then they are a good delegate and there is no issue. The separate workers would have their own quorum requirements to get hired, and those could be set high enough that we shouldn't realistically have to worry about a rich attacker using that as a method to get "free money" for no work.

I also think that the two week fee wouldn't really be necessary even with the coupled delegate/worker system we have today, if we simply changed the pay mechanics so that delegates needed substantial approval (meaning more than the percentage of stake required to take over the network, i.e. the approval percentage of the 51st delegate) in order to receive a pay substantially larger than the basic salary necessary to just run a block signing node. I gave an example of such a delegate pay system in this thread.

But this proposal is not to argue for either of the above two systems but rather for a much simpler one. I think there should only be a small fixed fee to register as a delegate, which should not depend on the requested pay rate. However, the delegate would be required to produce their first 1200 blocks (which corresponds to a period of approximately 2 weeks) without getting any reward. Actually, I would prefer the number were instead 600 blocks (or 1 week), since I think 2 weeks is excessive. These blocks would have to be produced over a period in which the delegate remained an active delegate; if the delegate ever dipped below rank 101, the count would be reset. After the necessary number of blocks were produced, the probationary period would end and the delegate would be able to collect the requested reward for each valid block they produce from that point forward. An alternative system would be to pay the delegate a basic reward during the probationary period, e.g. 3% max pay, instead of no pay. Then, after the probationary period, they would get their full requested pay.

The main purpose of this proposal is to prevent delegate candidates from losing substantial funds if they never get enough approval to become an active delegate. This lowers the barriers to entry for potentially great workers who want to earn dilution pay as a delegate but don't want to risk it, since they are not sure they can get enough stakeholder support. However, because of the probationary period, it also means that anyone with a lot of stake cannot profit by voting their 100% pay delegate into the top 101 to collect some "free" money before the stakeholders realize what is happening and vote in enough honest delegates to kick out the bad one. The probationary period means that no dilution damage is done during the 2 week (or even 1 week) period that stakeholders need to react to such attacks. And ideally, experience with such attacks would teach stakeholders to keep the approval percentage of the 101st delegate high enough that such attacks are not even possible in the future.
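To make the mechanics concrete, here is a rough Python sketch of the probationary pay rule (the "3% basic pay" variant). The class name, constants, and overall structure are purely illustrative assumptions of mine, not actual BitShares code:

```python
# Hypothetical sketch of the proposed probationary pay rule.
# PROBATION_BLOCKS = 600 corresponds to the ~1 week variant;
# use 1200 for the ~2 week variant.

PROBATION_BLOCKS = 600
BASIC_PAY_PCT = 3  # basic salary during probation; 0 in the no-pay variant

class Delegate:
    def __init__(self, requested_pay_pct):
        self.requested_pay_pct = requested_pay_pct
        self.probation_count = 0

    def on_dropped_below_rank_101(self):
        # Falling out of the active set resets the probationary count.
        self.probation_count = 0

    def pay_pct_for_block(self):
        """Pay rate for a block this delegate just produced while active."""
        if self.probation_count < PROBATION_BLOCKS:
            self.probation_count += 1
            return BASIC_PAY_PCT
        return self.requested_pay_pct
```

Under this rule, a 100% pay delegate voted into the top 101 by an attacker earns only the basic pay for the first 600 blocks, which is roughly the window stakeholders have to vote them back out.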

BTW, on a different but somewhat related note, I don't think RDPOS should vote for a >3% pay delegate merely because they appear in the delegate slate of someone you gave a thumbs up to, unless you also explicitly gave a thumbs up to that >3% pay delegate. Of course, depending on a hard-coded fixed number like 3% is not a good idea, so a smarter implementation would be necessary. The easiest solution to implement would be to allow the user to adjust that number in the preferences of their local client.
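The slate filter could look something like the following sketch. The function name, data shapes, and default threshold are my own assumptions for illustration; the point is just that slate membership alone is not enough to auto-vote a high-pay delegate:

```python
# Hypothetical illustration of the proposed slate filter: a delegate pulled in
# from an approved account's slate is only voted for automatically if their
# pay rate is at or below the user's threshold, or if the user explicitly
# approved that delegate themselves.

def delegates_to_vote_for(slate_pay_pcts, explicit_approvals, max_auto_pay_pct=3):
    """slate_pay_pcts: {delegate_name: pay_pct} gathered from approved slates.
    explicit_approvals: set of delegates the user directly gave a thumbs up.
    max_auto_pay_pct: user-adjustable threshold from the client preferences."""
    votes = set()
    for delegate, pay_pct in slate_pay_pcts.items():
        if pay_pct <= max_auto_pay_pct or delegate in explicit_approvals:
            votes.add(delegate)
    return votes
```

Making `max_auto_pay_pct` a client preference, as suggested above, avoids baking the 3% figure into the protocol.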

Pages: [1] 2